Page 1: Deep Learning and HPC

Deep Learning and HPC

Adam Coates

Visiting Scholar at IU Informatics
Post-doc at Stanford CS

Page 2: Deep Learning and HPC

What do we want computers to do with our data?

Images/video → Label: “Motorcycle”, suggest tags, image search, …

Audio → Speech recognition, music classification, speaker identification, …

Text → Web search, anti-spam, machine translation, …

Page 3: Deep Learning and HPC

Computer vision is hard!

[Figure: many motorcycle images in different poses, lighting, and backgrounds, each labeled “Motorcycle”.]

Page 4: Deep Learning and HPC

What do we want computers to do with our data?

Images/video → Label: “Motorcycle”, suggest tags, image search, …

Audio → Speech recognition, music classification, speaker identification, …

Text → Web search, anti-spam, machine translation, …

Machine learning performs well on many of these problems, but is a lot of work.

What is it about machine learning that makes it so hard to use?

Page 5: Deep Learning and HPC

Machine learning for image classification

[Figure: an input image passes through a learning algorithm, which outputs the label “Motorcycle”.]

Page 6: Deep Learning and HPC

Why is this hard?

You see this:

[Figure: a photo of a motorcycle.]

But the camera sees this:

[Figure: the same image as a grid of raw pixel intensity values.]

Page 7: Deep Learning and HPC

Machine learning and feature representations

Input: the raw image, fed directly to the learning algorithm.

[Figure: motorbike and “non”-motorbike training examples plotted by raw pixel values, with axes “pixel 1” and “pixel 2”.]

Page 10: Deep Learning and HPC

What we want

Input: motorbike and “non”-motorbike images.

Feature representation, e.g.: does it have handlebars? Wheels?

Learning algorithm

[Figure: raw image → features; the same examples plotted by feature values (“Handlebars”, “Wheels”) instead of raw pixel values.]

Page 11: Deep Learning and HPC

How is computer perception done?

Images/video: Image → Vision features → Detection

Audio: Audio → Audio features → Speaker ID

Text: Text → Text features → Text classification, machine translation, information retrieval, …

Coming up with features is difficult, time-consuming, and requires expert knowledge. When working on applications of learning, we spend a lot of time tuning the features.

Page 12: Deep Learning and HPC

Deep Learning

• Find algorithms that can learn representations/features from data.
  – Deep neural networks.
  – “Unsupervised feature learning.”

• Learn representations without knowing task.

Page 13: Deep Learning and HPC

Deep Learning

• Build multi-stage pipelines from simple pieces.
  – Classic system: a deep neural net.
  – Generally: compositions of differentiable functions (a minimal sketch follows below).

[Figure: an image flows through several network stages and comes out labeled “Motorcycle”.]

Optimize the weights inside the network to give correct answers on the training data.
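
To make “compositions of differentiable functions” concrete, here is a minimal sketch of a two-stage network (affine map, ReLU, affine map) in plain C++. It is an illustration only, not code from the Stanford system; all sizes and values are invented.

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Toy "deep" network: y = W2 * relu(W1 * x + b1) + b2.
    // Each stage is a simple differentiable function; the whole
    // network is just their composition, and the weights of that
    // composition are what get optimized.

    using Vec = std::vector<float>;
    using Mat = std::vector<Vec>;   // row-major: Mat[i][j]

    Vec affine(const Mat& W, const Vec& b, const Vec& x) {
        Vec y(b);                                   // start from the bias
        for (std::size_t i = 0; i < W.size(); ++i)
            for (std::size_t j = 0; j < x.size(); ++j)
                y[i] += W[i][j] * x[j];
        return y;
    }

    Vec relu(Vec v) {
        for (float& z : v) z = std::max(0.0f, z);   // elementwise nonlinearity
        return v;
    }

    int main() {
        // Invented sizes: 3 inputs -> 4 hidden units -> 2 outputs.
        Mat W1(4, Vec(3, 0.1f)); Vec b1(4, 0.0f);
        Mat W2(2, Vec(4, 0.1f)); Vec b2(2, 0.0f);

        Vec x = {1.0f, 2.0f, 3.0f};                 // stand-in for an image
        Vec y = affine(W2, b2, relu(affine(W1, b1, x)));
        std::printf("output: %f %f\n", y[0], y[1]);
        return 0;
    }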

Page 14: Deep Learning and HPC

Basic algorithmic components

• In a loop over the entire training set:

  1. Evaluate the deep network.
     • Usually process a batch of training examples (e.g., 100) at once.

  2. Compute the gradient of the loss function w.r.t. the parameters.
     • Sum gradients over the batch of examples.

  3. Update the trainable parameters using the gradient.

(A minimal sketch of this loop is shown below.)
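
Here is that loop for a toy one-parameter linear model with squared loss, so the three steps stay visible without the machinery of a real deep network. The data, batch size, and learning rate are all invented for illustration.

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Toy model: y = w * x, loss = 0.5 * (y - t)^2 per example.
    // The loop mirrors the three steps above: evaluate the model on a
    // batch, sum gradients over the batch, then update the parameter.

    int main() {
        // Invented training set: targets are exactly 2 * x.
        std::vector<float> xs = {1, 2, 3, 4, 5, 6, 7, 8};
        std::vector<float> ts = {2, 4, 6, 8, 10, 12, 14, 16};

        float w = 0.0f;                  // trainable parameter
        const float lr = 0.01f;          // learning rate
        const std::size_t batch = 4;

        for (int epoch = 0; epoch < 100; ++epoch) {               // loop over training set
            for (std::size_t start = 0; start < xs.size(); start += batch) {
                float grad = 0.0f;
                for (std::size_t i = start; i < start + batch; ++i) {
                    float y = w * xs[i];                           // 1. evaluate network
                    grad += (y - ts[i]) * xs[i];                   // 2. accumulate gradient
                }
                w -= lr * (grad / static_cast<float>(batch));      // 3. update parameters
            }
        }
        std::printf("learned w = %f (target 2.0)\n", w);
        return 0;
    }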

Page 15: Deep Learning and HPC

Scaling Up Deep Learning at Stanford

• Most DL networks are built on a few primitives.
  – Mostly large dense matrix/vector operations.
  – A few “block” matrices for widely-used cases.
  – Communication hidden in distributed arrays.

• Most operations are hardware-friendly.
  – Not far from sgemm throughput (see the sketch after this list).
  – Relatively low communication / IO needs.

• But it is hard to avoid doing many iterations.
  – Have to focus on making each loop very fast.
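
For a sense of what those dense primitives look like in practice, below is a self-contained cuBLAS SGEMM call (C = A*B). This is a generic example, not code from the Stanford infrastructure; the matrix size is arbitrary.

    #include <cublas_v2.h>
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1024;   // arbitrary square matrices
        std::vector<float> hA(n * n, 1.0f), hB(n * n, 1.0f), hC(n * n, 0.0f);

        float *dA, *dB, *dC;
        cudaMalloc(&dA, n * n * sizeof(float));
        cudaMalloc(&dB, n * n * sizeof(float));
        cudaMalloc(&dC, n * n * sizeof(float));
        cudaMemcpy(dA, hA.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(dB, hB.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);

        cublasHandle_t handle;
        cublasCreate(&handle);

        const float alpha = 1.0f, beta = 0.0f;
        // C = alpha * A * B + beta * C (column-major, as cuBLAS expects).
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);
        cudaDeviceSynchronize();

        cudaMemcpy(hC.data(), dC, n * n * sizeof(float), cudaMemcpyDeviceToHost);
        std::printf("C[0] = %f (expect %d)\n", hC[0], n);

        cublasDestroy(handle);
        cudaFree(dA); cudaFree(dB); cudaFree(dC);
        return 0;
    }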

Page 16: Deep Learning and HPC

Scaling Up Deep Learning at Stanford

• In-house MPI+CUDA infrastructure (a toy MPI gradient-summation sketch appears at the end of this slide).
  – Up to 11.2B-parameter networks.
  – Typical experiment: ~14M images (ImageNet).

[Figure: factor speedup vs. number of GPUs (1 to 64) for networks of 185M, 680M, 1.9B, 3.0B, 6.9B, and 11.2B parameters, compared against linear scaling.]

[Coates et al., ICML 2013]
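
For reference, one common way to combine per-rank gradients in an MPI code is a single allreduce, sketched below. This is a generic data-parallel pattern shown for illustration, not necessarily the communication scheme used in the system described here; the buffer size and values are invented.

    #include <mpi.h>
    #include <cstdio>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        // Pretend each rank computed gradients for its slice of the batch.
        const std::size_t nparams = 1 << 20;               // invented parameter count
        std::vector<float> grad(nparams, static_cast<float>(rank + 1));

        // Sum gradients across all ranks, in place on every rank.
        MPI_Allreduce(MPI_IN_PLACE, grad.data(), static_cast<int>(nparams),
                      MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("grad[0] after allreduce = %f (sum over %d ranks)\n",
                        grad[0], nranks);

        MPI_Finalize();
        return 0;
    }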

Page 17: Deep Learning and HPC

Scaling Up Deep Learning at Stanford

• Duplicated “Google Brain” with 3 machines.
  – Compared to 1000+ machines.
  – Unsupervised learning from 10M YouTube frames.

• Largest artificial neural nets ever trained.
  – 6.5x larger than the previous system.

… but what should we do with it!?
Surprisingly hard to find a problem big enough that such models matter!

[Coates et al., ICML 2013]

Page 18: Deep Learning and HPC

Applications

• Building universal representations.
  – “One neural net to rule them all.”

[Figure: object recognition, localization, tagging, depth estimation, … all branching from a shared representation for many tasks.]

[E.g., Collobert et al., 2011]

Page 19: Deep Learning and HPC

Applications

• Autonomous Driving

1 year × 1 Hz = ~30M frames (a year is roughly 31.5 million seconds).
[Actually have to drive for 1 year!]

Can we train from a few hundred 1080p frames per second?

Page 20: Deep Learning and HPC

Applications: why these?

• High impact.
  – Universal representations: many applications, each with diffuse value.
  – Driving: a single application with high value.

• Train once, deploy everywhere.
  – Training is hard, expensive.
  – Deploying is easy, cheap.
  – A supercomputer can generate an artifact that gets re-used by others.

Page 21: Deep Learning and HPC

Things that work

• Find common cases; tightly optimize.
  – Surprisingly few core pieces: e.g., 10.

• Distributed arrays.
  – Massive time-saver; easy to think about.
  – Easy to save and restore from Lustre.
  – Load shards and sanity-check them in Matlab.

• High-level language bindings.
  – Low-level code in C++/CUDA (JIT).

Page 22: Deep Learning and HPC

Challenges

• Experiment turn-around time is still long.
  – Maybe 3-5 experiments running at once.
  – Weeks for big models / big datasets.

• Productivity is still much lower than, e.g., Matlab.
  – Lack of strong tools at every level except the lowest.

• Many DL hackers are not systems hackers.

• Lots of hard-won lessons are trapped in our group.

Page 23: Deep Learning and HPC

Laundry list from Stanford infrastructure

• Job control and scripting is painful.
  – Zombies.
  – PBS/Torque mostly works.

• JIT compilation.
  – JIT compile C/C++ code.
    • Flexible enough to do many things.
    • Easier to use the CUDA runtime, templatizing, etc.
      – Avoids the Driver API, which is much less convenient.
    • Easier to link with high-level languages.
  – Needs to be thread-savvy.
    • Caching of compiled modules (see the sketch after this list).
    • Avoiding deadlocks or locking problems in the cache(s).
  – Ideally invisible to users.
    • But the first use of a kernel is really slow.
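
The Stanford code is not public, so as an illustration of “caching of compiled modules” in a thread-savvy way, here is one form such a cache might take. The Module type and compileModule() stub are hypothetical stand-ins for whatever the real JIT toolchain returns.

    #include <cstdio>
    #include <mutex>
    #include <string>
    #include <unordered_map>

    // Hypothetical handle for a JIT-compiled kernel module.
    struct Module { std::string source; };

    // Stub compile step: a real implementation would invoke the compiler
    // toolchain here. It exists only so the sketch is runnable.
    Module* compileModule(const std::string& source) {
        std::printf("compiling (slow; happens once per distinct source)\n");
        return new Module{source};
    }

    // Thread-safe memoizing cache: each distinct source string is compiled
    // once; later requests return the cached module. Real systems avoid
    // holding the lock during compilation; this version keeps it simple.
    class ModuleCache {
    public:
        Module* get(const std::string& source) {
            std::lock_guard<std::mutex> lock(mutex_);
            auto it = cache_.find(source);
            if (it != cache_.end()) return it->second;   // fast path: already compiled
            Module* m = compileModule(source);           // slow path: first use
            cache_.emplace(source, m);
            return m;
        }
    private:
        std::mutex mutex_;
        std::unordered_map<std::string, Module*> cache_;
    };

    int main() {
        ModuleCache cache;
        cache.get("kernel_a");   // compiles
        cache.get("kernel_a");   // cache hit: no recompilation
        return 0;
    }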

• Debugging.
  – Unclear what to do here. Support for common tools? NVTX, VampirTrace, …?

• Distributed arrays.
  – The Stanford implementation is rough; should have pursued a more standard approach.
  – MATLAB’s co-distributed arrays; ScaLapack-style arrays:
    • A multi-dimensional array with a “distributor” that maps indices to ranks (see the sketch after this list).
    • Support to re-distribute an array.
    • Support to save/load arrays even when the process grid changes.
    • Distribution-aware implementations of most functionality.
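
A minimal sketch of the “distributor” idea, assuming a simple 1-D block distribution (ScaLapack-style arrays typically use block-cyclic distributions over 2-D process grids). All names and sizes are invented.

    #include <cstddef>
    #include <cstdio>

    // Maps a global 1-D index to the rank that owns it and the index
    // within that rank's local shard.
    struct BlockDistributor {
        std::size_t global_size;
        int nranks;

        // Elements per rank; the last rank may own fewer.
        std::size_t block() const {
            return (global_size + nranks - 1) / nranks;
        }
        int owner(std::size_t g) const { return static_cast<int>(g / block()); }
        std::size_t local_index(std::size_t g) const { return g % block(); }
    };

    int main() {
        BlockDistributor d{1000, 8};     // invented array size and rank count
        std::size_t g = 777;
        std::printf("global %zu -> rank %d, local %zu\n",
                    g, d.owner(g), d.local_index(g));
        return 0;
    }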

• Execution structure.
  – Imperative programming is just easier (esp. with students + scientists).
    • DAGs, etc. are static and difficult to alter. Works OK for us, but many headaches.
    • CUDA streams+events semantics is really nice (see the sketch after this list).
      – Solves the same problem: hide massive parallelism from the caller.
      – But allows arbitrary scheduling on the fly; behavior as viewed by the host is easy to understand.

  – If you want custom functionality, you just have to write the parallel code.
    • In CUDA, you have to write the kernel.
    • For ScaLapack, you had to write code on top of BLACS.
  – The single-rank case should look like the 100-rank case.
    • Students can prototype single-rank; easier to think about.
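
A small, self-contained example of the CUDA streams+events semantics mentioned above: two streams run independently, and an event makes the second stream wait for work in the first without blocking the host. Kernel names and sizes are invented.

    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void producer(float* x) { x[0] = 42.0f; }
    __global__ void consumer(float* x) { x[0] += 1.0f; }

    int main() {
        float* d;
        cudaMalloc(&d, sizeof(float));

        cudaStream_t s1, s2;
        cudaEvent_t done;
        cudaStreamCreate(&s1);
        cudaStreamCreate(&s2);
        cudaEventCreate(&done);

        // Work on s1 and s2 may overlap; the event makes the consumer on s2
        // wait until the producer on s1 has finished, without blocking the host.
        producer<<<1, 1, 0, s1>>>(d);
        cudaEventRecord(done, s1);
        cudaStreamWaitEvent(s2, done, 0);
        consumer<<<1, 1, 0, s2>>>(d);

        float h = 0.0f;
        cudaMemcpyAsync(&h, d, sizeof(float), cudaMemcpyDeviceToHost, s2);
        cudaStreamSynchronize(s2);
        std::printf("result = %f (expect 43)\n", h);

        cudaEventDestroy(done);
        cudaStreamDestroy(s1); cudaStreamDestroy(s2);
        cudaFree(d);
        return 0;
    }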

• IO tools.
  – We spend a lot of time writing file loaders.
    • Application-specific, but lots of boilerplate.
    • Many common cases in ML, e.g., a list of samples where each sample = video, image, string, vector.
  – Currently difficult to handle distributed saving/loading of large arrays of data.