Dr HAMADI CHAREF Brahim
Non-Volatile Memory (NVM)
Data Storage Institute (DSI), A*STAR
Recent developments in Deep Learning
May 30, 2016
Deep Learning – Convolutional NNets
Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
Song Han, Huizi Mao, William J. Dally
International Conference on Learning Representations (ICLR 2016)
http://arxiv.org/abs/1510.00149

Learning both Weights and Connections for Efficient Neural Networks
Song Han, Jeff Pool, John Tran, William J. Dally
Neural Information Processing Systems (NIPS 2015)
http://arxiv.org/abs/1506.02626

EIE: Efficient Inference Engine on Compressed Deep Neural Network
Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, William J. Dally
International Symposium on Computer Architecture (ISCA 2016)
http://arxiv.org/abs/1602.01528

SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, Kurt Keutzer
Technical Report, 2016
http://arxiv.org/abs/1602.07360
LeNet. The first successful applications of Convolutional Networks were developed by Yann LeCun in the 1990s. Of these, the best known is the LeNet architecture, which was used to read zip codes, digits, etc.

AlexNet. The first work that popularized Convolutional Networks in Computer Vision was AlexNet, developed by Alex Krizhevsky, Ilya Sutskever and Geoff Hinton. AlexNet was submitted to the ImageNet ILSVRC challenge in 2012 and significantly outperformed the runner-up (top-5 error of 16% compared to the runner-up's 26%). The network had a very similar architecture to LeNet, but was deeper and bigger, and featured Convolutional Layers stacked on top of each other (previously it was common to have only a single CONV layer, always immediately followed by a POOL layer).

VGGNet. The runner-up in ILSVRC 2014 was the network from Karen Simonyan and Andrew Zisserman that became known as VGGNet. Its main contribution was showing that network depth is a critical component of good performance. Their final best network contains 16 CONV/FC layers and, appealingly, features an extremely homogeneous architecture that performs only 3x3 convolutions and 2x2 pooling from beginning to end. Their pretrained model is available for plug-and-play use in Caffe. A downside of VGGNet is that it is more expensive to evaluate and uses considerably more memory and parameters (140M). Most of these parameters are in the first fully connected layer, and it has since been found that these FC layers can be removed with no performance penalty, significantly reducing the number of necessary parameters.
Convolutional Neural Networks (CNNs / ConvNets)
http://cs231n.github.io/convolutional-networks/
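To make the 140M parameter claim concrete, here is a quick weight count in Python. This is a sketch: the layer sizes are the commonly published VGG-16 "configuration D" from the Simonyan & Zisserman paper, not figures taken from this deck.

```python
# Rough weight count for VGG-16 ("configuration D"), biases ignored.
# All conv filters are 3x3; entries are (in_channels, out_channels).
convs = [(3, 64), (64, 64),
         (64, 128), (128, 128),
         (128, 256), (256, 256), (256, 256),
         (256, 512), (512, 512), (512, 512),
         (512, 512), (512, 512), (512, 512)]
conv_w = sum(3 * 3 * cin * cout for cin, cout in convs)

# After five 2x2 poolings a 224x224 input becomes 7x7x512 = 25088 features.
fcs = [(7 * 7 * 512, 4096), (4096, 4096), (4096, 1000)]
fc_w = sum(cin * cout for cin, cout in fcs)

print(f"conv weights: {conv_w / 1e6:.1f}M")                      # ~14.7M
print(f"fc weights:   {fc_w / 1e6:.1f}M")                        # ~123.6M
print(f"first FC layer alone: {fcs[0][0] * fcs[0][1] / 1e6:.1f}M")  # ~102.8M
```

The total comes to roughly 138M weights, which is where the ~140M figure comes from, and the first FC layer alone accounts for about 103M of them, which is why dropping the FC layers saves so much.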
Deep Learning – Paper 1
1 INTRODUCTION
2 NETWORK PRUNING
3 TRAINED QUANTIZATION AND WEIGHT SHARING (see the sketch after this outline)
3.1 WEIGHT SHARING
3.2 INITIALIZATION OF SHARED WEIGHTS
3.3 FEED-FORWARD AND BACK-PROPAGATION
4 HUFFMAN CODING
5 EXPERIMENTS
5.1 LENET-300-100 AND LENET-5 ON MNIST
5.2 ALEXNET ON IMAGENET
5.3 VGG-16 ON IMAGENET
6 DISCUSSIONS
6.1 PRUNING AND QUANTIZATION WORKING TOGETHER
6.2 CENTROID INITIALIZATION
6.3 SPEEDUP AND ENERGY EFFICIENCY
6.4 RATIO OF WEIGHTS, INDEX AND CODEBOOK
7 RELATED WORK
8 FUTURE WORK
9 CONCLUSION
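As a concrete illustration of the trained quantization / weight sharing step (Sections 3 and 3.1 above), here is a minimal NumPy sketch: cluster one layer's weights into 2^bits shared values and store only a small index per weight plus a codebook. This is an illustrative sketch of the idea, not the authors' code; the paper's fine-tuning of centroids with summed gradients (Section 3.3) is omitted.

```python
import numpy as np

def share_weights(weights, bits=2, iters=20):
    """Cluster a layer's weights into 2**bits shared values (1D k-means)
    and map each weight to its nearest centroid. Weight-sharing sketch
    only; centroid retraining during backprop is left out."""
    k = 2 ** bits
    w = weights.ravel()
    # Linear initialization over [min, max], which the paper reports
    # works best (Section 6.2, centroid initialization).
    centroids = np.linspace(w.min(), w.max(), k)
    for _ in range(iters):
        assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = w[assign == j].mean()
    # Compressed form: per-weight index (bits wide) + k-entry codebook.
    return assign.reshape(weights.shape), centroids

layer = np.random.randn(4, 4).astype(np.float32)
idx, codebook = share_weights(layer, bits=2)
reconstructed = codebook[idx]  # what the compressed layer decodes to
```

With 2-bit indices, 16 float32 weights shrink to 16 two-bit indices plus four shared values, which is the storage saving the paper then compounds with pruning and Huffman coding.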
THE MNIST DATABASE of handwritten digits
http://yann.lecun.com/exdb/mnist/
Visual Geometry Group (University of Oxford)
http://www.robots.ox.ac.uk/~vgg/research/very_deep/
Alex Krizhevsky https://www.cs.toronto.edu/~kriz/
The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
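For reference, a minimal loader for the "python version" of the dataset distributed on the page above. This is a sketch assuming the pickled batch format documented there: each batch file holds 10000 rows of 3072 uint8 values (the R, G, B planes of a 32x32 image) plus their labels.

```python
import pickle
import numpy as np

def load_batch(path):
    # Each batch file is a pickled dict with b'data' (10000 x 3072
    # uint8, R/G/B planes per row) and b'labels' (10000 ints, 0-9).
    with open(path, "rb") as f:
        d = pickle.load(f, encoding="bytes")
    images = d[b"data"].reshape(-1, 3, 32, 32)  # N x C x H x W
    labels = np.array(d[b"labels"])
    return images, labels

# Five training batches of 10000 images plus one test batch of 10000
# give the 50000/10000 split described above.
```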
Deep Learning – Paper 2
NIPS2015 Review
http://media.nips.cc/nipsbooks/nipspapers/paper_files/nips28/reviews/708.html
[7] Mark Horowitz. Energy table for 45nm process, Stanford VLSI wiki.
Mark Horowitz is Professor of Electrical Engineering and Computer Science at Stanford; his research spans VLSI, hardware, graphics and imaging, and applying engineering to biology.
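This energy table is what motivates Paper 2: off-chip memory access, not arithmetic, dominates the energy cost of running a large network. A back-of-envelope check using the numbers Han et al. quote from Horowitz in the NIPS 2015 paper (~640 pJ per 32-bit fetch from off-chip DRAM vs ~5 pJ from on-chip SRAM; the figures come from the paper, not from this slide):

```python
# Energy to fetch every weight once per frame, Horowitz 45 nm numbers
# as cited in Han et al. (NIPS 2015).
DRAM_PJ, SRAM_PJ = 640, 5   # picojoules per 32-bit access
connections = 1e9           # a 1-billion-connection network
fps = 20                    # frames per second

dram_watts = connections * fps * DRAM_PJ * 1e-12
sram_watts = connections * fps * SRAM_PJ * 1e-12
print(f"DRAM-resident weights: {dram_watts:.1f} W")  # 12.8 W
print(f"SRAM-resident weights: {sram_watts:.1f} W")  # 0.1 W
```

The 12.8 W figure matches the paper's own example, and is far beyond a mobile power budget; pruning the network until the weights fit in on-chip SRAM is the point of the method.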
Deep Learning – Paper 3
Deep Learning – Paper 4
1. Introduction and Motivation
More efficient distributed training
Less overhead when exporting new models to clients
Feasible FPGA and embedded deployment
2. Related Work
2.1. Model Compression
2.2. CNN Microarchitecture
2.3. CNN Macroarchitecture
2.4. Neural Network Design Space Exploration
3. SqueezeNet: preserving accuracy with few parameters
3.1. Architectural Design Strategies
Strategy 1. Replace 3x3 filters with 1x1 filters
Strategy 2. Decrease the number of input channels to 3x3 filters
Strategy 3. Downsample late in the network so that convolution layers have large activation maps
3.2. The Fire Module (see the sketch after this outline)
3.3. The SqueezeNet architecture
3.3.1 Other SqueezeNet details
4. Evaluation of SqueezeNet
5. CNN Microarchitecture Design Space Exploration
5.1. CNN Microarchitecture metaparameters
5.2. Squeeze Ratio
5.3. Trading off 1x1 and 3x3 filters
6. CNN Macroarchitecture Design Space Exploration
7. Model Compression Design Space Exploration
7.1. Sensitivity Analysis: Where to Prune or Add parameters
Sensitivity analysis applied to model compression
Sensitivity analysis applied to increasing accuracy
7.2. Improving Accuracy by Densifying Sparse Models
8. Conclusions
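To see how Strategies 1 and 2 combine in the Fire module (Section 3.2 above), here is a back-of-envelope weight count in Python. The fire2 sizes (96 input channels, s1x1=16, e1x1=64, e3x3=64) are the paper's; the helper function itself is just an illustration, not the authors' code.

```python
def fire_params(c_in, s1x1, e1x1, e3x3):
    """Weights in one Fire module: a 1x1 'squeeze' layer feeding
    concatenated 1x1 and 3x3 'expand' layers (biases ignored).
    Strategy 1 shows up as the cheap 1x1 expand filters; Strategy 2
    as the squeeze layer shrinking the 3x3 filters' input channels."""
    return c_in * s1x1 + s1x1 * e1x1 + 3 * 3 * s1x1 * e3x3

# fire2 from the paper: 96 in-channels -> 128 out-channels (64 + 64).
print(fire_params(96, 16, 64, 64))  # 11776 weights
# A plain 3x3 conv layer with the same 96 -> 128 mapping, for contrast:
print(3 * 3 * 96 * 128)             # 110592 weights
```

Roughly a 9x reduction for that one layer, before any of the Deep Compression techniques from Paper 1 are applied on top.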