Table of contents
1. Introduction: Neural network hyperparameters
2. TensorFlow with Spark
3. Distributed TensorFlow
4. Conclusion
Introduction: Neural network hyperparameters
• Neural networks are complex models: they have many hyperparameters
• It is hard to define a neural network structure/graph
• Several structures need to be tested, and the execution time can be substantial
Introduction: Examples of hyperparameters
- Number of layers
- Number of neurons in each layer
- Activation function
- Learning rate
- Number of iterations
- Dropout probability

Only for convolutional neural networks:
- Number of filters
- Filter dimensionality
- Convolution stride
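Testing several structures means enumerating combinations of the hyperparameters above. A minimal sketch of building such a grid (the search space values are hypothetical):

```python
import itertools

# Hypothetical search space over a few of the hyperparameters listed above.
search_space = {
    "n_layers": [1, 2, 3],
    "n_neurons": [32, 64],
    "learning_rate": [0.01, 0.001],
}

# Cartesian product: one dict per combination of the values above.
keys = sorted(search_space)
grid = [dict(zip(keys, values))
        for values in itertools.product(*(search_space[k] for k in keys))]

print(len(grid))  # 3 * 2 * 2 = 12 combinations
```

Each dict in `grid` would then parameterize one candidate network, which is why execution time grows quickly with the number of hyperparameters.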
TensorFlow with Spark: efficiency
Input: Iris dataset from scikit-learn. This benchmark was run on 45 neural networks per execution.
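The pattern behind this benchmark is to fan the 45 candidate networks out as independent tasks. In Spark this would be `sc.parallelize(configs).map(train_and_score).collect()`; as a local, library-free sketch of the same fan-out (the `train_and_score` stub and its dummy score are placeholders, not the deck's actual training code):

```python
from concurrent.futures import ThreadPoolExecutor

def train_and_score(config):
    """Stand-in for training one network on the Iris data.

    A real Spark task would build a TensorFlow graph from `config`,
    train it, and return its test accuracy; here we return a dummy score.
    """
    return {"config": config, "accuracy": 0.0}

# 45 hypothetical network configurations, as in the benchmark above.
configs = [{"n_neurons": n + 1} for n in range(45)]

# Fan out one task per configuration, then collect the results.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(train_and_score, configs))

print(len(results))  # one result per configuration
```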
Distributed TensorFlow: What is it?
This is a new option in TensorFlow 0.8.0; it allows you to:
• Run a TensorFlow graph on a cluster
• Split the graph into several jobs
• Jobs can contain several tasks
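The jobs-and-tasks layout is described as a mapping from job names to task addresses, which is what gets passed to `tf.train.ClusterSpec`. A sketch of such a layout (the hostnames and ports are hypothetical):

```python
# Hypothetical cluster layout: two jobs ("ps" and "worker"),
# where the "worker" job contains two tasks. This dict is the
# argument you would pass to tf.train.ClusterSpec(...).
cluster = {
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222",
               "worker1.example.com:2223"],
}

# Each task is identified by (job name, task index), e.g. ("worker", 1).
n_tasks = sum(len(tasks) for tasks in cluster.values())
print(n_tasks)  # 3 tasks across 2 jobs
```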
Distributed TensorFlow: How to use it?
• We have to launch the program from the command line with the correct arguments
• To read those arguments, we define some TensorFlow flags in the code
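TensorFlow's flag definitions (`tf.app.flags.DEFINE_string`, `DEFINE_integer`, ...) play the same role as standard command-line parsing. A sketch of the pattern using stdlib `argparse` (the flag names mirror common distributed-TensorFlow examples but are illustrative here):

```python
import argparse

# Hypothetical arguments mirroring typical tf.app.flags definitions.
parser = argparse.ArgumentParser()
parser.add_argument("--job_name", type=str, default="worker",
                    help='Either "ps" or "worker"')
parser.add_argument("--task_index", type=int, default=0,
                    help="Index of this task within its job")

# e.g. the program is launched as: python trainer.py --job_name=ps --task_index=0
args = parser.parse_args(["--job_name=ps", "--task_index=0"])
print(args.job_name, args.task_index)  # ps 0
```

Each process in the cluster is started with its own `--job_name`/`--task_index` pair, so the same script can play every role.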
Conclusion
• Importance of including distributed programming
• Spark: other libraries, such as Theano or Caffe, also run on Spark
• Distributed TensorFlow: libraries are being developed to replicate models