NVIDIA Tutorial on Deep Learning, July 18, 2016 (6 hours)
LAAS-CNRS, room Europe, 7, avenue du Colonel Roche, Toulouse

Instructors:
- Introduction to GPUs and Neural Networks, with applications to the automobile industry and others;
- Demos;
- Hands-on labs using Qwiklab content.
Limited number of participants (40 attendees).

Details: Practical Deep Learning – An Introduction

Machine learning is among the most important developments in the history of computing. Deep learning is one of the fastest growing areas of machine learning and a hot topic in both academia and industry. This workshop covers the fundamentals of deep learning, with a focus on hands-on exercises leveraging some of the most popular deep learning frameworks.

Prerequisites: Programming experience and basic knowledge of calculus, linear algebra, and probability theory. Attendees are expected to bring their own laptops for the hands-on practical work.

Content:

Introduction to NVIDIA software for Deep Learning

Hands-on lab: Caffe framework
How to:
• Build and train a convolutional neural network for classifying images.
• Evaluate the classification performance under different training parameter configurations.
• Modify the network configuration to improve classification performance.
• Visualize the features that a trained network has learned.
• Classify new test images using a trained network.
• Train and classify with a subset of the ImageNet dataset.

Hands-on lab: Torch framework
• Introduction and history
• Torch core features
• Why use Torch?
• Torch community and support
• The Cheatsheet
• Lua(JIT) and LuaRocks
• Torch's universal data structure: Tensors
• Creating a LeNet network
• Criterion: defining a loss function
• Using data loaders to load 50,000 CIFAR-10 (3x32x32) images
• Load and normalize data
• Define the neural network
• Define the loss function
• Train the network on training data
• Test the network on test data

Hands-on lab: Theano framework
• Theano integration with the Python ecosystem
• Data management options in Theano
• DNN definition and training
• Ease of extensibility of DNN functionality, e.g. defining new activation and loss functions

Hands-on lab: TensorFlow framework
• Computation graph basics
• Linear regression
• Sequence autoencoder
• Multi-layer convolutional net
• Multi-GPU use
• TensorBoard visualization

Hands-on lab: Introduction to Recurrent Neural Networks (RNNs) (Chainer)
• What are RNNs?
• A simple example: binary addition
• RNN training with stochastic gradient descent (SGD)
• Backpropagation through time (BPTT)
• Challenges such as "vanishing" and "exploding" gradients
• Backprop in RNNs
• Long Short-Term Memory (LSTM)
• RNNs and text generation using Chainer
• Exercises with Gated Recurrent Units and perplexity
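The Theano lab mentions defining new activation and loss functions. The sketch below is not part of the lab materials; it is a framework-free illustration in plain Python of the two mathematical objects involved: the ReLU activation and the binary cross-entropy loss. In Theano these would be written as symbolic expressions over tensors, but the underlying formulas are the same.

```python
import math

def relu(x):
    """Rectified linear unit, a common activation: max(0, x)."""
    return max(0.0, x)

def binary_cross_entropy(p, y):
    """Binary cross-entropy loss for a predicted probability p and a label y in {0, 1}."""
    eps = 1e-12  # guard against log(0)
    return -(y * math.log(p + eps) + (1 - y) * math.log(1.0 - p + eps))
```

For example, a confident correct prediction gives a small loss (`binary_cross_entropy(0.99, 1)` is close to 0), while a maximally uncertain one gives a loss of about ln 2 ≈ 0.693.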
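The RNN lab lists "vanishing" and "exploding" gradients among the challenges of backpropagation through time. As a toy illustration (not from the lab itself), consider a linear scalar recurrence h_t = w · h_{t-1}: the gradient of h_T with respect to h_0 is w^T, which shrinks toward zero when |w| < 1 and blows up when |w| > 1. The function below just computes that product step by step.

```python
def gradient_magnitude(w, steps):
    """Magnitude of d(h_T)/d(h_0) for the linear recurrence h_t = w * h_{t-1}.

    Over `steps` timesteps the gradient is w**steps: it vanishes for |w| < 1
    and explodes for |w| > 1 -- the core difficulty BPTT faces, and one
    motivation for gated architectures such as LSTM and GRU.
    """
    g = 1.0
    for _ in range(steps):
        g *= w
    return abs(g)
```

With w = 0.9 the gradient after 100 steps is on the order of 1e-5; with w = 1.1 it exceeds 1e4, showing how quickly the two regimes diverge.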