Robotics: Current topics
Sabbir Ahmmed
Robotics and Biology Laboratory
Promise, frustration and pessimism
Image source: https://www.slideshare.net/hyderabadscalability/geeknight-artificial-intelligence-and-machine-learning
Deep Learning - from Bust to Boom
► Until recently, neural networks were all but shunned
► General AI vs. narrow AI
► Key factors that contributed to the deep learning boom:
• developments within the neural network / ML domain
• developments around it
Image credit: https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/
Outline – key factors
► Better NN/ML algorithms/techniques
► Big data
► Large, high quality labeled datasets
► Massive Parallelization/GPU
► Programmability & Accessibility
► Industry driven research
Better Algorithms
► New ANN architectures – essentially new configurations of neural networks
• CNNs
• DNNs
• DBNs
• RNNs
• LSTMs
• GANs
• Autoencoders
► Activation functions – e.g.
• Rectifiers
► Regularization techniques
• Dropout
Better Algorithms - ANN
Bengio, Y. (2009). "Learning Deep Architectures for AI"
► Autoencoder
https://en.wikipedia.org/wiki/Autoencoder
Better Algorithms - ANN
► Autoencoder
(figure: PCA vs. autoencoder reconstructions)
Image source: https://www.cs.toronto.edu/~hinton/science.pdf
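The bottleneck idea behind the PCA comparison above can be shown in a few lines. Below is a minimal, hypothetical NumPy sketch (not from the slides): a single-hidden-layer linear autoencoder trained by gradient descent to reconstruct its input; with a linear code and squared error it learns essentially the subspace PCA would find.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 500 samples in 20 dims that mostly live on a 3-dim subspace plus noise.
Z = rng.normal(size=(500, 3))
M = rng.normal(size=(3, 20))
X = Z @ M + 0.05 * rng.normal(size=(500, 20))
X -= X.mean(axis=0)                     # center the data, as PCA would

d_in, d_code = X.shape[1], 3            # compress 20 dims down to a 3-dim code
W_enc = rng.normal(scale=0.1, size=(d_in, d_code))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(d_code, d_in))   # decoder weights

lr = 0.01
for step in range(2000):
    code = X @ W_enc                    # encode: project into the bottleneck
    X_hat = code @ W_dec                # decode: reconstruct the input
    err = X_hat - X
    grad_dec = code.T @ err / len(X)    # gradients of the mean squared reconstruction error
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("reconstruction MSE:", np.mean((X - (X @ W_enc) @ W_dec) ** 2))
```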
Better Algorithms - activation function
► Rectifier
• The rectifier is, as of 2015, the most popular activation function for deep neural networks
• It was first introduced to a dynamical network by Hahnloser et al. in a paper published in 2000
Xavier Glorot, Antoine Bordes and Yoshua Bengio (2011). Deep sparse rectifier neural networks
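As a quick illustrative sketch (not from the slides): the rectifier is simply f(x) = max(0, x); unlike sigmoid or tanh it does not saturate for positive inputs, so the gradient passes through unchanged wherever the unit is active.

```python
import numpy as np

def relu(x):
    """Rectified linear unit: keep positive values, zero out the rest."""
    return np.maximum(0.0, x)

def relu_grad(x):
    """Gradient is 1 where the unit is active, 0 where it is off."""
    return (x > 0).astype(float)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))       # -> 0, 0, 0, 0.5, 2
print(relu_grad(x))  # -> 0, 0, 0, 1, 1
```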
Better Algorithms – regularization technique
► Dropout
Srivastava et al (2014) Dropout: A Simple Way to Prevent Neural Networks from Overfitting
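A minimal, hypothetical sketch of (inverted) dropout in the spirit of Srivastava et al., not the authors' code: during training each unit is kept with probability keep_prob and the surviving activations are rescaled by 1/keep_prob, so no rescaling is needed at test time.

```python
import numpy as np

def dropout(activations, keep_prob=0.5, training=True, rng=np.random.default_rng()):
    """Inverted dropout: randomly zero units during training, rescale the survivors."""
    if not training:
        return activations                     # test time: use the full network
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob      # rescale so the expected value is unchanged

h = np.ones((2, 8))                            # a fake hidden-layer activation
print(dropout(h, keep_prob=0.5))               # roughly half the units zeroed, rest scaled to 2.0
```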
Outline – key factors
► Better NN/ML algorithms/techniques
► Big data
► Large, high quality labeled datasets
► Massive Parallelization/GPU
► Programmability & Accessibility
► Industry driven research
Big Data
Image source: Baidu, https://devblogs.nvidia.com/parallelforall/cuda-spotlight-gpu-accelerated-deep-learning/ ,
http://adilmoujahid.com/posts/2016/06/introduction-deep-learning-python-caffe/
► Deep learning needed big data
► Big data needed deep learning
Outline – key factors
► Better NN/ML algorithms/techniques
► Big data
► Large, high quality labeled datasets
► Massive Parallelization/GPU
► Programmability & Accessibility
► Industry driven research
Large High Quality Labeled Datasets
Source: https://en.wikipedia.org/wiki/MNIST_database, www.image-net.org/
Source: https://en.wikipedia.org/wiki/List_of_datasets_for_machine_learning_research
Large High Quality Labeled Datasets
► MNIST – a large database of handwritten digits
► A team led by Yann LeCun released the MNIST database in 1998
► It has since become a standard benchmark for evaluating handwriting recognition
Source: https://en.wikipedia.org/wiki/MNIST_database
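As a pointer for readers who want to try it (an assumption on tooling, not part of the slides): MNIST can be pulled with scikit-learn, which exposes it as 70,000 flattened 28×28 grayscale images with digit labels.

```python
from sklearn.datasets import fetch_openml

# Downloads MNIST from OpenML on the first call; assumes scikit-learn is installed.
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
print(X.shape, y.shape)   # (70000, 784) flattened 28x28 images, (70000,) digit labels
```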
Large High Quality Labeled Datasets
► ImageNet
• Started by Fei-Fei Li (Stanford) in 2007
• One of the largest high-quality image datasets in the world
• As of 2016, over ten million image URLs had been hand-annotated
• For at least one million of the images, bounding boxes are also provided
• The annotation process was crowdsourced
https://www.ted.com/talks/fei_fei_li_how_we_re_teaching_computers_to_understand_pictures#t-1066204
Image credit: www.image-net.org/
Major Milestone (2012)
► Google Brain Project
Le et al. (2012) – Building High-Level Features Using Large Scale Unsupervised Learning
Major milestone (2012)
Image credit: https://medium.com/@johnsmart/your-personal-sim-pt-4-deep-agents-understanding-natural-intelligence-7040ae074b71
► The Google Brain project
Outline – key factors
► Better NN/ML algorithms/techniques
► Big data
► Large, high quality labeled datasets
► Massive Parallelization/GPU
► Programmability & Accessibility
► Industry driven research
Massive parallelism
Image credit: https://medium.com/@johnsmart/your-personal-sim-pt-4-deep-agents-understanding-natural-intelligence-7040ae074b71
► Even the most basic neural networks are very computationally intensive
► Many algorithms were already parallelized
► Bryan Catanzaro in NVIDIA Research teamed with Andrew Ng’s team at Stanford to use GPUs
for deep learning
► 12 NVIDIA GPUs could deliver the deep-learning performance of 2,000 CPUs
► Researchers at NYU, the University of Toronto, and the Swiss AI Lab accelerated their DNNs on
GPUs
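GPUs map so well onto this workload because almost all of the computation in a neural network is dense matrix multiplication over a batch of examples, which parallelizes trivially. A small illustrative sketch (NumPy here; the same expression runs unchanged on a GPU with drop-in array libraries such as CuPy):

```python
import numpy as np

batch, d_in, d_out = 256, 1024, 512
X = np.random.randn(batch, d_in)        # a whole batch of inputs
W = np.random.randn(d_in, d_out)        # one layer's weights
b = np.zeros(d_out)

# One forward pass for the entire batch is a single matrix multiply:
# every output unit for every example can be computed independently,
# which is exactly the kind of work a GPU parallelizes.
H = np.maximum(0.0, X @ W + b)          # (256, 1024) @ (1024, 512) -> (256, 512)
print(H.shape)
```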
GPU
Image source: NVIDIA
https://youtu.be/-P28LKWTzrI
GPU
► It’s all about scale (Baidu Research)
• 1 million connections (2007)
• 10 million connections (2008)
• 1 billion connections (2011)
• 100 billion connections (2015)
• Hardware progression across these milestones: CPU → GPU → cloud-scale CPU/GPU clusters
Image source: NVIDIA
GPUs: a Winning Trend
Major milestones (1998/2012)
Image source: https://image.slidesharecdn.com/lecture29-convolutionalneuralnetworks-visionspring2015-150504114140-conversion-gate02/95/lecture-29-convolutional-neural-networks-computer-vision-spring2015-27-638.jpg?cb=1430740006
Outline – key factors
► Better NN/ML algorithms/techniques
► Big data
► Large, high quality labeled datasets
► Massive Parallelization/GPU
► Programmability & Accessibility
► Industry driven research
Programmability & Accessibility
Open source platforms
Image credit: NVIDIA, Kaggle, Github.com, Silicon Valley Data Science (SVDS.com)
Outline – key factors
► Better NN/ML algorithms/techniques
► Big data
► Large, high quality labeled datasets
► Massive Parallelization/GPU
► Programmability & Accessibility
► Industry driven research
Industry driven research
► "At the time I joined Google, the biggest neural network
in academia was about 1 million parameters, At Google,
we were able to build something one thousand times
bigger.“ - Andrew Ng
► "I'd quite like to explore neural nets that are a thousand
times bigger than that" - Geoffrey Hinton
Source: Steve Omohundro, What’s Happening with AI? (2016) Slides
Industry driven research
► Deep Learning Race
Source: https://medium.com/intuitionmachine/the-different-ways-that-internet-giants-approach-deep-learning-research-753c9f99d9f1
Discussion – importance ranking of key factors
► Better NN/ML algorithms/techniques **
► Big data ***
► Large, high quality labeled datasets ***
► Massive Parallelization/GPU ****
► Programmability & Accessibility **
► Industry driven research **
References
► Bengio, Y. (2009). Learning Deep Architectures for AI.
► Glorot, X., Bordes, A., & Bengio, Y. (2011). Deep Sparse Rectifier Neural Networks.
► Le, Q. V., et al. (2012). Building High-Level Features Using Large Scale Unsupervised Learning.
► Srivastava, N., et al. (2014). Dropout: A Simple Way to Prevent Neural Networks from Overfitting.
► LeCun, Y., et al. (1998). Gradient-Based Learning Applied to Document Recognition. http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf
► Krizhevsky, A., Sutskever, I., & Hinton, G. (2012). ImageNet Classification with Deep Convolutional Neural Networks.
► http://cs231n.github.io/convolutional-networks/#case
► Omohundro, S. (2016). What’s Happening with AI? (slides)
