Introduction To Deep Learning

The document outlines an agenda for a deep learning presentation, covering topics such as the introduction to deep learning, key components, how it works, and its applications. It explains the differences between deep learning and traditional machine learning, the reasons for its current popularity, and various neural network types. Additionally, it provides a brief history of deep learning and highlights its applications in fields like computer vision, natural language processing, and healthcare.


TODAY'S AGENDA

• Introduction to Deep Learning
• Key Components
• How Deep Learning Works
• Differences Between Deep Learning and Machine Learning
• Why Is Deep Learning Popular Now?
• Key Neural Network Types
• Applications of Deep Learning
• Challenges
• History of Deep Learning
• Introduction to the Perceptron
Introduction to Deep Learning

• Deep Learning is a subset of Artificial Intelligence (AI) and Machine Learning (ML) inspired by the structure of the human brain.

• It uses neural networks (logical structures mimicking the human brain) to learn patterns from data.

Key Components:

• Neural Networks: Composed of interconnected nodes (neurons) arranged in layers (input, hidden, output).

• Representation Learning: Automatically extracts features from raw data (no manual feature engineering needed).
📸 Example: Your Mobile Photo App (Face Detection)

🔴 Problem:
You want your mobile app to automatically detect faces in a photo.

🔧 Without Representation Learning (Traditional Way):


You manually define features like:

• What is the skin color?
• What is the distance between the eyes and nose?
• Is the shape of the face round or square?
• You have to manually think of and define each of these features (this is called feature engineering).
⚡ With Representation Learning (e.g., Deep Learning / CNN):

You give the system a raw image (like a .jpg file), and the neural network (CNN) automatically
learns:

• Where the eyes are
• What the pattern of the nose is
• What the overall shape of the face is

📌 You don't need to manually define anything.

The neural network learns the facial features by itself from the training data; this is called representation learning.
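
To make this concrete, here is a minimal sketch of such a CNN in PyTorch (one possible framework; the layer sizes, the 64x64 input, and the two-class face/no-face setup are illustrative assumptions, not the method used by any particular photo app):

import torch
import torch.nn as nn

class TinyFaceNet(nn.Module):
    def __init__(self):
        super().__init__()
        # The convolutional filters are not hand-designed; their values are
        # learned from training data (edges -> eye/nose patterns -> face shapes).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # assumes 64x64 input images

    def forward(self, x):
        x = self.features(x)       # learned representations
        x = x.flatten(start_dim=1)
        return self.classifier(x)  # logits: [no face, face]

model = TinyFaceNet()
raw_image = torch.rand(1, 3, 64, 64)  # a raw RGB image, e.g. decoded from a .jpg
print(model(raw_image).shape)         # torch.Size([1, 2])

Nothing about eyes, noses, or face shapes is written into this code; that is exactly the representation learning described above.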
How Deep Learning Works

Neural Network Layers:

• Input Layer: Receives raw data (e.g., pixels of an image).
• Hidden Layers: Extract progressively complex features (e.g., edges → shapes → objects).
• Output Layer: Provides the final prediction (e.g., "cat" or "dog").

Example:
For image classification, early layers detect edges/colors, while deeper layers identify
complex objects.
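
As an illustration, a minimal input → hidden → output network might look like this in PyTorch (the layer widths and the 28x28 input size are illustrative assumptions):

import torch
import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(28 * 28, 128),  # input layer -> first hidden layer: raw pixels in
    nn.ReLU(),
    nn.Linear(128, 64),       # second hidden layer: progressively more abstract features
    nn.ReLU(),
    nn.Linear(64, 2),         # output layer: one score per class ("cat" vs. "dog")
)

pixels = torch.rand(1, 28 * 28)  # a flattened 28x28 image
logits = mlp(pixels)
print(logits)                    # the final prediction scores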
Differences Between Deep Learning and Machine Learning

Aspect              Deep Learning                              Machine Learning
Data Dependency     Requires large datasets                    Works well with smaller datasets
Hardware            Needs GPUs (heavy matrix operations)       Can run efficiently on CPUs
Training Time       Slower (can take days or weeks)            Faster (can take minutes to hours)
Feature Extraction  Automatic (done by neural networks)        Manual (requires hand-crafted features)
Interpretability    "Black box" (hard to explain decisions)    More interpretable (e.g., decision trees, SVM)
Why Is Deep Learning Popular Now?

Availability of Data:
Smartphones and the Internet generate massive labeled datasets (e.g., ImageNet, YouTube-8M).

Advancements in Hardware:
GPUs/TPUs accelerate neural network training.

Frameworks & Libraries:
TensorFlow (Google) and PyTorch (Facebook) simplify model development.

Pre-trained Architectures:
Ready-to-use models (e.g., ResNet, Transformers) via transfer learning (see the sketch at the end of this section).

Community Support:
Open-source contributions and research breakthroughs.
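
As a sketch of the pre-trained-architectures point, here is how transfer learning typically starts in PyTorch/torchvision (ResNet-18 and the 10-class head are illustrative choices):

import torch.nn as nn
from torchvision import models

# Load ResNet-18 with weights pre-trained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and replace the final layer for a new task (10 classes here, as an assumption).
model.fc = nn.Linear(model.fc.in_features, 10)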
Key Neural Network Types
• Artificial Neural Networks (ANN): Basic feedforward networks for tasks like
regression/classification.

• Convolutional Neural Networks (CNN): Specialized for image/video processing (e.g., object
detection).

• Recurrent Neural Networks (RNN): Handle sequential data (e.g., text, speech).

• LSTM (Long Short-Term Memory): Improved RNN for long-term dependencies.

• Transformers: State-of-the-art for NLP (e.g., BERT, GPT).

• Generative Adversarial Networks (GANs): Generate synthetic data (e.g., images, music).
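
For orientation, here is roughly how these building blocks appear in PyTorch (all dimensions are illustrative assumptions):

import torch.nn as nn

dense = nn.Linear(100, 10)                     # ANN building block (feedforward)
conv = nn.Conv2d(3, 16, kernel_size=3)         # CNN building block (images/video)
rnn = nn.RNN(input_size=50, hidden_size=64)    # RNN (sequences: text, speech)
lstm = nn.LSTM(input_size=50, hidden_size=64)  # LSTM (long-term dependencies)
attn = nn.MultiheadAttention(embed_dim=64, num_heads=8)  # Transformer core (attention)
# GANs pair two networks (a generator and a discriminator) trained against each other.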
History of Deep Learning

1. 1958: Perceptron (Frank Rosenblatt): an early neural network model with limitations (it could not solve non-linearly separable problems).

2. 1969: Minsky & Papert exposed the Perceptron's flaws, leading to an "AI winter."

3. 1986: Backpropagation (Rumelhart/Hinton) enabled training multi-layer networks.

4. 2010s: Resurgence due to:

• Big Data (e.g., ImageNet, social media).
• Hardware (GPUs/TPUs for parallel processing).
• Frameworks (TensorFlow, PyTorch).
• Competitions (e.g., the 2012 ImageNet victory by AlexNet).
Applications of Deep Learning
• Computer Vision:
• Facial recognition (e.g., Facebook’s DeepFace).
• Image restoration (e.g., enhancing low-res photos).

• Natural Language Processing (NLP):


• Real-time translation (e.g., Google Translate).
• Text generation (e.g., GPT-3).

• Healthcare:
• Medical imaging (e.g., detecting tumors in X-rays).
• Drug discovery (e.g., AlphaFold for protein folding).

• Autonomous Systems:
• Self-driving cars (e.g., Tesla’s Autopilot).
• Robotics (e.g., warehouse automation).

• Entertainment:
• Deepfake generation.
• Music composition (e.g., AI-generated songs).
Introduction to the Perceptron

Definition:

• The perceptron is the fundamental building block of artificial neural networks (ANNs).

• It's a supervised ML algorithm used for binary classification (e.g., predicting student placement).

Design:
• Inputs: Features (e.g., IQ, CGPA).
• Weights (w1, w2): Assign importance to inputs.
• Bias (b): Shifts the decision boundary.
• Summation: z = w1*x1 + w2*x2 + b.
• Activation Function (e.g., step function): Converts z to output (0 or 1).
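
A minimal sketch of this design in plain Python (using the step activation and the summation z = w1*x1 + w2*x2 + b exactly as defined above):

def step(z):
    # Step activation: converts z to a binary output (0 or 1).
    return 1 if z >= 0 else 0

def perceptron_predict(x1, x2, w1, w2, b):
    # Summation: weighted inputs plus bias, then activation.
    z = w1 * x1 + w2 * x2 + b
    return step(z)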
How the Perceptron Works

Training:
• Adjust weights (w1, w2) and bias (b) using labeled data (e.g., student placement records).
• Goal: Minimize prediction errors.
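
One common way to do this adjustment is the classic perceptron learning rule; a sketch building on perceptron_predict above (the learning rate and epoch count are illustrative assumptions):

def train_perceptron(data, labels, lr=0.1, epochs=1000):
    # Start from zero weights and bias; nudge them on every misclassification.
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(data, labels):
            error = y - perceptron_predict(x1, x2, w1, w2, b)  # 0 if correct, +/-1 if wrong
            w1 += lr * error * x1
            w2 += lr * error * x2
            b += lr * error
    return w1, w2, b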

Prediction:
• For new data (e.g., IQ=100, CGPA=5.1), calculate z.

Apply activation:
• If z ≥ 0 → Output = 1 (placed).
• If z < 0 → Output = 0 (not placed).
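
Putting the two sketches together on the slide's example (the placement records below are made-up illustrative data, chosen to be linearly separable):

data = [(90, 4.0), (110, 8.5), (95, 5.0), (120, 9.0)]  # (IQ, CGPA)
labels = [0, 1, 0, 1]                                  # 1 = placed, 0 = not placed
w1, w2, b = train_perceptron(data, labels)
print(perceptron_predict(100, 5.1, w1, w2, b))         # prints 0 or 1 (not placed / placed)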

Interpretation:
• Weights indicate feature importance (e.g., higher w2 for CGPA means CGPA is more critical
for placement).
Perceptron vs. Biological Neuron

Aspect        Perceptron                                 Biological Neuron
Complexity    Simple (basic mathematical operations)     Highly complex (electrochemical reactions)
Connections   Fixed weights                              Dynamic connections (due to neuroplasticity)
Inspiration   Loosely based on neuron design             The original biological model
