
Unit 6: Recent Trends in Various Learning Techniques and Classification Methods

This unit explores the latest advancements and innovations in machine learning,
focusing on classification techniques that go beyond traditional supervised learning.
Below is a detailed breakdown of recent trends, covering for each:
 Definitions
 Algorithms/Procedures
 Applications
 Real-world Examples

🟩 1. Self-Supervised Learning
🔹 Definition:
A form of unsupervised learning where models learn representations by solving pretext
tasks (e.g., predicting missing parts of input data) without requiring human-labeled data.
🔹 Procedure:
1. Design a pretext task, such as masked language modeling or image inpainting.
2. Train the model to predict the masked or missing parts.
3. Use the learned representations for downstream tasks such as classification.
🔹 Algorithms:
 BERT (Bidirectional Encoder Representations from Transformers) – NLP
 SimCLR, MoCo – Computer Vision
 Contrastive Predictive Coding (CPC)
🔹 Applications:
 Language understanding (e.g., translation, summarization)
 Image classification with minimal labels
 Speech recognition
✅ Example:
 Training a model to reconstruct masked words in a sentence (like in BERT), then
fine-tuning it for sentiment analysis.
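
Below is a minimal sketch of the masked-prediction pretext task in PyTorch. The tiny transformer, vocabulary size, and 15% mask rate are illustrative stand-ins, not BERT's actual configuration:

```python
# Minimal masked-token pretext task (BERT-style) with a toy PyTorch model.
import torch
import torch.nn as nn

VOCAB, MASK_ID, DIM = 1000, 0, 64  # token id 0 reserved as the [MASK] token

class TinyMaskedLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True),
            num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)  # predicts the original token ids

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))

model = TinyMaskedLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(1, VOCAB, (8, 32))    # a batch of toy "sentences"
mask = torch.rand(tokens.shape) < 0.15       # mask ~15% of positions
corrupted = tokens.masked_fill(mask, MASK_ID)

opt.zero_grad()
logits = model(corrupted)
loss = loss_fn(logits[mask], tokens[mask])   # loss only on masked positions
loss.backward()
opt.step()
# After pretraining, model.encoder can be reused for a downstream classifier.
```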

🟩 2. Few-Shot / One-Shot Learning


🔹 Definition:
Learning models that can generalize from very few examples per class (few-shot) or
even one example (one-shot).
🔹 Procedure:
1. Use meta-learning or similarity-based methods.
2. Learn a metric space where similar inputs are close to each other.
3. Classify new samples based on nearest neighbors or prototypes.
🔹 Algorithms:
 Siamese Networks
 Prototypical Networks
 Matching Networks
 Meta-learning (MAML – Model-Agnostic Meta-Learning)
🔹 Applications:
 Rare disease diagnosis (limited labeled cases)
 Personalized recommendation systems
 Object detection in robotics
✅ Example:
 Recognizing a new type of malware from just one or two known samples.
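
A minimal sketch of the prototypical-network idea in PyTorch follows; the small feed-forward embedding network and the 5-way, 3-shot episode sizes are illustrative assumptions:

```python
# Prototypical-network classification: embed support examples, average per
# class to get prototypes, and label queries by the nearest prototype.
import torch
import torch.nn.functional as F

embed = torch.nn.Sequential(torch.nn.Linear(20, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 32))  # untrained stand-in

# One 5-way, 3-shot episode: 5 classes with 3 labeled examples each
support = torch.randn(5, 3, 20)                # (classes, shots, features)
query = torch.randn(10, 20)                    # unlabeled queries

prototypes = embed(support).mean(dim=1)        # one prototype per class
dists = torch.cdist(embed(query), prototypes)  # Euclidean distances
probs = F.softmax(-dists, dim=1)               # nearer prototype => higher prob
pred = probs.argmax(dim=1)                     # predicted class per query
```

In training, `probs` would feed a cross-entropy loss against the episode's query labels so that the embedding space itself is learned.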
🟩 3. Transfer Learning and Domain Adaptation
🔹 Definition:
Transfer learning involves using knowledge from a source domain/task to improve
performance on a target domain/task. Domain adaptation focuses on adapting models
when training and test data come from different distributions.
🔹 Procedure:
1. Pre-train on a large, general dataset (e.g., ImageNet).
2. Fine-tune on a smaller, task-specific dataset.
3. For domain adaptation, align feature distributions across domains.
🔹 Algorithms:
 Fine-tuning CNNs and Transformer-based models (e.g., BERT)
 Domain-Adversarial Neural Networks (DANN)
 CORAL (Correlation Alignment)
🔹 Applications:
 Medical imaging (train on natural images, adapt to X-rays)
 Cross-lingual NLP tasks
 Autonomous vehicles trained in simulation before real-world use
✅ Example:
 Using a pre-trained ResNet on ImageNet and fine-tuning it for plant disease
classification.
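
A short sketch of this fine-tuning recipe with torchvision (assuming torchvision ≥ 0.13 for the `weights` API); the 10-class head is an illustrative stand-in for a plant disease dataset:

```python
# Fine-tune a pretrained ResNet: freeze the backbone, replace the head.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # illustrative target-task class count

resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # ImageNet weights

# Freeze the pretrained backbone so only the new head trains at first.
for param in resnet.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a task-specific head.
resnet.fc = nn.Linear(resnet.fc.in_features, NUM_CLASSES)

# Train resnet.fc on the target data; later, optionally unfreeze deeper
# layers and continue fine-tuning with a smaller learning rate.
```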

🟩 4. Neural Architecture Search (NAS)


🔹 Definition:
Automated design of neural network architectures using search algorithms to find
optimal structures for a given task.
🔹 Procedure:
1. Define a search space of possible architectures.
2. Evaluate candidate networks on validation data.
3. Use reinforcement learning, evolutionary algorithms, or gradient-based
optimization to evolve better models.
🔹 Algorithms:
 Reinforcement Learning-based NAS
 Differentiable Architecture Search (DARTS)
 Evolutionary NAS
🔹 Applications:
 Mobile-friendly models (e.g., MobileNet)
 Edge AI applications
 Custom hardware-aware model design
✅ Example:
 Google's AutoML uses NAS to design efficient vision models for mobile devices.
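
A toy sketch of the NAS loop using random search, the simplest baseline; the search space and the placeholder evaluation score are illustrative assumptions, and real systems (DARTS, RL-based NAS) replace random sampling with a smarter search strategy:

```python
# Random-search NAS: sample architectures, score each, keep the best.
import random
import torch.nn as nn

SEARCH_SPACE = {
    "depth": [1, 2, 3],
    "width": [32, 64, 128],
    "activation": [nn.ReLU, nn.Tanh],
}

def sample_architecture():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def build(arch, in_dim=20, n_classes=5):
    layers, dim = [], in_dim
    for _ in range(arch["depth"]):
        layers += [nn.Linear(dim, arch["width"]), arch["activation"]()]
        dim = arch["width"]
    return nn.Sequential(*layers, nn.Linear(dim, n_classes))

def evaluate(arch):
    model = build(arch)      # built but not trained in this sketch
    return random.random()   # stand-in: train briefly, return val accuracy

best = max((sample_architecture() for _ in range(20)), key=evaluate)
```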

🟩 5. Explainable AI (XAI)
🔹 Definition:
Techniques that make ML models more interpretable and transparent, which is especially
important in high-stakes domains like healthcare and finance.
🔹 Procedure:
1. Apply post-hoc explanation tools to understand model decisions.
2. Visualize feature importance or decision paths.
3. Use inherently interpretable models if needed.
🔹 Algorithms:
 LIME (Local Interpretable Model-agnostic Explanations)
 SHAP (SHapley Additive exPlanations)
 Grad-CAM (for CNNs)
🔹 Applications:
 Explaining why a loan was denied
 Understanding which features drive medical diagnoses
 Debugging model bias
✅ Example:
 Using SHAP values to explain why a patient was predicted to be at high risk for
heart disease.
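
A short sketch of post-hoc explanation with the shap package (assumed installed), applied to a scikit-learn random forest on synthetic data; the dataset is a stand-in, not a real medical one:

```python
# Post-hoc explanation of a tree model with SHAP values.
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)   # fast exact SHAP for tree models
shap_values = explainer.shap_values(X)  # per-feature contribution per sample
# Large positive/negative values show which features pushed a prediction
# toward or away from the positive class (e.g., "high risk").
```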

🟩 6. Federated Learning
🔹 Definition:
A distributed learning approach where models are trained across decentralized devices
holding local data, preserving privacy and reducing data transfer.
🔹 Procedure:
1. Devices train local models using their own data.
2. Models are sent to a central server.
3. Server aggregates updates (e.g., via Federated Averaging).
4. Updated global model is sent back to clients.
🔹 Algorithms:
 Federated Averaging (FedAvg)
 Secure Aggregation Protocols
 Differential Privacy in Federated Learning
🔹 Applications:
 Mobile keyboard prediction (Gboard)
 Healthcare data analysis across hospitals
 Smart home/IoT device learning
✅ Example:
 Training a voice assistant on user devices without uploading private voice
recordings.
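
Below is a minimal FedAvg sketch in PyTorch; the `local_update` placeholder and the commented-out training round are illustrative, and real deployments add secure aggregation and differential privacy on top:

```python
# FedAvg: clients train locally, server averages parameters weighted by
# client dataset size. Communication and encryption are omitted.
import copy
import torch

def local_update(model, data_loader, epochs=1):
    ...  # placeholder: ordinary SGD on the client's private data

def fedavg(global_model, client_models, client_sizes):
    total = sum(client_sizes)
    avg = copy.deepcopy(global_model.state_dict())
    for key in avg:  # weighted average of each float parameter tensor
        avg[key] = sum(m.state_dict()[key] * (n / total)
                       for m, n in zip(client_models, client_sizes))
    global_model.load_state_dict(avg)
    return global_model

# One round: clients fine-tune copies of the global model, server aggregates.
# global_model = fedavg(global_model,
#                       [local_update(copy.deepcopy(global_model), dl)
#                        for dl in client_loaders],
#                       [len(dl.dataset) for dl in client_loaders])
```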

🟩 7. Multi-modal Learning
🔹 Definition:
An approach that combines multiple modalities (e.g., text, image, audio) in a single
model to improve understanding and classification.
🔹 Procedure:
1. Process each modality separately using specialized encoders.
2. Fuse modalities at intermediate or final layers.
3. Train end-to-end using multi-task loss or contrastive learning.
🔹 Algorithms:
 CLIP (Contrastive Language–Image Pre-training)
 Flamingo (multi-modal transformer)
 Late Fusion vs Early Fusion models
🔹 Applications:
 Caption generation for images
 Video question answering
 Assistive technologies for visually impaired users
✅ Example:
 A model that understands both visual content and spoken commands to assist in
navigation.
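
A compact sketch of CLIP-style contrastive training follows; the linear "encoders", feature sizes, and the 0.07 temperature are illustrative placeholders for real vision and text encoders:

```python
# CLIP-style contrastive objective: normalized paired embeddings, with
# matching image/text pairs (the diagonal) pulled together.
import torch
import torch.nn.functional as F

image_encoder = torch.nn.Linear(512, 128)  # placeholder vision encoder
text_encoder = torch.nn.Linear(300, 128)   # placeholder text encoder

images = torch.randn(16, 512)              # batch of image features
texts = torch.randn(16, 300)               # matching batch of text features

img_emb = F.normalize(image_encoder(images), dim=1)
txt_emb = F.normalize(text_encoder(texts), dim=1)

logits = img_emb @ txt_emb.T / 0.07        # cosine similarity / temperature
targets = torch.arange(16)                 # i-th image matches i-th text
loss = (F.cross_entropy(logits, targets) +
        F.cross_entropy(logits.T, targets)) / 2  # symmetric in both directions
```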

🟩 8. Graph Neural Networks (GNNs)


🔹 Definition:
Deep learning methods designed to operate on graph-structured data, where nodes
represent entities and edges represent relationships.
🔹 Procedure:
1. Aggregate information from neighboring nodes.
2. Update node embeddings iteratively.
3. Use final embeddings for classification or regression.
🔹 Algorithms:
 Graph Convolutional Network (GCN)
 Graph Attention Network (GAT)
 GraphSAGE
🔹 Applications:
 Social network analysis
 Drug discovery (molecular graphs)
 Recommendation systems
✅ Example:
 Detecting fraudulent transactions in a financial transaction graph.
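
A minimal sketch of one GCN-style layer in PyTorch, following the neighbor-aggregation update from the procedure above; the 4-node toy graph stands in for, say, a transaction network:

```python
# One GCN layer: each node's new embedding is a transformed, symmetrically
# normalized average of its neighbors (and itself, via self-loops).
import torch
import torch.nn as nn

def gcn_layer(A, H, W):
    """A: (N, N) adjacency, H: (N, d_in) node features, W: (d_in, d_out)."""
    A_hat = A + torch.eye(A.size(0))        # add self-loops
    deg = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(deg.pow(-0.5))  # symmetric degree normalization
    return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# 4-node toy graph (e.g., accounts linked by transactions)
A = torch.tensor([[0., 1., 0., 0.],
                  [1., 0., 1., 1.],
                  [0., 1., 0., 0.],
                  [0., 1., 0., 0.]])
H = torch.randn(4, 8)                       # initial node features
W = nn.Parameter(torch.randn(8, 16))
H1 = gcn_layer(A, H, W)                     # updated node embeddings
```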

✅ Summary Table

| Trend | Type | Core Idea | Algorithm(s) | Application |
|---|---|---|---|---|
| Self-Supervised Learning | Representation learning | Learn from unlabeled data using pretext tasks | BERT, SimCLR | Language & vision tasks |
| Few-Shot Learning | Generalization | Learn from very few examples | ProtoNet, Siamese Nets | Rare disease diagnosis |
| Transfer Learning | Optimization | Reuse knowledge from source to target domain | Fine-tuning, DANN | Medical imaging |
| NAS | Automation | Automatically search for the best architecture | DARTS, reinforcement learning | Efficient model design |
| Explainable AI | Transparency | Make model decisions interpretable | LIME, SHAP | Financial and medical decisions |
| Federated Learning | Privacy-preserving ML | Train models without sharing raw data | FedAvg | IoT, mobile health |
| Multi-modal Learning | Integration | Combine text, image, and audio for richer insights | CLIP, Flamingo | Image captioning, video QA |
| GNNs | Graph modeling | Learn from relational data structures | GCN, GAT | Social networks, drug discovery |
