CD3291 DATA STRUCTURES AND ALGORITHMS L T P C
3 0 0 3
COURSE OBJECTIVES:
To understand the concepts of ADTs
To design linear data structures – lists, stacks, and queues
To understand sorting, searching, and hashing algorithms
To apply Tree and Graph structures
UNIT I ABSTRACT DATA TYPES 9
Abstract Data Types (ADTs) – ADTs and classes – introduction to OOP – classes in Python –
inheritance – namespaces – shallow and deep copying
Introduction to analysis of algorithms – asymptotic notations – divide & conquer – recursion –
analyzing recursive algorithms
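As an illustrative sketch (not part of the prescribed syllabus) of the divide & conquer and recursive-analysis topics above, the toy function below finds the maximum of a list; its recurrence T(n) = 2T(n/2) + O(1) solves to O(n):

```python
def dc_max(a, lo=0, hi=None):
    """Divide-and-conquer maximum of a non-empty list a[lo..hi]."""
    if hi is None:
        hi = len(a) - 1
    if lo == hi:                      # base case: a single element
        return a[lo]
    mid = (lo + hi) // 2              # divide
    left = dc_max(a, lo, mid)         # conquer left half
    right = dc_max(a, mid + 1, hi)    # conquer right half
    return max(left, right)           # combine in O(1)
```

Analyzing such a function means writing down its recurrence from the structure of the calls, exactly as the unit's "analyzing recursive algorithms" topic requires.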
UNIT II LINEAR STRUCTURES 9
List ADT – array-based implementations – linked list implementations – singly linked lists –
circularly linked lists – doubly linked lists – Stack ADT – Queue ADT – double ended queues –
applications
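A minimal sketch (for illustration only, not prescribed by the syllabus) connecting two of the unit's topics: a Stack ADT realized by a singly linked list, giving O(1) push and pop:

```python
class Node:
    """One cell of a singly linked list."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedStack:
    """Stack ADT backed by a singly linked list; top of stack is the head."""
    def __init__(self):
        self._head = None
        self._size = 0

    def push(self, value):
        # The new node becomes the head, pointing at the old head: O(1).
        self._head = Node(value, self._head)
        self._size += 1

    def pop(self):
        if self._head is None:
            raise IndexError("pop from empty stack")
        value = self._head.value
        self._head = self._head.next  # unlink the old head: O(1)
        self._size -= 1
        return value

    def __len__(self):
        return self._size
```

The same linked structure, with a tail pointer added, supports the Queue ADT with O(1) enqueue and dequeue.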
UNIT III SORTING AND SEARCHING 9
Bubble sort – selection sort – insertion sort – merge sort – quick sort – analysis of sorting
algorithms – linear search – binary search – hashing – hash functions – collision handling – load
factors, rehashing, and efficiency.
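Two illustrative sketches of the unit's searching and hashing topics (not part of the prescribed syllabus): O(log n) binary search on a sorted list, and a hash table using separate chaining for collision handling with rehashing when the load factor exceeds 0.75:

```python
def binary_search(a, target):
    """Return the index of target in sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # halve the search range each step
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

class ChainedHashMap:
    """Hash table with separate chaining; doubles capacity on high load."""
    def __init__(self, capacity=8):
        self._buckets = [[] for _ in range(capacity)]
        self._n = 0

    def __setitem__(self, key, value):
        bucket = self._buckets[hash(key) % len(self._buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:              # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))   # collision handling: chain in bucket
        self._n += 1
        if self._n / len(self._buckets) > 0.75:   # load-factor check
            self._rehash()

    def __getitem__(self, key):
        bucket = self._buckets[hash(key) % len(self._buckets)]
        for k, v in bucket:
            if k == key:
                return v
        raise KeyError(key)

    def _rehash(self):
        # Double the bucket count and reinsert every entry.
        old = self._buckets
        self._buckets = [[] for _ in range(2 * len(old))]
        self._n = 0
        for bucket in old:
            for k, v in bucket:
                self[k] = v
```

With a good hash function and rehashing, expected lookup and insert stay O(1); without rehashing the chains grow and operations degrade toward O(n).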
UNIT IV TREE STRUCTURES 9
Tree ADT – Binary Tree ADT – tree traversals – binary search trees – AVL trees – heaps – multi-way search trees
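An illustrative sketch (not prescribed by the syllabus) of two of the unit's topics: insertion into a binary search tree and an inorder traversal, which visits the keys in sorted order:

```python
class BSTNode:
    """A binary search tree node: left subtree < key < right subtree."""
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def bst_insert(root, key):
    """Insert key into the BST rooted at root; return the resulting root."""
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    elif key > root.key:
        root.right = bst_insert(root.right, key)
    return root                       # duplicates are ignored

def inorder(root):
    """Yield keys in sorted order: left subtree, node, right subtree."""
    if root is not None:
        yield from inorder(root.left)
        yield root.key
        yield from inorder(root.right)
```

On a skewed tree these operations degrade to O(n); the unit's AVL trees restore O(log n) by rebalancing after each insertion.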
UNIT V GRAPH STRUCTURES 9
Graph ADT – representations of graph – graph traversals – DAG – topological ordering – greedy
algorithms – dynamic programming – shortest paths – minimum spanning trees – introduction to
complexity classes and intractability
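A sketch (for illustration only, not part of the prescribed syllabus) of the unit's topological ordering topic, using Kahn's algorithm on a DAG given as an adjacency list:

```python
from collections import deque

def topological_order(graph):
    """Kahn's algorithm: repeatedly remove vertices with indegree 0.

    graph maps each vertex to the list of its out-neighbours.
    Raises ValueError if the graph contains a cycle (no DAG ordering).
    """
    indegree = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indegree[v] += 1
    ready = deque(u for u in graph if indegree[u] == 0)
    order = []
    while ready:
        u = ready.popleft()
        order.append(u)
        for v in graph[u]:            # removing u lowers neighbours' indegree
            indegree[v] -= 1
            if indegree[v] == 0:
                ready.append(v)
    if len(order) != len(graph):      # leftover vertices lie on a cycle
        raise ValueError("graph has a cycle; no topological order exists")
    return order
```

The algorithm runs in O(V + E); the same adjacency-list representation underlies the unit's traversal, shortest-path, and spanning-tree algorithms.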
TOTAL: 45 PERIODS
COURSE OUTCOMES:
At the end of the course, the student should be able to:
CO1: explain abstract data types
CO2: design, implement, and analyze linear data structures, such as lists, queues, and stacks, according to the needs of different applications
CO3: design, implement, and analyze efficient tree structures to meet requirements such as searching, indexing, and sorting
CO4: model problems as graph problems and implement efficient graph algorithms to solve them
TEXT BOOKS:
1. Michael T. Goodrich, Roberto Tamassia, and Michael H. Goldwasser, “Data Structures &
Algorithms in Python”, An Indian Adaptation, John Wiley & Sons Inc., 2021
REFERENCES:
1. Kent D. Lee and Steve Hubbard, “Data Structures and Algorithms with Python”, Springer, 2015.
2. Rance D. Necaise, “Data Structures and Algorithms Using Python”, John Wiley & Sons,
2011
3. Aho, Hopcroft, and Ullman, “Data Structures and Algorithms”, Pearson Education, 1983.
4. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein,
“Introduction to Algorithms”, Second Edition, McGraw Hill, 2002.
5. Mark Allen Weiss, “Data Structures and Algorithm Analysis in C++”, Fourth Edition,
Pearson Education, 2014
CO’s - PO’s & PSO’s MAPPING
COs   PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2
CO1    1   2   2   3   1   -   -   -   2   -    2    1    1    1
CO2    2   3   2   2   2   -   -   -   2   -    2    2    3    2
CO3    2   2   3   2   3   -   -   -   3   -    2    2    3    2
CO4    3   3   3   3   1   -   -   -   3   -    2    2    3    2
Avg.   2   3   3   3   2   -   -   -   3   -    2    2    3    2
1 - low, 2 - medium, 3 - high, ‘-’ - no correlation
AL3502 DEEP LEARNING FOR VISION L T P C
3 0 2 4
COURSE OBJECTIVES:
To introduce basic computer vision concepts
To understand the methods and terminologies involved in deep neural networks
To impart knowledge on CNNs
To introduce RNNs and deep generative models
To apply deep learning to real-world computer vision applications
UNIT I COMPUTER VISION BASICS 9
Introduction to Image Formation, Capture and Representation; Linear Filtering, Correlation,
Convolution
Visual Features and Representations: Edge, Blobs, Corner Detection; Visual Features extraction:
Bag-of-words, VLAD; RANSAC, Hough transform.
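An illustrative sketch (not part of the prescribed syllabus) of the unit's linear filtering topic: "valid"-mode 2-D cross-correlation over a grayscale image given as nested lists; flipping the kernel before the call turns it into convolution:

```python
def correlate2d(image, kernel):
    """'Valid' 2-D cross-correlation of image with kernel (nested lists).

    Output size is (H - kh + 1) x (W - kw + 1): the kernel is slid over
    every position where it fits entirely inside the image.
    """
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            # Sum of elementwise products of the kernel and the window.
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out
```

The same sliding-window sum, with learned kernels, is the core operation of the CNNs in Units III and IV.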
UNIT II INTRODUCTION TO DEEP LEARNING 9
Deep Feed-Forward Neural Networks – Gradient Descent – Back-Propagation and Other
Differentiation Algorithms – Vanishing Gradient Problem – Mitigation – Rectified Linear Unit
(ReLU) – Heuristics for Avoiding Bad Local Minima – Heuristics for Faster Training – Nesterov’s
Accelerated Gradient Descent – Regularization for Deep Learning – Dropout – Adversarial
Training – Optimization for Training Deep Models.
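A minimal sketch (for illustration only, not prescribed by the syllabus) of the gradient descent topic above, applied to a one-dimensional quadratic whose gradient is known in closed form:

```python
def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Plain gradient descent: repeat w <- w - lr * grad(w)."""
    w = w0
    for _ in range(steps):
        w = w - lr * grad(w)          # step against the gradient
    return w

# Minimise f(w) = (w - 3)^2, whose gradient is 2 * (w - 3);
# the minimiser is w = 3, and each step shrinks the error by a factor of 0.8.
w_star = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
```

Back-propagation supplies `grad` for multi-layer networks; momentum, Nesterov acceleration, and the other heuristics in this unit modify the update rule to speed up or stabilize this same loop.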
UNIT III VISUALIZATION AND UNDERSTANDING CNN 9
Convolutional Neural Networks (CNNs): Introduction to CNNs; Evolution of CNN Architectures:
AlexNet, ZFNet, VGG.
Visualization of Kernels; Backprop-to-image/ Deconvolution Methods; Deep Dream, Hallucination,
Neural Style Transfer; CAM, Grad-CAM.
UNIT IV CNN and RNN FOR IMAGE AND VIDEO PROCESSING 9
CNNs for Recognition, Verification, Detection, Segmentation: CNNs for Recognition and
Verification (Siamese Networks, Triplet Loss, Contrastive Loss, Ranking Loss); CNNs for
Detection: Background of Object Detection, R-CNN, Fast R-CNN. CNNs for Segmentation: FCN,
SegNet.
Recurrent Neural Networks (RNNs): Review of RNNs; CNN + RNN Models for Video
Understanding: Spatio-temporal Models, Action/Activity Recognition
UNIT V DEEP GENERATIVE MODELS 9
Deep Generative Models: Review of (Popular) Deep Generative Models: GANs, VAEs
Variants and Applications of Generative Models in Vision: Applications: Image Editing, Inpainting,
Super-resolution, 3D Object Generation, Security;
Recent Trends: Self-supervised Learning; Reinforcement Learning in Vision.
45 PERIODS
PRACTICAL EXERCISES: 30 PERIODS
1. Implementation of basic Image processing operations including Feature Representation and Feature Extraction
2. Implementation of simple neural network
3. Study of pretrained deep neural network model for Images
4. CNN for Image classification
5. CNN for Image segmentation
6. RNN for video processing
7. Implementation of Deep Generative model for Image editing
TOTAL:75 PERIODS
COURSE OUTCOMES:
Upon successful completion of this course, students will be able to:
CO1: Implement basic image processing operations
CO2: Understand the basic concepts of deep learning
CO3: Design and implement CNN and RNN models
CO4: Understand the role of deep learning in computer vision applications
CO5: Design and implement deep generative models
TEXT BOOKS
1. Ian Goodfellow, Yoshua Bengio, and Aaron Courville, “Deep Learning”, MIT Press, 2017.
2. Ragav Venkatesan, Baoxin Li, “Convolutional Neural Networks in Visual Computing”, CRC
Press, 2018.
REFERENCES
1. Rajalingappaa Shanmugamani, “Deep Learning for Computer Vision”, Packt Publishing, 2018.
2. David Forsyth and Jean Ponce, “Computer Vision: A Modern Approach”, 2002.
3. V. Kishore Ayyadevara and Yeshwanth Reddy, “Modern Computer Vision with PyTorch”, Packt Publishing, 2020.
4. Ian Goodfellow, Yoshua Bengio, and Aaron Courville, “Deep Learning”, MIT Press, 2016.
5. Richard Szeliski, “Computer Vision: Algorithms and Applications”, Springer, 2010.
6. Simon Prince, “Computer Vision: Models, Learning, and Inference”, Cambridge University Press, 2012.
7. NPTEL online courses: https://nptel.ac.in/
CO’s - PO’s & PSO’s MAPPING
COs   PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2
CO1    3   3   -   -   1   -   -   -   -   -    2    -    1    2
CO2    3   1   -   -   -   -   -   -   -   2    2    -    -    -
CO3    3   3   2   3   3   -   -   -   2   -    2    1    3    3
CO4    3   1   3   2   -   2   1   -   -   2    2    2    2    -
CO5    3   3   2   3   3   -   -   -   2   -    2    1    3    3
Avg.   3  2.2 1.4 1.6 1.4 0.4 0.2  0  0.8 0.8   2   0.8  1.8  1.6
1 - low, 2 - medium, 3 - high, ‘-’ - no correlation