1. Calculator
Overview: A simple calculator that performs basic arithmetic operations.
Problem Statement: Develop a Python program that takes two numbers and an
operator as input and performs the corresponding operation.
Learning Outcomes:
● Taking user input
● Using conditional statements
● Performing arithmetic operations
Code:
num1 = float(input("Enter first number: "))
num2 = float(input("Enter second number: "))
operator = input("Enter operator (+, -, *, /): ")
if operator == '+':
    print(f"Result: {num1 + num2}")
elif operator == '-':
    print(f"Result: {num1 - num2}")
elif operator == '*':
    print(f"Result: {num1 * num2}")
elif operator == '/':
    if num2 == 0:
        print("Cannot divide by zero")
    else:
        print(f"Result: {num1 / num2}")
else:
    print("Invalid operator")
What is used in the code:
● Variables
● Input function
● Conditional statements (if-elif-else)
● Arithmetic operations
2. Guess the Number Game
Overview: A game where the user has to guess a randomly generated number.
Problem Statement: Create a Python program that generates a random number
and allows the user to guess until they get it right.
Learning Outcomes:
● Using loops
● Generating random numbers
● Handling user input
Code:
import random

number = random.randint(1, 100)
guess = None
while guess != number:
    guess = int(input("Guess the number (1-100): "))
    if guess < number:
        print("Too low!")
    elif guess > number:
        print("Too high!")
    else:
        print("Congratulations! You guessed it right.")
What is used in the code:
● Random module
● Loops
● Conditional statements
● User input
3. To-Do List App
Overview: A simple text-based to-do list.
Problem Statement: Develop a Python program that allows users to add, remove,
and view tasks in a to-do list.
Learning Outcomes:
● Using lists
● Implementing loops
● Taking user input dynamically
Code:
todo_list = []
while True:
    print("1. Add Task")
    print("2. Remove Task")
    print("3. View Tasks")
    print("4. Exit")
    choice = input("Enter your choice: ")
    if choice == '1':
        task = input("Enter task: ")
        todo_list.append(task)
    elif choice == '2':
        task = input("Enter task to remove: ")
        if task in todo_list:
            todo_list.remove(task)
    elif choice == '3':
        print("To-Do List:", todo_list)
    elif choice == '4':
        break
    else:
        print("Invalid choice")
What is used in the code:
● Lists
● Loops
● Conditional statements
4: Dice Roller
Overview
This project simulates rolling dice. The user specifies the number of dice and sides, and the
program generates random results.
Problem Statement
A user wants to roll a die but does not have one physically. The program should allow rolling
multiple dice with different numbers of sides.
Learning Outcomes
● Using the random module
● Implementing user input handling
● Looping and conditionals
Code
import random

def roll_dice(num_dice, num_sides):
    return [random.randint(1, num_sides) for _ in range(num_dice)]

def main():
    print("🎲 Dice Roller 🎲")
    num_dice = int(input("Enter the number of dice: "))
    num_sides = int(input("Enter the number of sides per die: "))
    if num_dice <= 0 or num_sides <= 0:
        print("Invalid input. Please enter positive numbers.")
        return
    results = roll_dice(num_dice, num_sides)
    print(f"Results: {results}")

if __name__ == "__main__":
    main()
5: Password Generator
Overview
This project generates a secure random password based on user preferences. The user can
choose the password length and whether to include special characters.
Problem Statement
Creating strong passwords is essential for security. Users struggle to create and remember
complex passwords. This program generates strong passwords on demand.
Learning Outcomes
● Working with the random and string modules
● Using loops and conditionals
● Handling user input
Code
import random
import string

def generate_password(length, use_special_chars):
    characters = string.ascii_letters + string.digits
    if use_special_chars:
        characters += string.punctuation
    password = ''.join(random.choice(characters) for _ in range(length))
    return password

def main():
    print("🔒 Password Generator 🔒")
    length = int(input("Enter the password length: "))
    use_special_chars = input("Include special characters? (y/n): ").lower() == 'y'
    if length <= 0:
        print("Invalid length. Please enter a positive number.")
        return
    password = generate_password(length, use_special_chars)
    print(f"Generated Password: {password}")

if __name__ == "__main__":
    main()
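Because the problem statement stresses security, it is worth noting that Python's random module is not cryptographically secure. A minimal variant using the standard-library secrets module (a sketch, not part of the original project) could look like this:

import secrets
import string

def generate_secure_password(length, use_special_chars=True):
    # secrets.choice draws from the OS's cryptographically secure RNG
    characters = string.ascii_letters + string.digits
    if use_special_chars:
        characters += string.punctuation
    return ''.join(secrets.choice(characters) for _ in range(length))

# Example usage
print(generate_secure_password(16))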
6: Contact Book
Overview
A simple contact book where users can add, view, search, update, and delete contacts.
Contacts are stored in a file for persistence.
Problem Statement
Managing contacts manually is inefficient. This program provides an easy way to store and
retrieve contact information.
Learning Outcomes
● File handling (read, write, append)
● Dictionaries and lists
● Handling user input and displaying menu-based options
Code
import json

CONTACTS_FILE = "contacts.json"

def load_contacts():
    try:
        with open(CONTACTS_FILE, "r") as file:
            return json.load(file)
    except (FileNotFoundError, json.JSONDecodeError):
        return {}

def save_contacts(contacts):
    with open(CONTACTS_FILE, "w") as file:
        json.dump(contacts, file, indent=4)

def add_contact():
    name = input("Enter name: ")
    phone = input("Enter phone number: ")
    email = input("Enter email: ")
    contacts = load_contacts()
    contacts[name] = {"phone": phone, "email": email}
    save_contacts(contacts)
    print("Contact added successfully!")

def view_contacts():
    contacts = load_contacts()
    if not contacts:
        print("No contacts found.")
        return
    for name, details in contacts.items():
        print(f"\nName: {name}\nPhone: {details['phone']}\nEmail: {details['email']}")

def search_contact():
    name = input("Enter name to search: ")
    contacts = load_contacts()
    if name in contacts:
        print(f"\nName: {name}\nPhone: {contacts[name]['phone']}\nEmail: {contacts[name]['email']}")
    else:
        print("Contact not found.")

def delete_contact():
    name = input("Enter name to delete: ")
    contacts = load_contacts()
    if name in contacts:
        del contacts[name]
        save_contacts(contacts)
        print("Contact deleted successfully!")
    else:
        print("Contact not found.")

def main():
    while True:
        print("\n📞 Contact Book 📞")
        print("1. Add Contact")
        print("2. View Contacts")
        print("3. Search Contact")
        print("4. Delete Contact")
        print("5. Exit")
        choice = input("Choose an option: ")
        if choice == "1":
            add_contact()
        elif choice == "2":
            view_contacts()
        elif choice == "3":
            search_contact()
        elif choice == "4":
            delete_contact()
        elif choice == "5":
            print("Goodbye!")
            break
        else:
            print("Invalid choice. Please try again.")

if __name__ == "__main__":
    main()
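The overview mentions updating contacts, but the menu above only covers add, view, search, and delete. A hedged sketch of an update function that could be wired into the menu (the function name and menu slot are assumptions, not part of the original code):

def update_contact():
    # Hypothetical helper; reuses load_contacts/save_contacts from above
    name = input("Enter name to update: ")
    contacts = load_contacts()
    if name in contacts:
        contacts[name]["phone"] = input("Enter new phone number: ")
        contacts[name]["email"] = input("Enter new email: ")
        save_contacts(contacts)
        print("Contact updated successfully!")
    else:
        print("Contact not found.")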
7. Rock, Paper, Scissors Game
Overview
This project simulates the classic Rock, Paper, Scissors game between the user and
the computer. The program determines the winner based on the game rules.
Problem Statement
The user wants to play Rock, Paper, Scissors against a computer opponent. The
program should randomly select a choice for the computer and determine the winner.
Learning Outcomes
● Using the random module
● Handling user input
● Implementing conditional statements
Code
import random

def get_computer_choice():
    return random.choice(["rock", "paper", "scissors"])

def determine_winner(user, computer):
    if user == computer:
        return "It's a tie!"
    elif (user == "rock" and computer == "scissors") or \
         (user == "scissors" and computer == "paper") or \
         (user == "paper" and computer == "rock"):
        return "You win! 🎉"
    else:
        return "Computer wins! 😢"

def main():
    print("🎮 Rock, Paper, Scissors Game 🎮")
    user_choice = input("Enter rock, paper, or scissors: ").lower()
    if user_choice not in ["rock", "paper", "scissors"]:
        print("Invalid choice. Please try again.")
        return
    computer_choice = get_computer_choice()
    print(f"Computer chose: {computer_choice}")
    print(determine_winner(user_choice, computer_choice))

if __name__ == "__main__":
    main()
8. Simple Chatbot
Overview
This is a rule-based chatbot that responds to basic user inputs. It provides answers to
common questions and a default response for unknown inputs.
Problem Statement
Users want a simple chatbot to interact with and get automated responses to common
questions.
Learning Outcomes
● Using dictionaries for mapping responses
● Handling string comparisons
● Implementing a simple loop for interaction
Code
def chatbot_response(user_input):
    responses = {
        "hi": "Hello! How can I assist you today?",
        "how are you": "I'm just a bot, but I'm doing great! 😊",
        "what is your name": "I'm a chatbot created to help you.",
        "bye": "Goodbye! Have a great day! 👋"
    }
    return responses.get(user_input.lower(), "I'm not sure how to respond to that.")

def main():
    print("🤖 Simple Chatbot 🤖 (Type 'bye' to exit)")
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() == "bye":
            print("Chatbot: Goodbye! Have a nice day! 👋")
            break
        print(f"Chatbot: {chatbot_response(user_input)}")

if __name__ == "__main__":
    main()
9. Weather App
Overview
This project fetches real-time weather data using the OpenWeatherMap API and
displays temperature, humidity, and weather conditions.
Problem Statement
Users want a way to check real-time weather conditions for any location. The program
should retrieve and display weather data in a user-friendly format.
Learning Outcomes
● Making API requests using requests
● Handling JSON responses
● Processing user input for location search
Code
Note: You need an OpenWeatherMap API key to run this program. Get it
from OpenWeather.
import requests

API_KEY = "your_api_key_here"
BASE_URL = "https://api.openweathermap.org/data/2.5/weather"

def get_weather(city):
    params = {"q": city, "appid": API_KEY, "units": "metric"}
    response = requests.get(BASE_URL, params=params)
    if response.status_code == 200:
        data = response.json()
        weather_desc = data["weather"][0]["description"]
        temp = data["main"]["temp"]
        humidity = data["main"]["humidity"]
        return f"🌤️ {city.title()}: {weather_desc}, Temp: {temp}°C, Humidity: {humidity}%"
    else:
        return "City not found. Please try again."

def main():
    print("🌍 Weather App 🌍")
    city = input("Enter city name: ")
    print(get_weather(city))

if __name__ == "__main__":
    main()
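requests.get raises an exception when the network is unavailable, which the code above does not catch. A hedged variant of just the request step with basic error handling (a sketch, assuming the same BASE_URL and params dictionary as above):

import requests

def fetch_weather_data(params):
    # Wrap the request so network failures don't crash the program
    try:
        response = requests.get(BASE_URL, params=params, timeout=10)
        response.raise_for_status()
    except requests.exceptions.RequestException as exc:
        return f"Could not reach the weather service: {exc}"
    return response.json()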
10. File Organizer
Overview
This script organizes files in a folder based on file types, grouping them into separate
folders (e.g., PDFs, images, documents).
Problem Statement
Users often have cluttered folders with different types of files mixed together. The
program automatically sorts files into categorized folders.
Learning Outcomes
● Working with the os and shutil modules
● Organizing files based on extensions
● Automating repetitive tasks
Code
import os
import shutil

def organize_files(directory):
    if not os.path.exists(directory):
        print("Directory not found!")
        return
    file_types = {
        "Images": [".jpg", ".jpeg", ".png", ".gif"],
        "Documents": [".pdf", ".docx", ".txt", ".xlsx"],
        "Videos": [".mp4", ".avi", ".mov"],
        "Music": [".mp3", ".wav"],
        "Others": []
    }
    for file in os.listdir(directory):
        file_path = os.path.join(directory, file)
        if os.path.isfile(file_path):
            _, ext = os.path.splitext(file)
            folder = "Others"
            for category, extensions in file_types.items():
                if ext.lower() in extensions:
                    folder = category
                    break
            new_folder_path = os.path.join(directory, folder)
            os.makedirs(new_folder_path, exist_ok=True)
            shutil.move(file_path, os.path.join(new_folder_path, file))
    print("Files have been organized!")

def main():
    folder_path = input("Enter the folder path to organize: ")
    organize_files(folder_path)

if __name__ == "__main__":
    main()
11: Spam Email Detection
Overview:
This project focuses on classifying emails as spam or not spam using Natural Language
Processing (NLP) techniques and a classification model like Naïve Bayes or Logistic
Regression.
Problem Statement:
Spam emails clutter inboxes and may carry security risks. A machine learning-based model can
help filter spam emails automatically.
Learning Outcomes:
● Understanding text preprocessing techniques
● Implementing TF-IDF and Bag-of-Words models
● Applying classification algorithms like Naïve Bayes
Code:
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, classification_report

# Load dataset
data = pd.read_csv('spam.csv', encoding='latin-1')
data = data[['v1', 'v2']]
data.columns = ['label', 'message']

# Convert labels to binary
data['label'] = data['label'].map({'ham': 0, 'spam': 1})

# Feature extraction
vectorizer = TfidfVectorizer(stop_words='english')
X = vectorizer.fit_transform(data['message'])
y = data['label']

# Split data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train model
model = MultinomialNB()
model.fit(X_train, y_train)

# Predict
y_pred = model.predict(X_test)

# Evaluate
print(f"Accuracy: {accuracy_score(y_test, y_pred)}")
print(classification_report(y_test, y_pred))
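To use the classifier outside the train/test split, a new message must pass through the same fitted vectorizer before prediction. A minimal sketch (the sample message is an assumption):

# Classify a previously unseen message with the fitted vectorizer and model
new_message = ["Congratulations! You've won a free prize, click here to claim."]
new_features = vectorizer.transform(new_message)  # transform, not fit_transform
prediction = model.predict(new_features)[0]
print("spam" if prediction == 1 else "ham")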
12: Handwritten Digit Recognition
Overview:
This project uses a Convolutional Neural Network (CNN) to classify handwritten digits (0-9)
using the MNIST dataset.
Problem Statement:
Handwritten digit recognition is important for applications like automated check processing, form
digitization, and postal sorting. A CNN model can accurately classify handwritten digits.
Learning Outcomes:
● Understanding deep learning and CNN architecture
● Implementing data augmentation for better performance
● Training and evaluating deep learning models using TensorFlow/Keras
Code:
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist

# Load dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()

# Normalize data
X_train, X_test = X_train / 255.0, X_test / 255.0

# Reshape for CNN
X_train = X_train.reshape(-1, 28, 28, 1)
X_test = X_test.reshape(-1, 28, 28, 1)

# Define model
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax')
])

# Compile model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Train model
model.fit(X_train, y_train, epochs=5, validation_data=(X_test, y_test))

# Evaluate model
test_loss, test_acc = model.evaluate(X_test, y_test)
print(f"Test accuracy: {test_acc}")
13: Image Identification (Cats vs Dogs)
Overview:
This project focuses on classifying images of cats and dogs using a Convolutional Neural
Network (CNN). The model learns to differentiate between the two classes based on image
features like edges, textures, and shapes.
Problem Statement:
Identifying whether an image contains a cat or a dog is a fundamental computer vision task.
Traditional algorithms struggle with complex patterns in images, making deep learning a more
effective solution.
Learning Outcomes:
● Understanding CNN architecture (convolutional layers, pooling, fully connected layers)
● Preprocessing and augmenting image datasets
● Training and fine-tuning CNN models using TensorFlow/Keras
Code:
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Define directories for training and validation images
train_dir = 'dataset/train'
val_dir = 'dataset/validation'

# Data Augmentation & Preprocessing
train_datagen = ImageDataGenerator(rescale=1./255, shear_range=0.2,
                                   zoom_range=0.2, horizontal_flip=True)
val_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(train_dir,
                                                    target_size=(150, 150),
                                                    batch_size=32,
                                                    class_mode='binary')
val_generator = val_datagen.flow_from_directory(val_dir,
                                                target_size=(150, 150),
                                                batch_size=32,
                                                class_mode='binary')

# Build CNN Model
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    layers.MaxPooling2D(2, 2),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D(2, 2),
    layers.Conv2D(128, (3, 3), activation='relu'),
    layers.MaxPooling2D(2, 2),
    layers.Flatten(),
    layers.Dense(512, activation='relu'),
    layers.Dense(1, activation='sigmoid')  # Binary classification
])

# Compile model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train model
model.fit(train_generator, validation_data=val_generator, epochs=10)

# Save model
model.save("cats_vs_dogs_classifier.h5")
14: Text Generation (Poem Generator)
Overview:
This project builds an LSTM (Long Short-Term Memory) based text generator that learns from
existing poetry and generates new lines of text.
Problem Statement:
Writing creative text like poetry requires an understanding of context and sequential patterns.
Traditional models fail to maintain coherence in text generation, but LSTMs can effectively
capture long-range dependencies in language.
Learning Outcomes:
● Understanding Recurrent Neural Networks (RNNs) and LSTMs
● Tokenizing and preparing text for deep learning models
● Training an LSTM model for sequence generation
Code:
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Embedding, Dense

# Load dataset
text = open("poems.txt", "r").read().lower()

# Tokenize text
tokenizer = Tokenizer()
tokenizer.fit_on_texts([text])
total_words = len(tokenizer.word_index) + 1

# Create input sequences
input_sequences = []
for line in text.split("\n"):
    tokens = tokenizer.texts_to_sequences([line])[0]
    for i in range(1, len(tokens)):
        input_sequences.append(tokens[:i+1])

# Pad sequences
max_length = max([len(seq) for seq in input_sequences])
input_sequences = pad_sequences(input_sequences, maxlen=max_length, padding='pre')

# Split data
X, y = input_sequences[:, :-1], input_sequences[:, -1]
y = tf.keras.utils.to_categorical(y, num_classes=total_words)

# Build LSTM Model
model = Sequential([
    Embedding(total_words, 100, input_length=max_length-1),
    LSTM(150, return_sequences=True),
    LSTM(100),
    Dense(total_words, activation='softmax')
])

# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Train model
model.fit(X, y, epochs=50, verbose=1)

# Save model
model.save("poem_generator.h5")
15: Face Mask Detection
Overview:
This project builds a CNN-based face mask detector that classifies images as "Mask" or "No
Mask" using deep learning.
Problem Statement:
Due to the COVID-19 pandemic, wearing masks has become crucial in public places. Manually
monitoring people for mask compliance is inefficient. A deep learning-based model can
automatically detect whether a person is wearing a mask or not.
Learning Outcomes:
● Implementing transfer learning for better accuracy
● Using OpenCV for real-time detection
● Training and fine-tuning deep learning models
Code:
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Dense, Flatten, Dropout
from tensorflow.keras.models import Model
import cv2
import numpy as np

# Data preprocessing
train_dir = 'mask_dataset/train'
val_dir = 'mask_dataset/validation'

train_datagen = ImageDataGenerator(rescale=1./255, rotation_range=20,
                                   zoom_range=0.2, horizontal_flip=True)
val_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(train_dir,
                                                    target_size=(224, 224),
                                                    batch_size=32,
                                                    class_mode='binary')
val_generator = val_datagen.flow_from_directory(val_dir,
                                                target_size=(224, 224),
                                                batch_size=32,
                                                class_mode='binary')

# Load pre-trained MobileNetV2 model
base_model = MobileNetV2(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze base model layers
for layer in base_model.layers:
    layer.trainable = False

# Add custom layers
x = Flatten()(base_model.output)
x = Dense(128, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(1, activation='sigmoid')(x)

# Compile model
model = Model(inputs=base_model.input, outputs=x)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train model
model.fit(train_generator, validation_data=val_generator, epochs=10)

# Save model
model.save("mask_detector.h5")

# Load model for real-time detection
model = tf.keras.models.load_model("mask_detector.h5")

# Real-time detection using OpenCV
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    face = cv2.resize(frame, (224, 224))
    face = np.expand_dims(face, axis=0) / 255.0
    prediction = model.predict(face)
    label = "Mask" if prediction < 0.5 else "No Mask"
    color = (0, 255, 0) if label == "Mask" else (0, 0, 255)
    cv2.putText(frame, label, (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, color, 2)
    cv2.imshow("Mask Detector", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
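The real-time loop above classifies the whole webcam frame rather than the face region, which usually hurts accuracy. A hedged sketch of cropping detected faces first with OpenCV's bundled Haar cascade (the cascade path and detection parameters are standard OpenCV values, not from the original code):

import cv2
import numpy as np

# Haar cascade shipped with OpenCV for frontal face detection
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_faces(frame, model):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face = cv2.resize(frame[y:y+h, x:x+w], (224, 224))
        face = np.expand_dims(face, axis=0) / 255.0
        prob = float(model.predict(face, verbose=0)[0][0])
        label = "Mask" if prob < 0.5 else "No Mask"
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    return frame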
16: AI-Powered Story Generator (GPT-based)
Overview:
This project uses GPT (Generative Pre-trained Transformer) models to generate creative
stories based on user prompts. The model learns from vast datasets to understand language
patterns and generate human-like text.
Problem Statement:
Manually writing engaging stories is time-consuming. AI can assist writers by generating
coherent, creative, and unique story ideas, reducing the effort required for brainstorming.
Learning Outcomes:
● Understanding Transformer-based models like GPT
● Implementing OpenAI's API for text generation
● Fine-tuning AI-generated text for better coherence
Code:
import openai

# Set up API key (replace with your OpenAI key)
openai.api_key = "your-api-key-here"

def generate_story(prompt, max_length=200):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a creative storyteller."},
            {"role": "user", "content": prompt}
        ],
        max_tokens=max_length  # cap the length of the generated story
    )
    return response["choices"][0]["message"]["content"]

# Example usage
prompt = ("Once upon a time in a futuristic city, a young detective "
          "discovered a hidden AI controlling the world...")
story = generate_story(prompt)
print(story)
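Note that openai.ChatCompletion.create is the legacy interface; in openai Python library versions 1.0 and later the equivalent request goes through a client object. A hedged sketch of the same call with the newer client (assuming the library is installed at version 1.x):

from openai import OpenAI

client = OpenAI(api_key="your-api-key-here")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a creative storyteller."},
        {"role": "user", "content": "Write a short story about a hidden AI."}
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)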
17: AI-Powered Image Generation (Stable Diffusion)
Overview:
This project generates high-quality AI-generated images based on text prompts using Stable
Diffusion, a powerful generative AI model for image synthesis.
Problem Statement:
Creating unique images for digital art, marketing, or storytelling requires expertise. AI can help
artists and content creators generate images based on descriptions, saving time and effort.
Learning Outcomes:
● Understanding Diffusion models for image generation
● Using diffusers and transformers libraries
● Generating high-quality AI images from textual descriptions
Code:
from diffusers import StableDiffusionPipeline
import torch

# Load Stable Diffusion model
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.to("cuda")  # Use GPU for faster generation

# Generate image from text prompt
prompt = "A futuristic cityscape with flying cars at sunset"
image = pipe(prompt).images[0]

# Save and display the image
image.save("generated_image.png")
image.show()
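pipe.to("cuda") fails on machines without a CUDA GPU, and full-precision weights can exhaust GPU memory. A hedged variant of the loading step that picks the device at runtime and uses half precision on GPU (a sketch, not part of the original project):

import torch
from diffusers import StableDiffusionPipeline

# Fall back to CPU when no CUDA device is available
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
)
pipe.to(device)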
18: AI-Based Code Generator (Codex/GPT-4 Turbo)
Overview:
This project uses OpenAI's Codex (GPT-based model) to generate code from natural
language descriptions, assisting developers in writing Python, JavaScript, and other
programming languages.
Problem Statement:
Writing complex code requires knowledge and experience. AI-powered code generation can
assist developers by automatically generating code snippets based on simple instructions.
Learning Outcomes:
● Using AI models for automatic code generation
● Understanding prompt engineering for coding tasks
● Integrating AI-generated code into real-world applications
Code:
import openai

# Set up API key (replace with your OpenAI key)
openai.api_key = "your-api-key-here"

def generate_code(prompt, language="Python"):
    response = openai.ChatCompletion.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system", "content": f"You are an expert {language} programmer."},
            {"role": "user", "content": prompt}
        ]
    )
    return response["choices"][0]["message"]["content"]

# Example usage
prompt = "Write a Python function that sorts a list using the QuickSort algorithm."
code = generate_code(prompt)
print(code)
19. Sentiment Analysis of Social Media Posts
Overview:
In this project, students will build a sentiment analysis model that
classifies social media posts as positive, negative, or neutral.
Problem Statement:
With the growing volume of online content, it is challenging to
manually assess the sentiment of each post. This project will automate
sentiment classification to help analyze public opinions on various
topics.
Learning Outcomes:
● Learn how to preprocess text data (tokenization, stopword
removal).
● Understand sentiment analysis and its applications.
● Implement machine learning algorithms for text classification.
Code Example:
from textblob import TextBlob

def analyze_sentiment(post):
    analysis = TextBlob(post)
    if analysis.sentiment.polarity > 0:
        return "Positive"
    elif analysis.sentiment.polarity == 0:
        return "Neutral"
    else:
        return "Negative"

# Test with sample social media posts
post = "I love this new phone! It's amazing!"
print(analyze_sentiment(post))
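TextBlob's polarity score ranges from -1.0 (most negative) to 1.0 (most positive). A short, hedged example of running the same function over a small batch of posts (the sample posts are assumptions):

posts = [
    "This update completely ruined the app.",
    "Meh, it works I guess.",
    "Best customer service I've ever had!",
]
for p in posts:
    print(f"{analyze_sentiment(p):>8}: {p}")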
20. Image Caption Generator
Overview:
This project involves building an image caption generator using deep
learning techniques, where the model generates a descriptive caption
for any given image.
Problem Statement:
Currently, visually impaired people struggle to understand images. An
image caption generator can assist them by providing textual
descriptions of what’s in an image.
Learning Outcomes:
● Understand Convolutional Neural Networks (CNNs) for image
processing.
● Learn how to combine CNN with Recurrent Neural Networks (RNNs)
for sequence generation.
● Get hands-on experience in deep learning with Keras or
TensorFlow.
Code Example:
from keras.preprocessing.image import load_img, img_to_array
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.models import Model
import numpy as np

# Load pre-trained VGG16 model
base_model = VGG16(weights='imagenet')
model = Model(inputs=base_model.input, outputs=base_model.get_layer('fc2').output)

# Function to extract features from an image
def extract_features(image_path):
    image = load_img(image_path, target_size=(224, 224))
    image = img_to_array(image)
    image = np.expand_dims(image, axis=0)
    image = preprocess_input(image)
    features = model.predict(image)
    return features

# Example: Extract features from an image
features = extract_features('example_image.jpg')
print(features)
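The snippet above only extracts 4096-dimensional image features; the captioning step that combines them with an RNN is not shown. A hedged architectural sketch of such a decoder (vocabulary size, maximum caption length, and embedding size are placeholder values, and training it would also require a captioned dataset such as Flickr8k):

from keras.layers import Input, Dense, Dropout, Embedding, LSTM, add
from keras.models import Model

vocab_size = 5000      # placeholder vocabulary size
max_caption_len = 34   # placeholder maximum caption length

# Image branch: the 4096-d VGG16 'fc2' features extracted above
img_input = Input(shape=(4096,))
img_dense = Dense(256, activation='relu')(Dropout(0.5)(img_input))

# Text branch: the caption generated so far, fed one word at a time
txt_input = Input(shape=(max_caption_len,))
txt_embed = Embedding(vocab_size, 256, mask_zero=True)(txt_input)
txt_lstm = LSTM(256)(Dropout(0.5)(txt_embed))

# Merge both branches and predict the next word of the caption
merged = add([img_dense, txt_lstm])
output = Dense(vocab_size, activation='softmax')(Dense(256, activation='relu')(merged))

caption_model = Model(inputs=[img_input, txt_input], outputs=output)
caption_model.compile(loss='categorical_crossentropy', optimizer='adam')
caption_model.summary()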