EX.No: 1
Implementation of basic image processing operations including Feature
Representation and Feature Extraction
Aim:
The aim of this program is to perform basic image processing operations, including Feature
Representation and Feature Extraction. Specifically, the program will:
1. Detect edges in an image using the Canny edge detector.
2. Extract corners using Harris Corner detection as features of the image.
3. Display the results, including the original image, edge-detected image, and image with
extracted features.
Algorithm:
1. Load Image: Load the input image from the file system.
2. Convert to Grayscale: Convert the image to grayscale to simplify processing, as color
information isn't necessary for edge detection and corner detection.
3. Edge Detection: Apply the Canny edge detection algorithm to highlight the boundaries
(edges) in the image.
4. Feature Extraction (Corners): Use Harris Corner Detection to identify key points (corners)
in the image. These are points where there is a significant change in intensity.
5. Display Results: Show the original image, edge-detected image, and the image with
corners highlighted.
6. Save Results (Optional): Save the images of the edges and features as output files.
Program:
import cv2
import numpy as np
import matplotlib.pyplot as plt
#Step 1: Load the image
image = cv2.imread('image.jpg')  # Replace 'image.jpg' with your image path
if image is None:
    raise FileNotFoundError("Could not read 'image.jpg'")
#Step 2: Convert the image to grayscale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
#Step 3: Apply edge detection (using Canny)
edges = cv2.Canny(gray_image, threshold1=100, threshold2=200)
#Step 4: Feature extraction using Harris Corner Detection
#Convert image to float32 for Harris detection
gray_float = np.float32(gray_image)
dst = cv2.cornerHarris(gray_float, blockSize=2, ksize=3, k=0.04)
#Dilate to mark the corners
dst = cv2.dilate(dst, None)
#Step 5: Mark the corners in the original image
image_with_corners = image.copy()
image_with_corners[dst > 0.01 * dst.max()] = [0, 0, 255]  # Mark corners in red (BGR)
#Step 6: Display the results
plt.figure(figsize=(12, 4))
#Original image
plt.subplot(1, 3, 1)
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.title("Original Image")
plt.axis('off')
#Edge-detected image
plt.subplot(1, 3, 2)
plt.imshow(edges, cmap='gray')
plt.title("Edge Detection (Canny)")
plt.axis('off')
#Image with corners marked
plt.subplot(1, 3, 3)
plt.imshow(cv2.cvtColor(image_with_corners, cv2.COLOR_BGR2RGB))
plt.title("Feature Extraction (Corners)")
plt.axis('off')
plt.show()
#Step 7: Save the results (optional)
cv2.imwrite('edges_output.jpg', edges)
cv2.imwrite('corners_output.jpg', image_with_corners)
Result:
This program performs basic image processing operations, including edge detection (using the
Canny edge detector) and feature extraction (using Harris Corner detection), and displays the
results with the detected features highlighted in the images.
EX.No: 2
Implementation of a simple neural network
Aim:
The aim of this task is to implement a simple neural network using Python. The neural
network will be designed to classify data based on a simple dataset (like the Iris dataset or a
basic binary classification problem). This example will use Keras (a high-level neural
network API) and TensorFlow as the backend for creating the neural network.
Algorithm:
1. Import Required Libraries: Import libraries like TensorFlow, Keras, and other necessary
modules for neural network creation.
2. Load and Preprocess Data: Load a dataset for classification, and preprocess it
(normalize, split into training and testing sets).
3. Build Neural Network Model: Define the architecture of the neural network (input layer,
hidden layers, output layer).
Use activation functions like ReLU for hidden layers and softmax or sigmoid for the output
layer depending on the problem (multi-class or binary).
4. Compile the Model: Choose a loss function and optimizer (e.g., categorical_crossentropy
for multi-class classification or binary_crossentropy for binary classification).
5. Train the Model: Use the training data to train the neural network.
6. Evaluate the Model: Test the trained model on unseen test data to measure its performance.
7. Output Results: Display accuracy and loss metrics.
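The computation performed by the Dense layers in step 3 can be illustrated with plain NumPy before turning to Keras. This is only a sketch of the forward pass for the 4 → 10 → 8 → 3 architecture used below; the weights here are random placeholders, not trained values:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # ReLU activation: zero out negative values
    return np.maximum(0, x)

def softmax(x):
    # Subtract the row max for numerical stability
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Random placeholder weights matching the 4 -> 10 -> 8 -> 3 architecture
W1, b1 = rng.normal(size=(4, 10)), np.zeros(10)
W2, b2 = rng.normal(size=(10, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 3)), np.zeros(3)

x = np.array([[5.1, 3.5, 1.4, 0.2]])   # one Iris-like sample (4 features)
h1 = relu(x @ W1 + b1)                 # first hidden layer
h2 = relu(h1 @ W2 + b2)                # second hidden layer
probs = softmax(h2 @ W3 + b3)          # class probabilities

print(probs, probs.sum())  # the three probabilities sum to 1
```

Training (step 5) is the process of adjusting these weights so that the output probabilities match the one-hot encoded labels; Keras handles that automatically in the program below.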
Program:
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical
import tensorflow as tf
#Step 1: Load the Iris dataset
iris = datasets.load_iris()
X = iris.data    #Features
y = iris.target  #Labels
#Step 2: Preprocess the data
#Convert labels to one-hot encoding
y_encoded = to_categorical(y, num_classes=3)
#Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y_encoded, test_size=0.2, random_state=42)
#Step 3: Build the Neural Network Model
model = Sequential()
#Input layer and first hidden layer with 10 neurons and ReLU activation
model.add(Dense(10, input_dim=4, activation='relu'))
#Second hidden layer with 8 neurons and ReLU activation
model.add(Dense(8, activation='relu'))
#Output layer with 3 neurons (one for each class) and softmax activation
model.add(Dense(3, activation='softmax'))
#Step 4: Compile the model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
#Step 5: Train the model
model.fit(X_train, y_train, epochs=100, batch_size=5, verbose=1)
#Step 6: Evaluate the model
loss, accuracy = model.evaluate(X_test, y_test)
#Output results
print(f"Test Loss: {loss:.4f}")
print(f"Test Accuracy: {accuracy:.4f}")
Result:
1. Training Progress: During the training, you will see the loss and accuracy for each epoch.
As training progresses, the model improves, and the loss decreases while accuracy increases.
2. Test Accuracy: After training, the model's accuracy on the test set is displayed. The
higher the accuracy, the better the model has learned to classify the data. For this example,
we may see an accuracy of around 96.67% on the test set, which indicates that the model is
performing well.
3. Loss: The test loss is also reported, showing how far off the model's predictions were from
the actual labels on the test data. A lower test loss indicates a better-performing model.
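To turn the softmax output into a predicted class label, take the argmax of the probabilities for each sample (in Keras this would be model.predict(X_test).argmax(axis=1)). The probability values below are made up purely for illustration:

```python
import numpy as np

# Hypothetical softmax outputs for two test samples (3 Iris classes)
probs = np.array([[0.05, 0.90, 0.05],
                  [0.80, 0.15, 0.05]])

# Index of the highest probability is the predicted class
predicted = probs.argmax(axis=1)
print(predicted)  # [1 0]
```

Comparing these predicted labels against the true labels is exactly how the test accuracy reported by model.evaluate is computed.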