
DEPARTMENT OF

COMPUTER SCIENCE & ENGINEERING

Experiment 3
Student Name: Heemaal Jaglan            UID: 22BCS14205
Branch: BE-CSE                          Section/Group: 903-A
Semester: 6th                           Date of Performance: 23/01/25
Subject Name: Deep Learning Lab         Subject Code: 22CSP-359

1. Aim: To implement a linear classifier using Python and demonstrate its application in
binary or multi-class classification.

2. Objective:
• Understand the concept of linear classifiers and their mathematical formulation (a minimal sketch of the underlying decision rule is given after this list).
• Implement a linear classifier for a given dataset and evaluate its performance.
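
As background for the mathematical formulation mentioned above, the short sketch below shows the basic linear decision rule, predict class 1 when w·x + b >= 0, implemented directly with NumPy. The weight vector and bias here are illustrative placeholder values, not parameters learned in this experiment.

import numpy as np

# Illustrative (hypothetical) weight vector and bias for a 2-feature linear classifier
w = np.array([0.8, -0.5])
b = 0.1

def linear_predict(X, w, b):
    # Decision rule: predict class 1 when w.x + b >= 0, otherwise class 0
    scores = X @ w + b
    return (scores >= 0).astype(int)

# Example: classify two sample points described by two features each
X_sample = np.array([[5.1, 3.5], [6.7, 3.0]])
print(linear_predict(X_sample, w, b))   # prints [1 1] for these placeholder values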

3. Input:
• Python installed with libraries: NumPy, pandas, matplotlib, scikit-learn.
• Dataset (e.g., Iris dataset, Breast Cancer dataset, or any binary/multi-class dataset).
• Python IDE or Jupyter Notebook.
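The libraries listed above can typically be installed with pip, for example: pip install numpy pandas matplotlib scikit-learn.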

4. Algorithm:
1. Define the Problem: Identify the dataset and the classes for classification.
2. Preprocess the Data: Load the dataset, clean the data, and split it into training and test sets.
3. Implement the Classifier:
o Use a linear model (e.g., Logistic Regression or Perceptron; a small Perceptron sketch is shown after this list).
o Train the model on the training dataset.
o Predict the outcomes for the test dataset.
4. Evaluate the Model:
o Calculate performance metrics (e.g., accuracy, precision, recall).
o Visualize the decision boundary (for 2D datasets).
5. Interpret Results: Observe the classifier's performance and draw conclusions.
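
The algorithm mentions the Perceptron as an alternative linear model. The fragment below is a minimal sketch of that option using scikit-learn's Perceptron class on a binary version of the Iris dataset; it follows the same steps as above but is not part of the graded implementations in Section 5.

from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Prepare a binary version of the Iris dataset (class 0 vs the rest)
iris = load_iris()
X, y = iris.data, (iris.target != 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Train and evaluate the Perceptron, an alternative linear classifier
clf = Perceptron(max_iter=1000, random_state=42)
clf.fit(X_train, y_train)
print("Perceptron accuracy:", accuracy_score(y_test, clf.predict(X_test)))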

5. Code/Implementation:

import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
import matplotlib.pyplot as plt
# Load the Iris dataset
iris = load_iris()
X = iris.data[:, :2] # Take the first two features for visualization
y = (iris.target != 0).astype(int) # Binary classification (class 0 vs rest)
# Split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Train a logistic regression model
model = LogisticRegression()
model.fit(X_train, y_train)
# Predict and evaluate
y_pred = model.predict(X_test)
accuracy = accuracy_score(y_test, y_pred)
# Display results
print(f"Accuracy: {accuracy:.2f}")
print("Confusion Matrix:")
print(confusion_matrix(y_test, y_pred))
print("Classification Report:")
print(classification_report(y_test, y_pred))
# Visualize decision boundary
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.01), np.arange(y_min, y_max, 0.01))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolor='k', marker='o')
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
plt.title("Decision Boundary of Logistic Regression")
plt.show()
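
To connect the trained model back to the linear formulation w1*x1 + w2*x2 + b = 0, the learned weights and intercept can be inspected. This is a small optional addition, assuming the fitted model from the listing above.

# Inspect the learned linear decision boundary parameters
print("Weights (w):", model.coef_)
print("Intercept (b):", model.intercept_)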

6. Output:

Implementation 2:


import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
# Step 1: Generate Synthetic Data
X, y = make_classification(n_samples=500, n_features=4, n_informative=3, n_redundant=1,
n_classes=2, random_state=42)
# Step 2: Split the Data into Training and Testing Sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Step 3: Create and Train the Logistic Regression Model
model = LogisticRegression()
model.fit(X_train, y_train)
# Step 4: Make Predictions
y_pred = model.predict(X_test)
# Step 5: Evaluate the Model
accuracy = accuracy_score(y_test, y_pred)
conf_matrix = confusion_matrix(y_test, y_pred)
report = classification_report(y_test, y_pred)
# Display the results
print("Accuracy of the Linear Classifier:", accuracy)
print("\nConfusion Matrix:\n", conf_matrix)
print("\nClassification Report:\n", report)

Output:

7. Learning outcomes:
o Learned about machine learning models, in particular linear classifiers.
o Learned about confusion matrix creation and accuracy calculation.
o Learned about using various Python libraries related to machine learning and deep learning.
o Learned about the fundamentals of image processing in deep learning.
