Code Iris

The following Python code trains a linear classifier using scikit-learn's SGDClassifier (with perceptron loss) on a small two-feature dataset. It fits two models, one with an L1 penalty and one with an L2 penalty, evaluates each on a held-out test set, and prints the resulting accuracies. The dataset is split 70/30 into training and testing sets with train_test_split.


import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Provided dataset
X = np.array([
[5.2, 2.7], [5.8, 2.7], [5.0, 2.0], [6.4, 2.9], [5.0, 3.5], [6.9, 3.1], [5.8, 2.7], [6.0, 2.2],
[4.7, 3.2], [6.3, 2.3], [6.7, 2.5], [6.3, 2.7], [6.7, 3.1], [4.9, 2.4], [5.8, 4.0], [6.4, 3.2],
[5.9, 3.2], [5.1, 3.5], [5.4, 3.9], [6.1, 2.9], [4.8, 3.4], [5.5, 3.5], [5.0, 3.2], [5.0, 3.4],
[7.0, 3.2], [5.7, 4.4], [4.8, 3.1], [5.0, 3.5], [6.2, 2.8], [6.3, 2.5], [7.9, 3.8], [5.6, 2.9],
[7.7, 2.8], [5.9, 3.0], [7.2, 3.0], [7.7, 3.8], [4.9, 2.5], [6.3, 3.3], [6.1, 3.0], [5.1, 3.4],
[6.3, 2.9], [5.6, 2.8], [5.8, 2.7], [6.3, 2.5], [4.8, 3.0], [5.0, 3.0], [6.8, 2.8], [6.5, 3.0],
[4.9, 3.0], [7.2, 3.6], [5.5, 2.6], [4.8, 3.4], [5.4, 3.0], [7.3, 2.9], [6.7, 3.3], [6.0, 2.2],
[5.1, 3.8], [4.9, 3.1], [6.7, 3.3], [4.4, 3.2], [6.5, 3.2], [6.0, 2.9], [5.5, 4.2], [5.2, 4.1],
[6.4, 2.8], [5.5, 2.4], [5.6, 3.0], [5.1, 3.3], [4.8, 3.0], [5.6, 2.5], [6.1, 2.8], [7.7, 2.6],
[5.9, 3.0], [6.6, 3.0], [4.4, 3.0], [6.0, 3.4], [6.9, 3.1], [6.7, 3.0], [6.9, 3.1], [5.1, 3.7],
[6.7, 3.0], [6.7, 3.1], [6.3, 3.3], [6.8, 3.0], [5.1, 2.5], [4.9, 3.1], [4.6, 3.2], [5.4, 3.4],
[6.4, 2.8], [6.2, 3.4], [4.6, 3.1], [6.0, 3.0], [4.6, 3.4], [5.0, 2.3], [7.6, 3.0], [5.7, 2.8],
[6.2, 2.9], [4.3, 3.0], [6.5, 3.0], [4.9, 3.1], [6.8, 3.2], [6.2, 2.2], [6.3, 2.8], [5.0, 3.4],
[6.9, 3.2], [5.1, 3.8], [5.3, 3.7], [5.7, 2.5], [6.1, 3.0], [6.6, 2.9], [5.7, 2.8], [4.7, 3.2],
[6.4, 2.7], [5.5, 2.3], [5.7, 3.0], [7.7, 3.0], [7.4, 2.8], [6.4, 3.2], [5.5, 2.4], [5.5, 2.5],
[5.1, 3.8], [7.1, 3.0], [6.5, 3.0], [5.8, 2.8], [5.8, 2.7], [5.1, 3.5], [5.0, 3.3], [4.5, 2.3],
[6.7, 3.1], [5.4, 3.9], [7.2, 3.2], [6.1, 2.8], [6.4, 3.1], [6.5, 2.8], [6.0, 2.7], [4.6, 3.6],
[5.6, 3.0], [6.1, 2.6], [5.7, 2.9], [6.3, 3.4], [5.2, 3.4], [5.6, 2.7], [5.4, 3.7], [5.8, 2.6],
[5.4, 3.4], [5.0, 3.6], [5.7, 3.8], [5.2, 3.5], [4.4, 2.9]
])

y = np.array([
1, 2, 1, 1, 0, 2, 2, 1, 0, 1, 2, 2, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0,
1, 0, 0, 0, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 0, 2, 2, 1, 1, 0, 0, 1, 2,
0, 2, 1, 0, 1, 2, 2, 2, 0, 0, 2, 0, 2, 1, 0, 0, 2, 1, 1, 0, 0, 1, 1, 2,
1, 1, 0, 1, 1, 1, 2, 0, 2, 1, 2, 2, 1, 0, 0, 0, 2, 2, 0, 2, 0, 1, 2, 1,
1, 0, 2, 0, 2, 1, 2, 1, 0, 2, 0, 0, 2, 1, 1, 1, 0, 2, 1, 1, 2, 2, 2, 1,
1, 0, 2, 2, 2, 1, 0, 0, 0, 2, 0, 2, 1, 2, 1, 1, 0, 1, 2, 1, 2, 0, 1, 0,
1, 0, 0, 0, 0, 0
])

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Train with L1 penalty
clf_l1 = SGDClassifier(loss='perceptron', penalty='l1', alpha=0.0001, max_iter=1000, tol=1e-3,
                       random_state=42)
clf_l1.fit(X_train, y_train)
y_pred_l1 = clf_l1.predict(X_test)
accuracy_l1 = accuracy_score(y_test, y_pred_l1)

# Train with L2 penalty
clf_l2 = SGDClassifier(loss='perceptron', penalty='l2', alpha=0.0001, max_iter=1000, tol=1e-3,
                       random_state=42)
clf_l2.fit(X_train, y_train)
y_pred_l2 = clf_l2.predict(X_test)
accuracy_l2 = accuracy_score(y_test, y_pred_l2)

print(f"Accuracy with L1 penalty: {accuracy_l1}")
print(f"Accuracy with L2 penalty: {accuracy_l2}")
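
Beyond accuracy, the two penalties differ in the weights they learn: L1 tends to drive some coefficients to exactly zero (feature selection), while L2 only shrinks them. The sketch below (not part of the original assignment) illustrates this on the built-in Iris dataset with a deliberately large alpha so the sparsity effect is visible; the exact zero counts depend on alpha and the data.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier

# Built-in Iris data (4 features, 3 classes) stands in for the dataset above.
X, y = load_iris(return_X_y=True)

# Same estimator as in the code above, but with a larger alpha so the
# regularization effect on the coefficients is easy to see.
clf_l1 = SGDClassifier(loss='perceptron', penalty='l1', alpha=0.1,
                       max_iter=1000, tol=1e-3, random_state=42).fit(X, y)
clf_l2 = SGDClassifier(loss='perceptron', penalty='l2', alpha=0.1,
                       max_iter=1000, tol=1e-3, random_state=42).fit(X, y)

# Count coefficients that L1 zeroed out versus L2.
print("Zero coefficients with L1 penalty:", int(np.sum(clf_l1.coef_ == 0)))
print("Zero coefficients with L2 penalty:", int(np.sum(clf_l2.coef_ == 0)))
```

Inspecting `clf.coef_` this way is a quick check on what the penalty is doing; with a small alpha such as 0.0001, as in the code above, both models will usually keep all coefficients nonzero.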
