Deep Learning Lab
Experiment 1 Solution to XOR problem using DNN
Date
AIM:
To solve the XOR problem using a DNN
STEPS:
1 Define the XOR input and output data
2 Define the DNN with two hidden ReLU layers and a sigmoid output
3 Compile the model with binary cross-entropy loss and the Adam optimizer
4 Train the model for 1000 epochs and print the rounded predictions
PROGRAM
import tensorflow as tf
import numpy as np

# Define the XOR input and output data
x_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y_data = np.array([[0], [1], [1], [0]], dtype=np.float32)

# Define the DNN architecture
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, input_dim=2, activation='relu'),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the DNN
model.fit(x_data, y_data, epochs=1000, verbose=0)

# Test the trained DNN
predictions = model.predict(x_data)
rounded_predictions = np.round(predictions)
print("Predictions:", rounded_predictions)
OUTPUT
Predictions: [[0.]
[1.]
[1.]
[0.]]
RESULT
Thus the XOR problem is solved using a DNN.
Experiment 2 Character recognition using CNN
Date
AIM:
To implement character recognition using CNN
STEPS
1 Load the MNIST dataset of handwritten digits
2 Reshape the images to 28x28x1 and scale pixel values to [0, 1]
3 Define the CNN with two convolution-pooling stages, a dense layer and a softmax output
4 Compile the model with sparse categorical cross-entropy loss and the Adam optimizer
5 Train the model and evaluate it on the test set
PROGRAM
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Load the MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Preprocess the data
x_train = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0

# Define the CNN architecture
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))

# Compile the model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train and evaluate (these lines are reconstructed from the training
# log below: 5 epochs, 938 steps per epoch, i.e. batch size 64)
model.fit(x_train, y_train, epochs=5, batch_size=64,
          validation_data=(x_test, y_test))
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test Loss:', test_loss)
print('Test Accuracy:', test_acc)
OUTPUT
Epoch 1/5
938/938 [==============================] - 21s 21ms/step - loss: 0.1687 - accuracy: 0.9493 - val_loss: 0.0792 - val_accuracy: 0.9755
Epoch 2/5
938/938 [==============================] - 19s 21ms/step - loss: 0.0511 - accuracy: 0.9847 - val_loss: 0.0426 - val_accuracy: 0.9855
Epoch 3/5
938/938 [==============================] - 20s 21ms/step - loss: 0.0365 - accuracy: 0.9884 - val_loss: 0.0308 - val_accuracy: 0.9900
Epoch 4/5
938/938 [==============================] - 20s 21ms/step - loss: 0.0274 - accuracy: 0.9915 - val_loss: 0.0319 - val_accuracy: 0.9889
Epoch 5/5
938/938 [==============================] - 20s 21ms/step - loss: 0.0230 - accuracy: 0.9927 - val_loss: 0.0353 - val_accuracy: 0.9901
313/313 [==============================] - 1s 4ms/step - loss: 0.0353 - accuracy: 0.9901
Test Loss: 0.03527578338980675
Test Accuracy: 0.9901000261306763
RESULT
Thus character recognition using CNN is implemented.
Experiment 3 Face recognition using CNN
Date
AIM:
To implement face recognition using CNN
STEPS
1 Set the path to the directory containing the face images
2 Iterate over the face image directory and load the images and their labels
3 Convert the data to numpy arrays, preprocess the labels, and convert the labels to one-hot encoded vectors
4 Split the data into training and validation sets
5 Compile and train the CNN model, then save the trained model
PROGRAM
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from sklearn.model_selection import train_test_split

# Set the path to the directory containing the face images
faces_dir = "D:/R2021 DL LAB/Faces/Faces"
OUTPUT
Epoch 1/10
65/65 [==============================] - 5s 68ms/step - loss: 215.6209 - accuracy: 0.0098 - val_loss: 4.7830 - val_accuracy: 0.0039
Epoch 2/10
65/65 [==============================] - 4s 66ms/step - loss: 4.7793 - accuracy: 0.0112 - val_loss: 4.7757 - val_accuracy: 0.0039
Epoch 3/10
65/65 [==============================] - 4s 66ms/step - loss: 4.7717 - accuracy: 0.012
RESULT
Thus face recognition using CNN is implemented.
Experiment 4 Language modeling using RNN
Date
AIM:
To implement Language modeling using RNN
STEPS
1 Define the sample text and build character-to-index and index-to-character mappings
2 Convert the text to a sequence of character indices
3 Create fixed-length input-output pairs for training
4 Build, compile and train the RNN model
5 Generate text from the trained model
PROGRAM
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SimpleRNN, Dense

# Sample text data
text = "This is a sample text for language modeling using RNN."

# Create a set of unique characters in the text
chars = sorted(set(text))
char_to_index = {char: index for index, char in enumerate(chars)}
index_to_char = {index: char for index, char in enumerate(chars)}

# Convert text to a sequence of character indices
text_indices = [char_to_index[char] for char in text]

# Create input-output pairs for training
seq_length = 20
sequences = []
next_char = []
for i in range(0, len(text_indices) - seq_length):
    sequences.append(text_indices[i : i + seq_length])
    next_char.append(text_indices[i + seq_length])
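Note: the model definition, training and generation code is missing here. The completion below is a minimal sketch consistent with the imports above and the 50-epoch log below; the embedding and RNN sizes, the single-batch training, and the sampling-based generation are assumptions, not details taken from the original.

# Prepare the training arrays
X = np.array(sequences)
y = np.array(next_char)

# Embedding -> SimpleRNN -> softmax over the character vocabulary
model = Sequential([
    Embedding(input_dim=len(chars), output_dim=16, input_length=seq_length),
    SimpleRNN(128),
    Dense(len(chars), activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# One batch per epoch, matching the 1/1 steps shown in the log below
model.fit(X, y, epochs=50, batch_size=len(X))

# Sample text from the trained model, seeded with the start of the corpus
generated = text[:seq_length]
for _ in range(80):
    seq = [char_to_index[c] for c in generated[-seq_length:]]
    probs = model.predict(np.array([seq]), verbose=0)[0]
    probs = probs.astype('float64')
    probs /= probs.sum()  # renormalize so the probabilities sum to 1
    next_index = int(np.random.choice(len(chars), p=probs))
    generated += index_to_char[next_index]
print(generated)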
OUTPUT
Epoch 1/50
1/1 [==============================] - 1s 1s/step - loss: 3.0885
Epoch 2/50
1/1 [==============================] - 0s 8ms/step - loss: 3.0053
Epoch 3/50
1/1 [==============================] - 0s 14ms/step - loss: 2.9234
Epoch 4/50
1/1 [==============================] - 0s 0s/step - loss: 2.8392
Epoch 5/50
1/1 [==============================] - 0s 17ms/step - loss: 2.7501
Epoch 6/50
1/1 [==============================] - 0s 0s/step - loss: 2.6545
Epoch 7/50
1/1 [==============================] - 0s 4ms/step - loss: 2.5519
Epoch 8/50
1/1 [==============================] - 0s 14ms/step - loss: 2.4425
Epoch 9/50
1/1 [==============================] - 0s 0s/step - loss: 2.3266
Epoch 10/50
1/1 [==============================] - 0s 18ms/step - loss: 2.2063
Epoch 11/50
1/1 [==============================] - 0s 8ms/step - loss: 2.0865
Epoch 12/50
1/1 [==============================] - 0s 5ms/step - loss: 1.9717
Epoch 13/50
1/1 [==============================] - 0s 0s/step - loss: 1.8622
Epoch 14/50
1/1 [==============================] - 0s 4ms/step - loss: 1.7552
Epoch 15/50
1/1 [==============================] - 0s 13ms/step - loss: 1.6493
Epoch 16/50
1/1 [==============================] - 0s 0s/step - loss: 1.5457
Epoch 17/50
1/1 [==============================] - 0s 17ms/step - loss: 1.4472
Epoch 18/50
1/1 [==============================] - 0s 0s/step - loss: 1.3554
Epoch 19/50
1/1 [==============================] - 0s 17ms/step - loss: 1.2678
Epoch 20/50
1/1 [==============================] - 0s 0s/step - loss: 1.1810
Epoch 21/50
1/1 [==============================] - 0s 17ms/step - loss: 1.0964
Epoch 22/50
1/1 [==============================] - 0s 14ms/step - loss: 1.0179
Epoch 23/50
1/1 [==============================] - 0s 1ms/step - loss: 0.9459
Epoch 24/50
1/1 [==============================] - 0s 16ms/step - loss: 0.8773
Epoch 25/50
1/1 [==============================] - 0s 0s/step - loss: 0.8107
Epoch 26/50
1/1 [==============================] - 0s 17ms/step - loss: 0.7473
Epoch 27/50
1/1 [==============================] - 0s 0s/step - loss: 0.6884
Epoch 28/50
1/1 [==============================] - 0s 17ms/step - loss: 0.6333
Epoch 29/50
1/1 [==============================] - 0s 0s/step - loss: 0.5809
Epoch 30/50
1/1 [==============================] - 0s 2ms/step - loss: 0.5318
Epoch 31/50
1/1 [==============================] - 0s 17ms/step - loss: 0.4871
Epoch 32/50
1/1 [==============================] - 0s 0s/step - loss: 0.4469
Epoch 33/50
1/1 [==============================] - 0s 18ms/step - loss: 0.4099
Epoch 34/50
1/1 [==============================] - 0s 0s/step - loss: 0.3753
Epoch 35/50
1/1 [==============================] - 0s 18ms/step - loss: 0.3430
Epoch 36/50
1/1 [==============================] - 0s 0s/step - loss: 0.3134
Epoch 37/50
1/1 [==============================] - 0s 15ms/step - loss: 0.2865
Epoch 38/50
1/1 [==============================] - 0s 0s/step - loss: 0.2621
Epoch 39/50
1/1 [==============================] - 0s 2ms/step - loss: 0.2399
Epoch 40/50
1/1 [==============================] - 0s 15ms/step - loss: 0.2200
Epoch 41/50
1/1 [==============================] - 0s 1ms/step - loss: 0.2021
Epoch 42/50
1/1 [==============================] - 0s 18ms/step - loss: 0.1860
Epoch 43/50
1/1 [==============================] - 0s 0s/step - loss: 0.1714
Epoch 44/50
1/1 [==============================] - 0s 16ms/step - loss: 0.1580
Epoch 45/50
1/1 [==============================] - 0s 0s/step - loss: 0.1460
Epoch 46/50
1/1 [==============================] - 0s 4ms/step - loss: 0.1353
Epoch 47/50
1/1 [==============================] - 0s 12ms/step - loss: 0.1257
Epoch 48/50
1/1 [==============================] - 0s 933us/step - loss: 0.1170
Epoch 49/50
1/1 [==============================] - 0s 17ms/step - loss: 0.1090
Epoch 50/50
1/1 [==============================] - 0s 0s/step - loss: 0.1017
This is a sample tentrfornlanguags modnging nsing Rgn.rginsrngangrngangnoggrng
nsingrnging ndgg nsinorng ngrngadgsinorng
RESULT
Thus Language modeling using RNN is implemented.
Experiment 5 Sentiment analysis using LSTM
Date
AIM:
To implement Sentiment analysis using LSTM
STEPS
1 Load the IMDB dataset, which consists of movie reviews labeled with positive or
negative sentiment.
2 Preprocess the data by padding sequences to a fixed length (max_review_length)
and limiting the vocabulary size to the most frequent words (num_words).
3 Build an LSTM-based model. The Embedding layer is used to map word indices to
dense vectors, the LSTM layer captures sequence dependencies, and the Dense layer
produces a binary sentiment prediction.
4 The model is compiled with binary cross-entropy loss and the Adam optimizer.
5 Train the model using the training data. Finally, we evaluate the model on the test
data and print the test accuracy.
PROGRAM
import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
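Note: only the imports survive in this record. The sketch below fills in steps 1-5 from the STEPS list above; num_words, max_review_length and the layer sizes are common illustrative choices, while the 5 epochs and batch size of 64 are inferred from the training log below (391 steps over 25000 reviews).

# Steps 1-2: load the IMDB data and pad the reviews to a fixed length
num_words = 5000           # keep only the most frequent words (assumed value)
max_review_length = 500    # pad/truncate reviews to this length (assumed value)
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=num_words)
x_train = sequence.pad_sequences(x_train, maxlen=max_review_length)
x_test = sequence.pad_sequences(x_test, maxlen=max_review_length)

# Step 3: embedding -> LSTM -> sigmoid for binary sentiment
model = Sequential([
    Embedding(num_words, 32, input_length=max_review_length),
    LSTM(100),
    Dense(1, activation='sigmoid'),
])

# Step 4: compile with binary cross-entropy loss and the Adam optimizer
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])

# Step 5: train, then evaluate on the test data
model.fit(x_train, y_train, epochs=5, batch_size=64,
          validation_data=(x_test, y_test))
loss, accuracy = model.evaluate(x_test, y_test)
print('Loss:', loss)
print('Accuracy:', accuracy)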
OUTPUT
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/imdb.npz
17464789/17464789 [==============================] - 7s 0us/step
Epoch 1/5
391/391 [==============================] - 286s 727ms/step - loss: 0.4991 - accuracy: 0.7626 - val_loss: 0.3712 - val_accuracy: 0.8412
Epoch 2/5
391/391 [==============================] - 296s 757ms/step - loss: 0.3381 - accuracy: 0.8587 - val_loss: 0.3609 - val_accuracy: 0.8532
Epoch 3/5
391/391 [==============================] - 313s 801ms/step - loss: 0.2642 - accuracy: 0.8945 - val_loss: 0.3168 - val_accuracy: 0.8678
Epoch 4/5
391/391 [==============================] - 433s 1s/step - loss: 0.2263 - accuracy: 0.9142 - val_loss: 0.3119 - val_accuracy: 0.8738
Epoch 5/5
391/391 [==============================] - 302s 774ms/step - loss: 0.1982 - accuracy: 0.9247 - val_loss: 0.3114 - val_accuracy: 0.8745
782/782 [==============================] - 74s 95ms/step - loss: 0.3114 - accuracy: 0.8745
Loss: 0.3113741874694824
Accuracy: 0.874520003795623
RESULT
Thus Sentiment analysis using LSTM is implemented.
Experiment 6 Parts of speech tagging using Sequence to Sequence architecture
Date:
AIM:
To implement Parts of speech tagging using Sequence to Sequence architecture
STEPS
1 Define the input sentences and their target POS tag sequences
2 Build word-level vocabularies for the input and target sequences
3 Encode and pad the input and target sequences
4 Build and train the sequence-to-sequence model
5 Predict POS tags for the input sentences with the trained model
PROGRAM
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Define the input and output sequences
input_texts = ['I love coding', 'This is a pen', 'She sings well']
target_texts = ['PRP VB NNP', 'DT VBZ DT NN', 'PRP VBZ RB']
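Note: the middle of the listing (vocabulary construction, sequence encoding, the model, training, and the generate_pos_tags helper) is missing from this record. The sketch below is one minimal reconstruction, assuming a one-hot LSTM encoder-decoder trained with teacher forcing; the helper and variable names are chosen to match the test loop that follows, and the latent size of 64 and validation split are illustrative assumptions.

# Word-level vocabularies; index 0 is reserved for padding
input_vocab = sorted({w for t in input_texts for w in t.split()})
target_vocab = sorted({w for t in target_texts for w in t.split()}) + ['<sos>', '<eos>']
input_word2idx = {w: i + 1 for i, w in enumerate(input_vocab)}
target_word2idx = {w: i + 1 for i, w in enumerate(target_vocab)}
target_idx2word = {i: w for w, i in target_word2idx.items()}
num_encoder_tokens = len(input_word2idx) + 1
num_decoder_tokens = len(target_word2idx) + 1
max_encoder_seq_length = max(len(t.split()) for t in input_texts)
max_decoder_seq_length = max(len(t.split()) for t in target_texts) + 1

# Pad index sequences; decoder input starts with <sos>, decoder target ends with <eos>
enc_in = pad_sequences([[input_word2idx[w] for w in t.split()] for t in input_texts],
                       maxlen=max_encoder_seq_length)
dec_in = pad_sequences([[target_word2idx['<sos>']] +
                        [target_word2idx[w] for w in t.split()] for t in target_texts],
                       maxlen=max_decoder_seq_length, padding='post')
dec_out = pad_sequences([[target_word2idx[w] for w in t.split()] +
                         [target_word2idx['<eos>']] for t in target_texts],
                        maxlen=max_decoder_seq_length, padding='post')

def onehot(seqs, num_tokens):
    # One-hot encode padded index sequences
    return tf.keras.utils.to_categorical(seqs, num_classes=num_tokens)

# Encoder-decoder trained with teacher forcing
latent_dim = 64
encoder_inputs = Input(shape=(None, num_encoder_tokens))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)
decoder_inputs = Input(shape=(None, num_decoder_tokens))
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_seq, _, _ = decoder_lstm(decoder_inputs, initial_state=[state_h, state_c])
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
model = Model([encoder_inputs, decoder_inputs], decoder_dense(decoder_seq))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit([onehot(enc_in, num_encoder_tokens), onehot(dec_in, num_decoder_tokens)],
          onehot(dec_out, num_decoder_tokens), epochs=50, validation_split=0.3)

# Inference models for greedy decoding
encoder_model = Model(encoder_inputs, [state_h, state_c])
state_in_h = Input(shape=(latent_dim,))
state_in_c = Input(shape=(latent_dim,))
step_seq, step_h, step_c = decoder_lstm(decoder_inputs,
                                        initial_state=[state_in_h, state_in_c])
decoder_model = Model([decoder_inputs, state_in_h, state_in_c],
                      [decoder_dense(step_seq), step_h, step_c])

def generate_pos_tags(input_seq):
    # Greedy decoding: start from <sos> and feed back the predicted tag
    h, c = encoder_model.predict(onehot(input_seq, num_encoder_tokens), verbose=0)
    target = np.zeros((1, 1, num_decoder_tokens))
    target[0, 0, target_word2idx['<sos>']] = 1.0
    tags = []
    for _ in range(max_decoder_seq_length):
        probs, h, c = decoder_model.predict([target, h, c], verbose=0)
        idx = int(np.argmax(probs[0, -1]))
        word = target_idx2word.get(idx, '')
        if word == '<eos>' or idx == 0:
            break
        tags.append(word)
        target = np.zeros((1, 1, num_decoder_tokens))
        target[0, 0, idx] = 1.0
    return ' '.join(tags)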
# Test the model
for input_text in input_texts:
    input_seq = pad_sequences([[input_word2idx[word] for word in input_text.split()]],
                              maxlen=max_encoder_seq_length)
    predicted_pos_tags = generate_pos_tags(input_seq)
    print('Input:', input_text)
    print('Predicted POS Tags:', predicted_pos_tags)
    print()
OUTPUT
Epoch 1/50
1/1 [==============================] - 7s 7s/step - loss: 1.3736 - accuracy: 0.0000e+00 - val_loss: 1.1017 - val_accuracy: 0.0000e+00
Epoch 2/50
1/1 [==============================] - 0s 63ms/step - loss: 1.3470 - accuracy: 0.7500 - val_loss: 1.1068 - val_accuracy: 0.0000e+00
Epoch 3/50
1/1 [==============================] - 0s 65ms/step - loss: 1.3199 - accuracy: 0.7500 - val_loss: 1.1123 - val_accuracy: 0.0000e+00
Epoch 4/50
Epoch 44/50
1/1 [==============================] - 0s 58ms/step - loss: 0.0882 - accuracy: 0.7500
Epoch 50/50
1/1 [==============================] - 0s 60ms/step - loss: 0.0751 - accuracy: 0.7500 - val_loss: 2.2554 - val_accuracy: 0.0000e+00
Input: I love coding
Predicted POS Tags: VB NNP NNP DT DT
Input: This is a pen
Predicted POS Tags: VBZ DT NN NN DT
Input: She sings well
Predicted POS Tags: VB NNP NNP DT DT
RESULT
Thus Parts of speech tagging using Sequence to Sequence architecture is implemented.
Experiment 7 Machine Translation using Encoder-Decoder model
Date
AIM:
To implement Machine Translation using the Encoder-Decoder model.
STEPS
1 Define the input (English) and target (German) sentence pairs
2 Build vocabularies of the unique words and add <sos> and <eos> tokens to the target vocabulary
3 Encode and pad the input and target sequences
4 Build and train the encoder-decoder model
5 Translate the input sentences with the trained model
PROGRAM
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.preprocessing.sequence import pad_sequences
# Define the input and output sequences
input_texts = ['I love coding', 'This is a pen', 'She sings well']
target_texts = ['Ich liebe das Coden', 'Das ist ein Stift', 'Sie singt gut']
# Create a set of all unique words in the input and target sequences
input_words = set()
target_words = set()
for input_text, target_text in zip(input_texts, target_texts):
    input_words.update(input_text.split())
    target_words.update(target_text.split())

# Add <sos> and <eos> tokens to target_words
target_words.add('<sos>')
target_words.add('<eos>')
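Note: the rest of the listing is missing from this record. It follows the same one-hot LSTM encoder-decoder pattern as Experiment 6; the condensed training sketch below uses illustrative sizes (latent dimension 128, 100 epochs) that are not confirmed by the original.

# Index the vocabularies; index 0 is reserved for padding
input_word2idx = {w: i + 1 for i, w in enumerate(sorted(input_words))}
target_word2idx = {w: i + 1 for i, w in enumerate(sorted(target_words))}
target_idx2word = {i: w for w, i in target_word2idx.items()}
num_enc = len(input_word2idx) + 1
num_dec = len(target_word2idx) + 1
max_enc_len = max(len(t.split()) for t in input_texts)
max_dec_len = max(len(t.split()) for t in target_texts) + 1

def onehot(seqs, num_tokens):
    # One-hot encode padded index sequences
    return tf.keras.utils.to_categorical(seqs, num_classes=num_tokens)

# Pad index sequences; decoder input starts with <sos>, target ends with <eos>
enc_in = pad_sequences([[input_word2idx[w] for w in t.split()] for t in input_texts],
                       maxlen=max_enc_len)
dec_in = pad_sequences([[target_word2idx['<sos>']] +
                        [target_word2idx[w] for w in t.split()] for t in target_texts],
                       maxlen=max_dec_len, padding='post')
dec_out = pad_sequences([[target_word2idx[w] for w in t.split()] +
                         [target_word2idx['<eos>']] for t in target_texts],
                        maxlen=max_dec_len, padding='post')

# Encoder-decoder trained with teacher forcing
enc_inputs = Input(shape=(None, num_enc))
_, h, c = LSTM(128, return_state=True)(enc_inputs)
dec_inputs = Input(shape=(None, num_dec))
dec_seq, _, _ = LSTM(128, return_sequences=True, return_state=True)(
    dec_inputs, initial_state=[h, c])
outputs = Dense(num_dec, activation='softmax')(dec_seq)
model = Model([enc_inputs, dec_inputs], outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit([onehot(enc_in, num_enc), onehot(dec_in, num_dec)],
          onehot(dec_out, num_dec), epochs=100)

Greedy decoding with the trained encoder and decoder, following the same pattern as the generate_pos_tags helper of Experiment 6, then produces the translated text shown in the output.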
OUTPUT
Input: This is a pen
Translated Text: ist ein Stift Coden ein
RESULT
Thus Machine Translation using Encoder-Decoder model is implemented.
Experiment 8 Image augmentation using GANs
Date
AIM:
To implement Image augmentation using GANs
STEPS
1 Define the generator and discriminator networks
2 Combine the generator and discriminator into a single GAN model
3 Set the training hyperparameters and run the training loop, which has the following steps:
• Generate a batch of fake images
• Train the discriminator
• Train the generator
• Print the progress and save samples
PROGRAM
import numpy as np
import matplotlib.pyplot as plt
# Import list reconstructed from the usage below (Input, Model, Adam)
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input
from tensorflow.keras.optimizers import Adam
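Note: the generator and discriminator definitions are missing from this record. A minimal dense-layer sketch for 28x28 grayscale images follows; the layer sizes and the use of LeakyReLU are illustrative assumptions, not details taken from the original.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LeakyReLU, Reshape, Flatten

# Generator: 100-dimensional noise vector -> 28x28 image in [-1, 1]
generator = Sequential([
    Dense(256, input_dim=100),
    LeakyReLU(0.2),
    Dense(512),
    LeakyReLU(0.2),
    Dense(28 * 28, activation='tanh'),
    Reshape((28, 28)),
])

# Discriminator: 28x28 image -> probability that the image is real
discriminator = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(512),
    LeakyReLU(0.2),
    Dense(256),
    LeakyReLU(0.2),
    Dense(1, activation='sigmoid'),
])
discriminator.compile(loss='binary_crossentropy',
                      optimizer=Adam(learning_rate=0.0002, beta_1=0.5))

# Freeze the discriminator before building the combined model, so that
# training the GAN below updates only the generator; the discriminator's
# own compile above already captured trainable=True for its direct training
discriminator.trainable = False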
# Combine the generator and discriminator into a single GAN model
gan_input = Input(shape=(100,))
gan_output = discriminator(generator(gan_input))
gan = Model(gan_input, gan_output)
gan.compile(loss='binary_crossentropy',
            optimizer=Adam(learning_rate=0.0002, beta_1=0.5))
# Training hyperparameters
epochs = 100
batch_size = 128
sample_interval = 10
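Note: the training loop itself is missing from this record. The sketch below follows the steps listed above (generate fakes, train the discriminator, train the generator, print progress) and ends by preparing the 4x4 grid of samples that the plotting code below expects; the use of MNIST as the real-image source is an assumption.

from tensorflow.keras.datasets import mnist

# Real images scaled to [-1, 1] to match the generator's tanh output
(x_train, _), (_, _) = mnist.load_data()
x_train = x_train.astype('float32') / 127.5 - 1.0

real_labels = np.ones((batch_size, 1))
fake_labels = np.zeros((batch_size, 1))

for epoch in range(epochs):
    # Generate a batch of fake images
    noise = np.random.normal(0, 1, (batch_size, 100))
    fake_imgs = generator.predict(noise, verbose=0)
    real_imgs = x_train[np.random.randint(0, x_train.shape[0], batch_size)]

    # Train the discriminator on real and fake batches (its own compile
    # captured trainable=True, so its weights still update here)
    d_loss_real = discriminator.train_on_batch(real_imgs, real_labels)
    d_loss_fake = discriminator.train_on_batch(fake_imgs, fake_labels)
    d_loss = 0.5 * (d_loss_real + d_loss_fake)

    # Train the generator through the frozen discriminator: push it to
    # make the discriminator label fake images as real
    g_loss = gan.train_on_batch(np.random.normal(0, 1, (batch_size, 100)),
                                real_labels)

    # Print the progress and save samples
    if epoch % sample_interval == 0:
        print('Epoch:', epoch, 'Discriminator Loss:', d_loss)
        print('Generator Loss:', g_loss)

# Draw a 4x4 grid of generated samples for the plotting code below
samples = generator.predict(np.random.normal(0, 1, (16, 100)), verbose=0)
samples = 0.5 * samples + 0.5  # rescale from [-1, 1] to [0, 1] for imshow
fig, axs = plt.subplots(4, 4)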
count = 0
for i in range(4):
    for j in range(4):
        axs[i, j].imshow(samples[count, :, :], cmap='gray')
        axs[i, j].axis('off')
        count += 1
plt.show()
OUTPUT
Epoch: 90 Discriminator Loss: 0.03508808836340904
Generator Loss: 1.736445483402349e-06
RESULT
Thus Image augmentation using GANs is implemented.