
NLP LAB ASSIGNMENT - 05

NAME: Raj Kumar Reddy
REG. NO: 22BCE9821
SLOT: L16+17

1. Implement a text classification application.


CODE :

# Implement a simple text classification application using an LSTM

import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
import numpy as np

# Toy dataset: one positive and one negative sentence
sentences = [
    "This is a positive sentence.",
    "This is another negative sentence."
]
labels = [1, 0]  # 1 = positive, 0 = negative

# Tokenize the text data
tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(sentences)
sequences = tokenizer.texts_to_sequences(sentences)

# Pad sequences to a fixed length
max_length = max(len(seq) for seq in sequences)
padded_sequences = pad_sequences(sequences, maxlen=max_length, padding='post')

# Build the RNN model: embedding -> LSTM -> sigmoid classifier
model = Sequential()
model.add(Embedding(5000, 128, input_length=max_length))
model.add(LSTM(128))
model.add(Dense(1, activation='sigmoid'))

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the model
# Convert padded_sequences and labels to NumPy arrays explicitly
padded_sequences = np.array(padded_sequences)
labels = np.array(labels)
model.fit(padded_sequences, labels, epochs=10)

# Example prediction
new_sentence = "This is a great and positive sentence."
new_sequence = tokenizer.texts_to_sequences([new_sentence])
new_padded_sequence = pad_sequences(new_sequence, maxlen=max_length, padding='post')
prediction = model.predict(new_padded_sequence)

if prediction[0][0] > 0.5:
    print("Positive sentiment")
else:
    print("Negative sentiment")
OUTPUT :

Epoch 1/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 3s 3s/step - accuracy: 0.5000 - loss: 0.6946
Epoch 2/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 129ms/step - accuracy: 1.0000 - loss: 0.6886
Epoch 3/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 59ms/step - accuracy: 1.0000 - loss: 0.6827
Epoch 4/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 59ms/step - accuracy: 1.0000 - loss: 0.6764
Epoch 5/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 58ms/step - accuracy: 1.0000 - loss: 0.6697
Epoch 6/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 63ms/step - accuracy: 1.0000 - loss: 0.6624
Epoch 7/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 65ms/step - accuracy: 1.0000 - loss: 0.6541
Epoch 8/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 139ms/step - accuracy: 1.0000 - loss: 0.6449
Epoch 9/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 60ms/step - accuracy: 1.0000 - loss: 0.6343
Epoch 10/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 60ms/step - accuracy: 1.0000 - loss: 0.6223
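
One caveat worth noting before moving on: the test sentence contains words ("great", "and") that never appeared in the two training sentences, and texts_to_sequences silently drops out-of-vocabulary words unless the Tokenizer is built with an oov_token. A minimal sketch of this, assuming the tokenizer from the program above is still in scope (the outputs in the comments are illustrative):

from tensorflow.keras.preprocessing.text import Tokenizer

# Inspect the learned vocabulary:
print(tokenizer.word_index)
# e.g. {'this': 1, 'is': 2, 'sentence': 3, 'a': 4, 'positive': 5,
#       'another': 6, 'negative': 7}

# "great" and "and" were never seen by fit_on_texts, so they are
# dropped from the encoded test sentence:
print(tokenizer.texts_to_sequences(["This is a great and positive sentence."]))
# e.g. [[1, 2, 4, 5, 3]]

# To map unknown words to a placeholder index instead, the tokenizer
# could be created with an explicit OOV token:
oov_tokenizer = Tokenizer(num_words=5000, oov_token="<OOV>")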

2. Build a sentiment analyzer using LSTM.
CODE :

# Build a sentiment analyzer using an LSTM on a different example dataset

import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
import numpy as np

# Sample data (replace with your own dataset)
sentences = [
    "I love this product, it's amazing!",
    "This is the worst experience ever.",
    "The movie was fantastic and entertaining.",
    "I'm really disappointed with the service.",
    "The food was delicious and the service was excellent.",
    "I would never recommend this place to anyone."
]
labels = [1, 0, 1, 0, 1, 0]  # 1 for positive, 0 for negative

# Tokenize the text data
tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(sentences)
sequences = tokenizer.texts_to_sequences(sentences)

# Pad sequences to a fixed length
max_length = max(len(seq) for seq in sequences)
padded_sequences = pad_sequences(sequences, maxlen=max_length, padding='post')

# Build the RNN model: embedding -> LSTM -> sigmoid classifier
model = Sequential()
model.add(Embedding(5000, 128, input_length=max_length))
model.add(LSTM(128))
model.add(Dense(1, activation='sigmoid'))

# Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Train the model
# Convert padded_sequences and labels to NumPy arrays explicitly
padded_sequences = np.array(padded_sequences)
labels = np.array(labels)
model.fit(padded_sequences, labels, epochs=10)

# Example prediction
new_sentence = "The product is okay, nothing special."
new_sequence = tokenizer.texts_to_sequences([new_sentence])
new_padded_sequence = pad_sequences(new_sequence, maxlen=max_length, padding='post')
prediction = model.predict(new_padded_sequence)

if prediction[0][0] > 0.5:
    print("Positive sentiment")
else:
    print("Negative sentiment")

OUTPUT :

Epoch 1/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 3s 3s/step - accuracy: 0.1667 - loss: 0.6941
Epoch 2/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 68ms/step - accuracy: 0.6667 - loss: 0.6891
Epoch 3/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 66ms/step - accuracy: 0.6667 - loss: 0.6841
Epoch 4/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 63ms/step - accuracy: 0.8333 - loss: 0.6785
Epoch 5/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 62ms/step - accuracy: 0.8333 - loss: 0.6718
Epoch 6/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 69ms/step - accuracy: 0.8333 - loss: 0.6637
Epoch 7/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 60ms/step - accuracy: 1.0000 - loss: 0.6535
Epoch 8/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 60ms/step - accuracy: 1.0000 - loss: 0.6405
Epoch 9/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 141ms/step - accuracy: 1.0000 - loss: 0.6241
Epoch 10/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 64ms/step - accuracy: 1.0000 - loss: 0.6032
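
Since the final Dense layer uses a sigmoid activation, model.predict returns a probability between 0 and 1, and the 0.5 threshold turns it into a hard label. A minimal sketch of a small helper that reports the raw score alongside the label, which is more informative for borderline inputs like the neutral test sentence above (predict_sentiment is a hypothetical helper, assuming model, tokenizer, and max_length from the program above are still in scope):

# Hypothetical helper (not part of the assignment): report the raw
# sigmoid probability as well as the thresholded label.
def predict_sentiment(sentence):
    seq = tokenizer.texts_to_sequences([sentence])
    padded = pad_sequences(seq, maxlen=max_length, padding='post')
    score = float(model.predict(padded, verbose=0)[0][0])
    label = "Positive" if score > 0.5 else "Negative"
    return label, score

for s in ["I love this product, it's amazing!",
          "The product is okay, nothing special."]:
    label, score = predict_sentiment(s)
    print(f"{s!r} -> {label} (p = {score:.3f})")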
