
SRI MANAKULA VINAYAGAR ENGINEERING COLLEGE

U20CSP609  ARTIFICIAL INTELLIGENCE AND EXPERT SYSTEMS LABORATORY

L T P C Hrs
0 0 3 1 45

College Vision and Mission

VISION

To be globally recognized for excellence in quality education, innovation and research for the
transformation of lives to serve the society.

MISSION

M1: Quality Education: To provide a comprehensive academic system that amalgamates cutting-edge technologies with best practices.

M2: Research and Innovation: To foster value-based research and innovation in collaboration with industries and institutions globally, creating intellectuals who open new avenues.

M3: Employability and Entrepreneurship: To inculcate employability and entrepreneurial skills through value- and skill-based training.

M4: Ethical Values: To instill a deep sense of human values by blending societal righteousness with academic professionalism for the growth of society.

Department Vision and Mission

VISION

To create a productive learning and research environment for graduates to become highly dynamic, competent, ethically responsible, and professionally knowledgeable in the field of computer science and engineering to meet industrial needs on par with global standards.

MISSION

M1: Quality Education: Empowering the students with the necessary technical skills through quality education to grow professionally.

M2: Innovative Research: Advocating innovative research ideas in partnership with industries for developing products and services.

M3: Placement and Entrepreneurship: Advancing education by strengthening the industry-academia relationship through hands-on training, to seek placement in the topmost industries or to develop a start-up.

M4: Ethics and Social Responsibilities: Stimulating professional behavior and good ethical values to improve leadership skills and social responsibility.


Register Number :
Name :
Subject Name / Subject Code :
Branch :
Year / Semester :

Certificate
Certified that this is the bonafide record of Practical work done by the above student in
the………………………………………….………… Laboratory during the academic
year……………………

Staff in-charge Head of the Department

Submitted for the End Semester Practical Examination held on…………

Internal Examiner External Examiner


Course objectives
 To perform intellectual tasks such as decision making and planning.
 To implement searching algorithms.
 To understand knowledge representation, reasoning and planning.
 To understand Bayes' Rule.
 To understand and apply various Machine Learning algorithms.

Course outcomes
After completion of the course, the students will be able to
CO1 - Analyze a problem and identify and define the computing requirements appropriate to its solution. (K4)
CO2 - Apply various AI search algorithms. (K3)
CO3 - Demonstrate working knowledge of reasoning in the presence of incomplete and/or uncertain
information. (K3)
CO4 - Implement Bayesian classifier. (K3)
CO5 - Apply Machine Learning algorithms. (K3)

List of Exercises
1. Graph coloring problem
2. Blocks world problem
3. Water Jug Problem using DFS, BFS
4. Heuristic algorithms (A* algorithm, best first search)
5. Write a program to demonstrate the working of the decision tree based ID3 algorithm. Use an
appropriate data set for building the decision tree and apply this knowledge to classify a new sample
6. Build an Artificial Neural Network by implementing the Back propagation algorithm and test the same
using appropriate data sets.
7. Write a program to implement the naïve Bayesian classifier for a sample training data set stored as
a .CSV file. Compute the accuracy of the classifier, considering few test data sets.
8. Apply EM algorithm to cluster a set of data stored in a .CSV file. Use the same data set for clustering
using k-Means algorithm. Compare the results of these two algorithms and comment on the quality of
clustering. You can add Java/Python ML library classes/API in the program
9. Write a program to implement k-Nearest Neighbour algorithm to classify the iris data set. Print both
correct and wrong predictions. Java/Python ML library classes can be used for this problem.
10. Implement the non-parametric Locally Weighted Regression algorithm in order to fit data points.
Select appropriate data set for your experiment and draw graphs.


EX.NO:01
Graph Coloring Problem
DATE:
Aim:

To write a simple Python program for the Graph Coloring problem.

Algorithm:

Step 1. Represent the graph as an adjacency matrix.

Step 2. Count the degree of each node and define the possible colors.

Step 3. Sort the nodes from the largest to the lowest degree using selection sort.

Step 4. Main process: for each node in sorted order, set its color to the first available color in its colorDict entry and save it to the solution. After that, remove the chosen color from the colorDict entries of all adjacent nodes, because the color has been used.

Step 5. Print the solution from theSolution dict, sorted by the name of the node.

Program Code:

# Adjacency matrix of the graph
G = [[0, 1, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 1],
     [1, 1, 0, 1, 1, 0],
     [0, 1, 1, 0, 0, 1],
     [1, 0, 1, 0, 0, 1],
     [0, 1, 0, 1, 1, 0]]

# initialise the node names and map each name to its matrix index
node = "abcdef"
t_ = {}
for i in range(len(G)):
    t_[node[i]] = i

# count the degree of every node
degree = []
for i in range(len(G)):
    degree.append(sum(G[i]))

# initialise the possible colors of every node
colorDict = {}
for i in range(len(G)):
    colorDict[node[i]] = ["Blue", "Red", "Yellow", "Green"]

# sort the nodes by degree (largest first) using selection sort
sortedNode = []
indeks = []
for i in range(len(degree)):
    _max = 0
    idx = 0
    for j in range(len(degree)):
        if j not in indeks:
            if degree[j] > _max:
                _max = degree[j]
                idx = j
    indeks.append(idx)
    sortedNode.append(node[idx])

# the main process: color each node with the first available color in
# its colorDict entry, then remove that color from all adjacent nodes
# because the color has been used
theSolution = {}
for n in sortedNode:
    setTheColor = colorDict[n]
    theSolution[n] = setTheColor[0]
    adjacentNode = G[t_[n]]
    for j in range(len(adjacentNode)):
        if adjacentNode[j] == 1 and (setTheColor[0] in colorDict[node[j]]):
            colorDict[node[j]].remove(setTheColor[0])

# print the solution sorted by the name of the node
for t, w in sorted(theSolution.items()):
    print("Node", t, " = ", w)

Output:

S.No  Particulars            Max Marks  Marks Obtained
1.    Aim and Algorithm          5
2.    Program and Execution     10
3.    Viva                      10
4.    Total                     25

RESULT:

Thus the program for the Graph Coloring problem was implemented and executed.


EX.NO:01 b)
TIC TAC TOE
DATE:
Aim:

To write a program to implement the Tic Tac Toe game using Python.

Algorithm:
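
One possible approach: maintain a nine-cell board, alternate turns between players X and O, validate each move, and after every move check the eight winning lines for a win or a full board for a draw. A minimal Python sketch of this approach follows (the cell numbering 0-8 and the console prompts are illustrative assumptions, not a prescribed solution):

# A minimal two-player console Tic Tac Toe sketch (cells are numbered 0-8)
board = [" "] * 9
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def show():
    for r in range(0, 9, 3):
        print("|".join(board[r:r + 3]))

def winner(mark):
    # a player wins when any winning line is filled with their mark
    return any(all(board[i] == mark for i in line) for line in WINS)

player = "X"
moves = 0
while moves < 9:
    show()
    pos = int(input("Player %s, choose a cell (0-8): " % player))
    if pos < 0 or pos > 8 or board[pos] != " ":
        print("Invalid cell, try again")
        continue
    board[pos] = player
    moves += 1
    if winner(player):
        show()
        print("Player", player, "wins!")
        break
    player = "O" if player == "X" else "X"
else:
    show()
    print("Draw!")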


S.No  Particulars            Max Marks  Marks Obtained
1.    Aim and Algorithm          5
2.    Program and Execution     10
3.    Viva                      10
4.    Total                     25

RESULT:


EX.NO:02 a)
Blocks World Problem
DATE:
Aim:

To write a simple Python program for the Blocks World problem.

Algorithm:

Step 1: Push the original goal onto the stack.

Step 2: Repeat steps a to d until the stack becomes empty.

a. If TOP is a compound goal, push its unfinished subgoals onto the stack.

b. If TOP is a single unfinished goal, replace it with an action and push the action's precondition onto the stack to satisfy the condition.

c. If TOP is an action: pop the action, execute it, and update the knowledge base with the effects of the action.

d. If TOP is a satisfied goal, pop it.

Program Code:

tab = []
result = []
goalList = ["a", "b", "c", "d", "e"]

def parSolution(N):
    for i in range(N):
        if goalList[i] != result[i]:
            return False
    return True

def Onblock(index, count):
    # break point of the recursive call
    if count == len(goalList) + 1:
        return True
    # copy the tab value at index to result
    block = tab[index]
    # stack the block
    result.append(block)
    print(result)
    if parSolution(count):
        print("Pushed a result solution")
        # remove the block from the tab
        tab.remove(block)
        Onblock(0, count + 1)
    else:
        print("result solution not possible, back to the tab")
        # pop out if there is no partial solution
        result.pop()
        Onblock(index + 1, count)

def Ontab(problem):
    # check if everything in the stack is on the tab
    if len(problem) != 0:
        tab.append(problem.pop())
        Ontab(problem)
    # if everything is on the tab then we return true
    else:
        return True

def goal_stack_planing(problem):
    # pop the problem and put it on the tab
    Ontab(problem)
    # print index and number of blocks on the result stack
    if Onblock(0, 1):
        print(result)

if __name__ == "__main__":
    problem = ["c", "a", "e", "d", "b"]
    print("Goal Problem")
    for k, j in zip(goalList, problem):
        print(k + " " + j)
    goal_stack_planing(problem)
    print("result Solution")
    print(result)
Output:

S.No  Particulars            Max Marks  Marks Obtained
1.    Aim and Algorithm          5
2.    Program and Execution     10
3.    Viva                      10
4.    Total                     25

RESULT:

Thus the program for the Blocks World problem was implemented and executed.


EX.NO:02b)
8-Tiles problem
DATE:
Aim:

To write a program to implement the 8-Puzzle problem using Python.

Algorithm:
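
One possible approach is an uninformed breadth-first search over board states: represent each state as a tuple of nine tiles with 0 for the blank, generate successors by sliding a neighbouring tile into the blank, and expand states level by level until the goal configuration is reached. A minimal Python sketch under these assumptions (the start and goal states are illustrative):

# BFS sketch for the 8-puzzle; states are 9-tuples, 0 marks the blank
from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def neighbours(state):
    # states reachable by sliding one tile into the blank
    moves = []
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            moves.append(tuple(s))
    return moves

def solve(start):
    # breadth-first search: guarantees a shortest solution in moves
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == GOAL:
            return path
        for nxt in neighbours(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

path = solve((1, 2, 3, 4, 5, 6, 0, 7, 8))
print("Solved in", len(path) - 1, "moves")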


S.No  Particulars            Max Marks  Marks Obtained
1.    Aim and Algorithm          5
2.    Program and Execution     10
3.    Viva                      10
4.    Total                     25

RESULT


EX.NO:03 a)
Water Jug Problem using DFS, BFS
DATE:

Aim:

To develop a program for the Water Jug problem using searching algorithms.

Algorithm:

Rule 1: Fill the 4-gallon jug completely with water.

Rule 2: Fill the 3-gallon jug completely with water.

Rule 3: Empty the 4-gallon jug on the ground.

Rule 4: Empty the 3-gallon jug on the ground.

Rule 5: Pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full.

Rule 6: Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full.

Rule 7: Pour all water from the 4-gallon jug into the 3-gallon jug, until the 4-gallon jug becomes empty.

Rule 8: Pour all water from the 3-gallon jug into the 4-gallon jug, until the 3-gallon jug becomes empty.

Program code:

j1 = 0
j2 = 0
x = 4
y = 3
print("Initial state=(0,0)")
print("Capacities=(4,3)")
print("Goal state=(2,0)")
while j1 != 2:
    r = int(input("enter rule:"))
    if r == 1:      # fill the 4-gallon jug
        j1 = x
    elif r == 2:    # fill the 3-gallon jug
        j2 = y
    elif r == 3:    # empty the 4-gallon jug
        j1 = 0
    elif r == 4:    # empty the 3-gallon jug
        j2 = 0
    elif r == 5:    # pour from the 4-gallon jug until the 3-gallon jug is full
        t = y - j2
        j2 = y
        j1 = j1 - t
        if j1 < 0:
            j1 = 0
    elif r == 6:    # pour from the 3-gallon jug until the 4-gallon jug is full
        t = x - j1
        j1 = x
        j2 = j2 - t
        if j2 < 0:
            j2 = 0
    elif r == 7:    # pour all water from the 4-gallon jug into the 3-gallon jug
        j2 = j2 + j1
        j1 = 0
        if j2 > y:
            j2 = y
    elif r == 8:    # pour all water from the 3-gallon jug into the 4-gallon jug
        j1 = j1 + j2
        j2 = 0
        if j1 > x:
            j1 = x
    print(j1, j2)

Output:

S.No  Particulars            Max Marks  Marks Obtained
1.    Aim and Algorithm          5
2.    Program and Execution     10
3.    Viva                      10
4.    Total                     25

RESULT

Thus the program for the Water Jug problem was implemented and executed.


EX.NO:03 b)
Travelling Salesperson Problem
DATE:

Aim:

To write a program to implement the Travelling Salesperson problem using Python.

Algorithm:
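
One possible approach for small instances is brute force: fix a start city, enumerate every permutation of the remaining cities, and keep the tour with the lowest total cost. A minimal Python sketch under this assumption (the 4-city cost matrix is illustrative, not prescribed):

# Brute-force TSP sketch: try every tour starting and ending at city 0
from itertools import permutations

cost = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]

def tsp(cost, start=0):
    n = len(cost)
    cities = [c for c in range(n) if c != start]
    best_tour, best_cost = None, float("inf")
    for perm in permutations(cities):
        tour = (start,) + perm + (start,)
        # total cost of the closed tour
        total = sum(cost[tour[i]][tour[i + 1]] for i in range(n))
        if total < best_cost:
            best_tour, best_cost = tour, total
    return best_tour, best_cost

tour, total = tsp(cost)
print("Best tour:", tour, "with cost", total)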


S.No  Particulars            Max Marks  Marks Obtained
1.    Aim and Algorithm          5
2.    Program and Execution     10
3.    Viva                      10
4.    Total                     25

RESULT


EX.NO:04 a)
Heuristic Algorithms - A* algorithm
DATE:

Aim:

To find the shortest path from one node to another using the A* search algorithm.

Algorithm:

Step 1: Given the graph, find the cost-effective path from A to G; A is the source node and G is the goal node.

Step 2: From A, we can go to node B or E, so we compute f(x) = g(x) + h(x) for each of them:

A → B = g(B) + h(B) = 2 + 6 = 8
A → E = g(E) + h(E) = 3 + 7 = 10

Since the cost for A → B is less, we move forward with this path and compute the f(x) for the children nodes of B. From B, we can go to node C or G, so we compute f(x) for each of them.

Step 3: A → B → C = (2 + 1) + 99 = 102
A → B → G = (2 + 9) + 0 = 11

Here the path A → B → G has the least cost, but it is still more than the cost of A → E, so we explore the path through E further.

Step 4: From E, we can go to node D, so we compute f(x):

A → E → D = (3 + 6) + 1 = 10

Step 5: Comparing the cost of A → E → D with all the paths we got so far, this cost is the least of all, so we move forward with this path and compute the f(x) for the children of D:

A → E → D → G = (3 + 6 + 1) + 0 = 10

Comparing all the paths that lead us to the goal, we conclude that A → E → D → G is the most cost-effective path to get from A to G.

Program Code:


def aStarAlgo(start_node, stop_node):
    open_set = set(start_node)
    closed_set = set()
    g = {}        # store distance from starting node
    parents = {}  # parents contains an adjacency map of all nodes

    # distance of starting node from itself is zero
    g[start_node] = 0
    # start_node is the root node, i.e. it has no parent nodes,
    # so start_node is set as its own parent node
    parents[start_node] = start_node

    while len(open_set) > 0:
        n = None
        # the node with the lowest f() is found
        for v in open_set:
            if n == None or g[v] + heuristic(v) < g[n] + heuristic(n):
                n = v

        if n == stop_node or Graph_nodes[n] == None:
            pass
        else:
            for (m, weight) in get_neighbors(n):
                # nodes 'm' not in the open and closed sets are added to open,
                # and n is set as their parent
                if m not in open_set and m not in closed_set:
                    open_set.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                else:
                    # for each node m, compare its distance from start, i.e. g(m),
                    # to the distance from start through node n
                    if g[m] > g[n] + weight:
                        # update g(m)
                        g[m] = g[n] + weight
                        # change the parent of m to n
                        parents[m] = n
                        # if m is in the closed set, remove it and add it to open
                        if m in closed_set:
                            closed_set.remove(m)
                            open_set.add(m)

        if n == None:
            print('Path does not exist!')
            return None

        # if the current node is the stop_node,
        # then we begin reconstructing the path from it to the start_node
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path

        # remove n from the open set and add it to the closed set,
        # because all of its neighbors were inspected
        open_set.remove(n)
        closed_set.add(n)

    print('Path does not exist!')
    return None

# function to return the neighbors of the passed node
# together with their distances
def get_neighbors(v):
    if v in Graph_nodes:
        return Graph_nodes[v]
    else:
        return None

# for simplicity we consider the heuristic distances as given;
# this function returns the heuristic distance for all nodes
def heuristic(n):
    H_dist = {
        'A': 11,
        'B': 6,
        'C': 99,
        'D': 1,
        'E': 7,
        'G': 0,
    }
    return H_dist[n]

# describe your graph here
Graph_nodes = {
    'A': [('B', 2), ('E', 3)],
    'B': [('A', 2), ('C', 1), ('G', 9)],
    'C': [('B', 1)],
    'D': [('E', 6), ('G', 1)],
    'E': [('A', 3), ('D', 6)],
    'G': [('B', 9), ('D', 1)],
}

aStarAlgo('A', 'G')

Output:

S.No  Particulars            Max Marks  Marks Obtained
1.    Aim and Algorithm          5
2.    Program and Execution     10
3.    Viva                      10
4.    Total                     25

RESULT:

Thus the program for the A* algorithm was implemented and executed.


EX.NO:04 b)
Hill Climbing Algorithm
DATE:

Aim:

To write a program to implement the Hill Climbing algorithm.

Algorithm:
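
One possible approach: start from a random candidate solution, repeatedly move to the best neighbouring solution, and stop when no neighbour improves the objective, i.e. a local maximum has been reached. A minimal Python sketch, assuming a simple one-dimensional objective f(x) = -(x - 3)^2 + 9 (the function and step size are illustrative):

# Hill climbing on a one-dimensional objective function
import random

def f(x):
    return -(x - 3) ** 2 + 9

def hill_climb(start, step=0.01, max_iter=10000):
    current = start
    for _ in range(max_iter):
        # evaluate the two neighbours of the current solution
        neighbours = [current + step, current - step]
        best = max(neighbours, key=f)
        if f(best) <= f(current):
            break  # no uphill neighbour: local maximum reached
        current = best
    return current

x = hill_climb(random.uniform(-10, 10))
print("Maximum near x = %.2f, f(x) = %.2f" % (x, f(x)))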


S.No  Particulars            Max Marks  Marks Obtained
1.    Aim and Algorithm          5
2.    Program and Execution     10
3.    Viva                      10
4.    Total                     25

RESULT:


EX.NO:05  Write a program to demonstrate the working of the decision tree based ID3 algorithm. Use an appropriate data set for building the decision tree and apply this knowledge to classify a new sample.
DATE:

ID3 ALGORITHM:

AIM:

To develop a program demonstrating the working of the decision tree based ID3 algorithm, using an appropriate data set for building the decision tree and applying this knowledge to classify a new sample.

SOURCE CODE:

import numpy as np
import math
from Data_loader import read_data

class Node:
    def __init__(self, attribute):
        self.attribute = attribute
        self.children = []
        self.answer = ""

    def __str__(self):
        return self.attribute

def subtables(data, col, delete):
    dict = {}
    items = np.unique(data[:, col])
    count = np.zeros((items.shape[0], 1), dtype=np.int32)

    # count how many rows take each value of the column
    for x in range(items.shape[0]):
        for y in range(data.shape[0]):
            if data[y, col] == items[x]:
                count[x] += 1

    # build one subtable per attribute value
    for x in range(items.shape[0]):
        dict[items[x]] = np.empty((int(count[x]), data.shape[1]), dtype="|S32")
        pos = 0
        for y in range(data.shape[0]):
            if data[y, col] == items[x]:
                dict[items[x]][pos] = data[y]
                pos += 1
        if delete:
            dict[items[x]] = np.delete(dict[items[x]], col, 1)

    return items, dict

def entropy(S):
    items = np.unique(S)
    if items.size == 1:
        return 0
    counts = np.zeros((items.shape[0], 1))
    sums = 0

    for x in range(items.shape[0]):
        counts[x] = sum(S == items[x]) / (S.size * 1.0)
    for count in counts:
        sums += -1 * count * math.log(count, 2)
    return sums

def gain_ratio(data, col):
    items, dict = subtables(data, col, delete=False)
    total_size = data.shape[0]
    entropies = np.zeros((items.shape[0], 1))
    intrinsic = np.zeros((items.shape[0], 1))

    for x in range(items.shape[0]):
        ratio = dict[items[x]].shape[0]/(total_size * 1.0)
        entropies[x] = ratio * entropy(dict[items[x]][:, -1])
        intrinsic[x] = ratio * math.log(ratio, 2)

    total_entropy = entropy(data[:, -1])
    iv = -1 * sum(intrinsic)

    for x in range(entropies.shape[0]):
        total_entropy -= entropies[x]
    return total_entropy / iv

def create_node(data, metadata):
    # if all examples share one label, return a leaf with that answer
    if (np.unique(data[:, -1])).shape[0] == 1:
        node = Node("")
        node.answer = np.unique(data[:, -1])[0]
        return node

    gains = np.zeros((data.shape[1] - 1, 1))
    for col in range(data.shape[1] - 1):
        gains[col] = gain_ratio(data, col)

    # split on the attribute with the highest gain ratio
    split = np.argmax(gains)
    node = Node(metadata[split])
    metadata = np.delete(metadata, split, 0)
    items, dict = subtables(data, split, delete=True)

    for x in range(items.shape[0]):
        child = create_node(dict[items[x]], metadata)
        node.children.append((items[x], child))
    return node

def empty(size):
    s = ""
    for x in range(size):
        s += " "
    return s

def print_tree(node, level):
    if node.answer != "":
        print(empty(level), node.answer)
        return
    print(empty(level), node.attribute)
    for value, n in node.children:
        print(empty(level + 1), value)
        print_tree(n, level + 2)

metadata, traindata = read_data("PlayTennis.csv")
data = np.array(traindata)
node = create_node(data, metadata)
print_tree(node, 0)
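
The program imports read_data from a separate Data_loader module. A minimal sketch of such a helper, assuming the first row of PlayTennis.csv holds the attribute names and the last column holds the class label:

# Data_loader.py (illustrative sketch): read_data returns the header row
# (metadata) and the remaining rows (traindata) of a CSV file
import csv

def read_data(filename):
    with open(filename, 'r') as csvfile:
        datareader = csv.reader(csvfile, delimiter=',')
        metadata = next(datareader)              # attribute names, e.g. outlook, ...
        traindata = [row for row in datareader]  # examples; last column is the label
    return (metadata, traindata)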

OUTPUT:
outlook
  overcast
    b'yes'
  rain
    wind
      b'strong'
        b'no'
      b'weak'
        b'yes'
  sunny
    humidity
      b'high'
        b'no'
      b'normal'
        b'yes'

S.No  Particulars            Max Marks  Marks Obtained
1.    Aim and Algorithm          5
2.    Program and Execution     10
3.    Viva                      10
4.    Total                     25

RESULT:

Thus the program demonstrating the working of the decision tree based ID3 algorithm, using an appropriate data set for building the decision tree and applying this knowledge to classify a new sample, was implemented and executed.


EX.NO:06  Build an Artificial Neural Network by implementing the Backpropagation algorithm and test the same using appropriate data sets.
DATE:

AIM:

To develop a program that builds an Artificial Neural Network by implementing the Backpropagation algorithm and tests it using appropriate data sets.

Backpropagation Algorithm:

1. Load the data set.
2. Assign all network inputs and outputs.
3. Initialize all weights with small random numbers, typically between -1 and 1.

repeat
    for every pattern in the training set
        Present the pattern to the network
        // Propagate the input forward through the network:
        for each layer in the network
            for every node in the layer
                1. Calculate the weighted sum of the inputs to the node
                2. Add the threshold to the sum
                3. Calculate the activation for the node
            end
        end
        // Propagate the errors backward through the network:
        for every node in the output layer
            calculate the error signal
        end
        for all hidden layers
            for every node in the layer
                1. Calculate the node's signal error
                2. Update each node's weight in the network
            end
        end
        // Calculate the global error:
        Calculate the Error Function
    end
while ((maximum number of iterations < specified) AND
       (Error Function > specified))

The network used in the program below has:
 An input layer with two input neurons
 One hidden layer with three neurons
 An output layer with a single neuron

SOURCE CODE:

import numpy as np

X = np.array(([2, 9], [1, 5], [3, 6]), dtype=float)
y = np.array(([92], [86], [89]), dtype=float)
X = X/np.amax(X, axis=0)  # normalise X column-wise by its maximum
y = y/100                 # scale outputs to [0, 1]

# Sigmoid activation function
def sigmoid(x):
    return 1/(1 + np.exp(-x))

# Derivative of the sigmoid (x is already the sigmoid output)
def derivatives_sigmoid(x):
    return x * (1 - x)

# Variable initialization
epoch = 7000              # setting training iterations
lr = 0.1                  # setting learning rate
inputlayer_neurons = 2    # number of features in the data set
hiddenlayer_neurons = 3   # number of hidden layer neurons
output_neurons = 1        # number of neurons at the output layer

# weight and bias initialization:
# draws a random range of numbers uniformly of dim x*y
wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
wout = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
bout = np.random.uniform(size=(1, output_neurons))

for i in range(epoch):
    # Forward propagation
    hinp1 = np.dot(X, wh)
    hinp = hinp1 + bh
    hlayer_act = sigmoid(hinp)
    outinp1 = np.dot(hlayer_act, wout)
    outinp = outinp1 + bout
    output = sigmoid(outinp)

    # Backpropagation
    EO = y - output
    outgrad = derivatives_sigmoid(output)
    d_output = EO * outgrad
    EH = d_output.dot(wout.T)
    hiddengrad = derivatives_sigmoid(hlayer_act)  # how much hidden layer weights contributed to error
    d_hiddenlayer = EH * hiddengrad
    wout += hlayer_act.T.dot(d_output) * lr  # dot product of next-layer error and current-layer output
    # bout += np.sum(d_output, axis=0, keepdims=True) * lr
    wh += X.T.dot(d_hiddenlayer) * lr
    # bh += np.sum(d_hiddenlayer, axis=0, keepdims=True) * lr

print("Input: \n" + str(X))
print("Actual Output: \n" + str(y))
print("Predicted Output: \n", output)

SAMPLE OUTPUT:

Input:
[[ 0.66666667 1. ]
[ 0.33333333 0.55555556]
[ 1. 0.66666667]]
Actual Output:
[[ 0.92]
[ 0.86]
[ 0.89]]
Predicted Output:
[[ 0.89559591]

[ 0.88142069]
[ 0.8928407]]

S.No  Particulars            Max Marks  Marks Obtained
1.    Aim and Algorithm          5
2.    Program and Execution     10
3.    Viva                      10
4.    Total                     25

RESULT:

Thus the program building an Artificial Neural Network with the Backpropagation algorithm, tested using appropriate data sets, was implemented and executed.


EX.NO:07  Write a program to implement the naïve Bayesian classifier for a sample training data set stored as a .CSV file. Compute the accuracy of the classifier, considering a few test data sets.
DATE:

AIM:

To develop a program implementing the naïve Bayesian classifier for a sample training data set stored as a .CSV file, and to compute the accuracy of the classifier, considering a few test data sets.

Problem statement:

Given the features X1, X2, ..., Xn, predict a label Y. For example:
X = (Rainy, Hot, High, False)
y = No

 P(H) is the probability of hypothesis H being true. This is known as the prior probability.
 P(E) is the probability of the evidence (regardless of the hypothesis).
 P(E|H) is the probability of the evidence given that the hypothesis is true.
 P(H|E) is the probability of the hypothesis given that the evidence is there.

Bayes' theorem relates these quantities: P(H|E) = P(E|H) * P(H) / P(E)

Prior, conditional and joint probability for random variables

 Prior probability: P(X)
 Conditional probability: P(X1|X2) = P(X1, X2) / P(X2)
 Joint probability: P(X1, X2)
 Relationship: P(X1, X2) = P(X1|X2) P(X2) = P(X2|X1) P(X1)
 Independence: P(X1|X2) = P(X1), P(X2|X1) = P(X2), so P(X1, X2) = P(X1) P(X2)

Naïve Bayesian Classifier Algorithm:

Step 1: Convert the data set into a frequency table.
Step 2: Create a likelihood table by finding the probabilities, e.g. overcast probability = 0.29 and
probability of playing = 0.64.


Step 3: Now, use Naive Bayesian equation to calculate the posterior probability for each class. The class with the
highest posterior probability is the outcome of prediction.

SOURCE CODE
import csv
import random
import math

def loadCsv(filename):
    lines = csv.reader(open(filename, "r"))
    dataset = list(lines)
    for i in range(len(dataset)):
        dataset[i] = [float(x) for x in dataset[i]]
    return dataset

def splitDataset(dataset, splitRatio):
    trainSize = int(len(dataset) * splitRatio)
    trainSet = []
    copy = list(dataset)
    while len(trainSet) < trainSize:
        index = random.randrange(len(copy))
        trainSet.append(copy.pop(index))
    return [trainSet, copy]

def separateByClass(dataset):
    separated = {}
    for i in range(len(dataset)):
        vector = dataset[i]
        if (vector[-1] not in separated):
            separated[vector[-1]] = []
        separated[vector[-1]].append(vector)
    return separated

def mean(numbers):
    return sum(numbers)/float(len(numbers))

def stdev(numbers):
    avg = mean(numbers)
    variance = sum([pow(x-avg, 2) for x in numbers])/float(len(numbers)-1)
    return math.sqrt(variance)

def summarize(dataset):
    summaries = [(mean(attribute), stdev(attribute)) for attribute in zip(*dataset)]
    del summaries[-1]
    return summaries

def summarizeByClass(dataset):
    separated = separateByClass(dataset)
    summaries = {}
    for classValue, instances in separated.items():
        summaries[classValue] = summarize(instances)
    return summaries

def calculateProbability(x, mean, stdev):
    exponent = math.exp(-(math.pow(x-mean, 2)/(2*math.pow(stdev, 2))))
    return (1 / (math.sqrt(2*math.pi) * stdev)) * exponent

def calculateClassProbabilities(summaries, inputVector):
    probabilities = {}
    for classValue, classSummaries in summaries.items():
        probabilities[classValue] = 1
        for i in range(len(classSummaries)):
            mean, stdev = classSummaries[i]
            x = inputVector[i]
            probabilities[classValue] *= calculateProbability(x, mean, stdev)
    return probabilities

def predict(summaries, inputVector):
    probabilities = calculateClassProbabilities(summaries, inputVector)
    bestLabel, bestProb = None, -1
    for classValue, probability in probabilities.items():
        if bestLabel is None or probability > bestProb:
            bestProb = probability
            bestLabel = classValue
    return bestLabel

def getPredictions(summaries, testSet):
    predictions = []
    for i in range(len(testSet)):
        result = predict(summaries, testSet[i])
        predictions.append(result)
    return predictions

def getAccuracy(testSet, predictions):
    correct = 0
    for i in range(len(testSet)):
        if testSet[i][-1] == predictions[i]:
            correct += 1
    return (correct/float(len(testSet))) * 100.0

def main():
    filename = 'data.csv'
    splitRatio = 0.67
    dataset = loadCsv(filename)
    trainingSet, testSet = splitDataset(dataset, splitRatio)
    print('Split {0} rows into train={1} and test={2} rows'.format(len(dataset), len(trainingSet), len(testSet)))
    # prepare model
    summaries = summarizeByClass(trainingSet)
    # test model
    predictions = getPredictions(summaries, testSet)
    accuracy = getAccuracy(testSet, predictions)
    print('Accuracy: {0}%'.format(accuracy))

main()

SAMPLE OUTPUT:

Split 306 rows into train=205 and test=101 rows


Accuracy: 72.27722772277228%

S.No  Particulars            Max Marks  Marks Obtained
1.    Aim and Algorithm          5
2.    Program and Execution     10
3.    Viva                      10
4.    Total                     25

RESULT:

Thus the program to implement the naïve Bayesian classifier for a sample training data set stored as a .CSV file, and to compute the accuracy of the classifier on a few test data sets, was implemented and executed.


EX.NO:08  Apply EM algorithm to cluster a set of data stored in a .CSV file. Use the same data set for clustering using the k-Means algorithm. Compare the results of these two algorithms and comment on the quality of clustering. You can add Java/Python ML library classes/API in the program.
DATE:
AIM:

To develop a program that applies the EM algorithm to cluster a set of data stored in a .CSV file, uses the same data set for clustering with the k-Means algorithm, and compares the results of the two algorithms, commenting on the quality of clustering. Java/Python ML library classes/API may be added in the program.

EM algorithm:

These are the two basic steps of the EM algorithm, namely the E Step (Expectation or Estimation Step) and the M Step (Maximization Step).

 Estimation step: Initialize µk, Σk and πk by some random values, or by K-Means clustering results, or by hierarchical clustering results; then, for those given parameter values, estimate the values of the latent variables (γk).

 Maximization step: Update the values of the parameters (i.e. µk, Σk and πk) using the Maximum Likelihood method.

1. Load the data set.
2. Initialize the mean µk, the covariance matrix Σk and the mixing coefficients πk by some random values (or other values).
3. Compute the γk values for all k.
4. Again estimate all the parameters using the current γk values.
5. Compute the log-likelihood function.
6. Set a convergence criterion.
7. If the log-likelihood value converges to some value (or if all the parameters converge to some values), then stop; else return to Step 3.

SOURCE CODE:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

X = pd.read_csv("kmeansdata.csv")
x1 = X['Distance_Feature'].values
x2 = X['Speeding_Feature'].values
X = np.array(list(zip(x1, x2))).reshape(len(x1), 2)

plt.plot()
plt.xlim([0, 100])
plt.ylim([0, 50])
plt.title('Dataset')
plt.scatter(x1, x2)
plt.show()

# code for EM
gmm = GaussianMixture(n_components=3)
gmm.fit(X)
em_predictions = gmm.predict(X)
print("\nEM predictions")
print(em_predictions)
print("mean:\n", gmm.means_)
print('\n')
print("Covariances\n", gmm.covariances_)
print(X)
plt.title('Expectation Maximization')
plt.scatter(X[:, 0], X[:, 1], c=em_predictions, s=50)
plt.show()

# code for k-Means
import matplotlib.pyplot as plt1
kmeans = KMeans(n_clusters=3)
kmeans.fit(X)
print(kmeans.cluster_centers_)
print(kmeans.labels_)
plt.title('KMEANS')
plt1.scatter(X[:, 0], X[:, 1], c=kmeans.labels_, cmap='rainbow')
plt1.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], color='black')
plt1.show()  # display the k-Means clusters

OUTPUT

EM predictions
[0 0 0 1 0 1 1 1 2 1 2 2 1 1 2 1 2 1 0 1 0 1 1]
mean:
[[57.70629058 25.73574491]
[52.12044022 22.46250453]
[46.4364858 39.43288647]]
Covariances
[[[83.51878796 14.926902 ]
[14.926902 2.70846907]]
[[29.95910352 15.83416554]

[15.83416554 67.01175729]]
[[79.34811849 29.55835938]
[29.55835938 18.17157304]]]
[[71.24 28. ]
[52.53 25. ]
[64.54 27. ]
[55.69 22. ]
[54.58 25. ]
[41.91 10. ]
[58.64 20. ]
[52.02 8. ]
[31.25 34. ]
[44.31 19. ]
[49.35 40. ]
[58.07 45. ]
[44.22 22. ]
[55.73 19. ]
[46.63 43. ]
[52.97 32. ]
[46.25 35. ]
[51.55 27. ]
[57.05 26. ]
[58.45 30. ]
[43.42 23. ]
[55.68 37. ]
[55.15 18. ]

Centroids and predictions


[[57.74090909 24.27272727]
[48.6 38. ]
[45.176 16.4 ]]
[0 0 0 0 0 2 0 2 1 2 1 1 2 0 1 1 1 0 0 0 2 1 0]


S.No  Particulars            Max Marks  Marks Obtained
1.    Aim and Algorithm          5
2.    Program and Execution     10
3.    Viva                      10
4.    Total                     25

RESULT:

Thus the programs for the EM and k-Means algorithms were implemented and executed.


EX.NO:09  Write a program to implement the k-Nearest Neighbour algorithm to classify the iris data set. Print both correct and wrong predictions. Java/Python ML library classes can be used for this problem.
DATE:

AIM:

To develop a program to implement the k-Nearest Neighbour algorithm to classify the iris data set, printing both correct and wrong predictions. Java/Python ML library classes can be used for this problem.

K-Nearest-Neighbour Algorithm:

1. Load the data.

2. Initialize the value of k.

3. To get the predicted class, iterate from 1 to the total number of training data points:
   a. Calculate the distance between the test data and each row of training data. Here we use
      Euclidean distance as our distance metric, since it is the most popular method; other
      metrics such as Chebyshev and cosine can also be used.
   b. Sort the calculated distances in ascending order based on distance values.
   c. Get the top k rows from the sorted array.
   d. Get the most frequent class of these rows, i.e. the labels of the selected k entries.
   e. Return the predicted class:
       If regression, return the mean of the k labels.
       If classification, return the mode of the k labels.

Confusion matrix:
Note,
• Class 1 : Positive
• Class 2 : Negative
• Positive (P) : Observation is positive (for example: is an apple).
• Negative (N) : Observation is not positive (for example: is not an apple).
• True Positive (TP) : Observation is positive, and is predicted to be positive.
• False Negative (FN) : Observation is positive, but is predicted negative. (Also known as a
"Type II error.")
• True Negative (TN) : Observation is negative, and is predicted to be negative.
• False Positive (FP) : Observation is negative, but is predicted positive. (Also known as a
"Type I error.")


SOURCE CODE:

from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
import pandas as pd

dataset = pd.read_csv("iris.csv")
# split the frame into features and labels, assuming the class
# label is stored in the last column of iris.csv
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=0.25)

classifier = KNeighborsClassifier(n_neighbors=8, p=3, metric='euclidean')
classifier.fit(X_train, y_train)

# predict the test results
y_pred = classifier.predict(X_test)

cm = confusion_matrix(y_test, y_pred)
print('Confusion matrix is as follows\n', cm)
print('Accuracy Metrics')
print(classification_report(y_test, y_pred))
print("correct prediction", accuracy_score(y_test, y_pred))
print("wrong prediction", (1 - accuracy_score(y_test, y_pred)))
SAMPLE OUTPUT:

Confusion matrix is as follows


[[13 0 0]
[ 0 15 1]
[ 0 0 9]]
Accuracy Metrics
precision recall f1-score support
Iris-setosa 1.00 1.00 1.00 13
Iris-versicolor 1.00 0.94 0.97 16
Iris-virginica 0.90 1.00 0.95 9
avg / total 0.98 0.97 0.97 38
correct prediction 0.9736842105263158
wrong prediction 0.02631578947368418


S.No  Particulars            Max Marks  Marks Obtained
1.    Aim and Algorithm          5
2.    Program and Execution     10
3.    Viva                      10
4.    Total                     25

RESULT:

Thus the program for the k-Nearest Neighbour algorithm was implemented and executed.


EX.NO:10  Implement the non-parametric Locally Weighted Regression algorithm in order to fit data points. Select an appropriate data set for your experiment and draw graphs.
DATE:

AIM:

To develop a program implementing the non-parametric Locally Weighted Regression algorithm in order to fit data points, selecting an appropriate data set for the experiment and drawing graphs.
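
Locally Weighted Regression fits a separate linear model for every query point x0: each training example x(i) is weighted by its distance to x0 with a Gaussian (radial) kernel, and the weighted normal equations are solved for that point alone. In the notation of the code below:

w(i) = exp(-(x(i) - x0)^2 / (2 * tau^2))
beta = (X^T W X)^(-1) X^T W y
prediction = x0 . beta

where tau controls the width of the neighbourhood: a large tau approaches ordinary linear regression, while a small tau follows the local shape of the data.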

SOURCE CODE:
import numpy as np
from bokeh.plotting import figure, show, output_notebook
from bokeh.layouts import gridplot
from bokeh.io import push_notebook

def local_regression(x0, X, Y, tau):
    # add bias term
    x0 = np.r_[1, x0]  # add one to avoid the loss of information
    X = np.c_[np.ones(len(X)), X]
    # fit model: normal equations with kernel
    xw = X.T * radial_kernel(x0, X, tau)  # X transpose * W
    beta = np.linalg.pinv(xw @ X) @ xw @ Y  # @ is matrix multiplication (dot product)
    # predict value
    return x0 @ beta  # matrix multiplication for the prediction

def radial_kernel(x0, X, tau):
    # weight (radial kernel) bias function
    return np.exp(np.sum((X - x0) ** 2, axis=1) / (-2 * tau * tau))

n = 1000
# generate dataset
X = np.linspace(-3, 3, num=n)
print("The Data Set (10 Samples) X :\n", X[1:10])
Y = np.log(np.abs(X ** 2 - 1) + .5)
print("The Fitting Curve Data Set (10 Samples) Y :\n", Y[1:10])
# jitter X
X += np.random.normal(scale=.1, size=n)
print("Normalised (10 Samples) X :\n", X[1:10])
domain = np.linspace(-3, 3, num=300)
print(" Xo Domain Space (10 Samples) :\n", domain[1:10])

def plot_lwr(tau):
    # prediction through regression
    prediction = [local_regression(x0, X, Y, tau) for x0 in domain]
    plot = figure(plot_width=400, plot_height=400)
    plot.title.text = 'tau=%g' % tau
    plot.scatter(X, Y, alpha=.3)
    plot.line(domain, prediction, line_width=2, color='red')
    return plot

# plotting the curves with different tau
show(gridplot([
    [plot_lwr(10.), plot_lwr(1.)],
    [plot_lwr(0.1), plot_lwr(0.01)]
]))
SAMPLE OUTPUT:

The Data Set ( 10 Samples) X :


[-2.99399399 -2.98798799 -2.98198198 -2.97597598 -2.96996997 -2.96396396
-2.95795796 -2.95195195 -2.94594595]
The Fitting Curve Data Set (10 Samples) Y :
[2.13582188 2.13156806 2.12730467 2.12303166 2.11874898 2.11445659
2.11015444 2.10584249 2.10152068]
Normalised (10 Samples) X :
[-3.10518137 -3.00247603 -2.9388515 -2.79373602 -2.84946247 -2.85313888
-2.9622708 -3.09679502 -2.69778859]
Xo Domain Space(10 Samples) :
[-2.97993311 -2.95986622 -2.93979933 -2.91973244 -2.89966555 -2.87959866
-2.85953177 -2.83946488 -2.81939799]


S.No  Particulars            Max Marks  Marks Obtained
1.    Aim and Algorithm          5
2.    Program and Execution     10
3.    Viva                      10
4.    Total                     25

RESULT:

Thus, the Locally Weighted Regression algorithm was implemented and executed successfully.
