AIES Lab Manual
INSTITUTE VISION
To be globally recognized for excellence in quality education, innovation and research for the
transformation of lives to serve society.
INSTITUTE MISSION
M1: Quality Education: To provide a comprehensive academic system that amalgamates cutting-edge
technologies with best practices.
M2: Research and Innovation: To foster value-based research and innovation in collaboration
with industries and institutions globally, creating intellectuals who open up new avenues.
M3: Employability and Entrepreneurship: To inculcate employability and entrepreneurial skills through
value- and skill-based training.
M4: Ethical Values: To instill a deep sense of human values blended with societal righteousness.
DEPARTMENT VISION
To create a productive learning and research environment for graduates to become highly dynamic,
competent, ethically responsible and professionally knowledgeable in the field of computer science and
engineering, meeting industrial needs on par with global standards.
DEPARTMENT MISSION
M1: Quality Education: Empowering the students with the necessary technical skills through quality
education to grow professionally.
M2: Innovative Research: Advocating innovative research ideas by collaborating with industries to
develop products and services.
M3: Placement and Entrepreneurship: Advancing education by strengthening the industry-academia
relationship through hands-on training, enabling students to seek placement in top industries or to develop a start-up.
M4: Ethics and Social Responsibilities: Stimulating professional behavior and good ethical values to
improve leadership skills and social responsibility.
Register Number :
Name :
Subject Name / Subject Code :
Branch :
Year / Semester :
Certificate
Certified that this is the bonafide record of Practical work done by the above student in
the………………………………………….………… Laboratory during the academic
year……………………
Course objectives
To perform such intellectual tasks as decision making and planning.
To implement searching algorithms
To understand the concepts of reasoning and planning.
To understand Bayes' Rule.
To understand and apply various Machine Learning algorithms.
Course outcomes
After completion of the course, the students will be able to
CO1 - Analyze a problem and identify and define the computing requirements appropriate to its solution. (K4)
CO2 - Apply various AI search algorithms. (K3)
CO3 - Demonstrate working knowledge of reasoning in the presence of incomplete and/or uncertain
information. (K3)
CO4 - Implement Bayesian classifier. (K3)
CO5 - Apply Machine Learning algorithms. (K3)
List of Exercises
1. Graph coloring problem
2. Blocks world problem
3. Water Jug Problem using DFS, BFS
4. Heuristic algorithms (A* Algorithm, Best First Search)
5. Write a program to demonstrate the working of the decision tree based ID3 algorithm. Use an
appropriate data set for building the decision tree and apply this knowledge to classify a new sample
6. Build an Artificial Neural Network by implementing the Back propagation algorithm and test the same
using appropriate data sets.
7. Write a program to implement the naïve Bayesian classifier for a sample training data set stored as
a .CSV file. Compute the accuracy of the classifier, considering few test data sets.
8. Apply EM algorithm to cluster a set of data stored in a .CSV file. Use the same data set for clustering
using k-Means algorithm. Compare the results of these two algorithms and comment on the quality of
clustering. You can add Java/Python ML library classes/API in the program
9. Write a program to implement k-Nearest Neighbour algorithm to classify the iris data set. Print both
correct and wrong predictions. Java/Python ML library classes can be used for this problem.
10. Implement the non-parametric Locally Weighted Regression algorithm in order to fit data points.
Select appropriate data set for your experiment and draw graphs.
EX.NO:01
Graph Coloring Problem
DATE:
Aim:
To write a Python program to solve the graph coloring problem using a greedy, largest-degree-first strategy.
Algorithm:
Step 1. Represent the graph as an adjacency matrix and list its nodes.
Step 2. Compute the degree of every node (the sum of its row in the adjacency matrix).
Step 3. Sort the nodes using selection sort, arranging them from the largest to the smallest degree.
Step 4. Process the nodes in sorted order: assign each node the first color still available for it in the
colorDict and save the assignment to the solution; then remove that color from the color lists of all
adjacent nodes.
Step 5. Print the solution from the solution dictionary, sorted by the name of the node.
For the sample graph used in the program below, the degrees are a:3, b:4, c:4, d:3, e:3, f:3, so the greedy ordering begins with b and c.
Program Code:
# Adjacency matrix of the sample graph (nodes a-f)
G = [[0, 1, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 1],
     [1, 1, 0, 1, 1, 0],
     [0, 1, 1, 0, 0, 1],
     [1, 0, 1, 0, 0, 1],
     [0, 1, 0, 1, 1, 0]]
node = "abcdef"
t_ = {}
for i in range(len(G)):
    t_[node[i]] = i  # map node name to its row index
# degree of each node = sum of its row in the adjacency matrix
degree = []
for i in range(len(G)):
    degree.append(sum(G[i]))
# every node starts with the full list of possible colors
colorDict = {}
for i in range(len(G)):
    colorDict[node[i]] = ["Blue", "Red", "Yellow", "Green"]
# selection sort: arrange the nodes from the largest to the smallest degree
sortedNode = []
indeks = []
for i in range(len(degree)):
    _max = -1
    idx = -1
    for j in range(len(degree)):
        if j not in indeks and degree[j] > _max:
            _max = degree[j]
            idx = j
    indeks.append(idx)
    sortedNode.append(node[idx])
# greedy coloring: give each node its first available color and remove
# that color from every adjacent node
theSolution = {}
for n in sortedNode:
    setTheColor = colorDict[n]
    theSolution[n] = setTheColor[0]
    adjacentNode = G[t_[n]]
    for j in range(len(adjacentNode)):
        if adjacentNode[j] == 1 and setTheColor[0] in colorDict[node[j]]:
            colorDict[node[j]].remove(setTheColor[0])
# print the solution sorted by the name of the node
for t, w in sorted(theSolution.items()):
    print("Node", t, " = ", w)
Output:
RESULT:
Thus the program for the Graph Coloring problem was implemented and executed.
EX.NO:01 b)
TIC TAC TOE
DATE:
Aim:
Algorithm:
RESULT:
EX.NO:02 a)
Blocks World Problem
DATE:
Aim:
To write a Python program to solve the Blocks World problem using goal stack planning.
Algorithm:
1. If TOP is a compound goal, push its unfinished subgoals onto the stack.
2. If TOP is a single unfinished goal, replace it with an action that achieves it and push the action's
preconditions onto the stack so that they can be satisfied.
3. If TOP is an action, pop it from the stack, execute it and update the knowledge base with its effects.
4. Repeat until the stack becomes empty.
Program Code:
tab = []
result = []
# goal configuration of the blocks, bottom to top (assumed for this run)
goalList = ["a", "b", "c", "d", "e"]

def parSolution(N):
    # check whether the first N blocks on the result stack match the goal
    for i in range(N):
        if goalList[i] != result[i]:
            return False
    return True

def Onblock(index, count):
    if count == len(goalList) + 1:
        return True
    block = tab[index]
    # stack block
    result.append(block)
    print(result)
    if parSolution(count):
        tab.remove(block)
        Onblock(0, count + 1)
    else:
        result.pop()
        Onblock(index + 1, count)

def Ontab(problem):
    # move every block from the problem stack onto the table
    if len(problem) != 0:
        tab.append(problem.pop())
        Ontab(problem)
    else:
        return True

def goal_stack_planing(problem):
    # pop problem and put in tab
    Ontab(problem)
    # stack the blocks one by one, checking the partial solution each time
    if Onblock(0, 1):
        print(result)

if __name__ == "__main__":
    problem = ["c", "a", "e", "d", "b"]
    print("Goal Problem")
    for k, j in zip(goalList, problem):
        print(k + " " + j)
    goal_stack_planing(problem)
    print("result Solution")
    print(result)
Output:
RESULT:
Thus the program for the Blocks World problem was implemented and executed.
EX.NO:02b)
8-Tiles problem
DATE:
Aim:
Algorithm:
RESULT
EX.NO:03 a)
Water Jug Problem using DFS, BFS
DATE:
Aim:
To write a Python program to solve the Water Jug Problem (4-gallon and 3-gallon jugs) using production rules.
Algorithm:
Rule 1: Fill the 4-gallon jug completely with water.
Rule 2: Fill the 3-gallon jug completely with water.
Rule 3: Empty the 4-gallon jug.
Rule 4: Empty the 3-gallon jug.
Rule 5: Pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full.
Rule 6: Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full.
Rule 7: Pour all the water from the 4-gallon jug into the 3-gallon jug, until the 4-gallon jug becomes empty.
Rule 8: Pour all the water from the 3-gallon jug into the 4-gallon jug, until the 3-gallon jug becomes empty.
For example, starting from the initial state (0, 0), entering the rules in the order 2, 8, 2, 6, 3, 8 in the
program below produces the states (0,3), (3,0), (3,3), (4,2), (0,2) and finally the goal state (2,0).
Program code:
j1=0
j2=0
x=4
y=3
print("Initial state=(0,0)")
print("Capacities=(4,3)")
print("Goal state=(2,0)")
while j1 != 2:
    r = int(input("enter rule:"))
    if(r==1):
        j1=x
    elif(r==2):
        j2=y
    elif(r==3):
        j1=0
    elif(r==4):
        j2=0
    elif(r==5):
        t=y-j2
        j2=y
        j1=j1-t
        if j1<0:
            j1=0
    elif(r==6):
        t=x-j1
        j1=x
        j2=j2-t
        if j2<0:
            j2=0
    elif(r==7):
        j2=j2+j1
        j1=0
        if j2>y:
            j2=y
    elif(r==8):
        j1=j1+j2
        j2=0
        if j1>x:
            j1=x
    print(j1, j2)
Output:
RESULT
Thus the program for the Water Jug Problem was implemented and executed.
EX.NO:03 b)
Travelling Salesperson Problem
DATE:
Aim:
Algorithm:
RESULT
EX.NO:04 a)
Heuristic Algorithms - A* algorithm
DATE:
Aim:
To find the shortest path from one node to another using the A* search algorithm.
Algorithm:
Step 1: Given the graph, find the cost-effective path from A to G; that is, A is the source node and G is the goal node.
Step 2: From A we can go to node B or node E, so we compute f(x) = g(x) + h(x) for each of them:
A → B = g(B) + h(B) = 2 + 6 = 8
A → E = g(E) + h(E) = 3 + 7 = 10
Step 3: Since the cost of A → B is less, we move forward with this path and compute f(x) for the children of B:
A → B → G = (2 + 9) + 0 = 11
Step 4: The path A → B → G has the least cost among B's children, but it is still more than the cost of A → E,
so we now expand E and compute f(x) for its child D:
A → E → D = (3 + 6) + 1 = 10
Step 5: Comparing the cost of A → E → D with all the paths we got so far, this cost is the least of all, so we
move forward with this path and compute f(x) for the children of D:
A → E → D → G = (3 + 6 + 1) + 0 = 10
Now, comparing all the paths that lead us to the goal, we conclude that A → E → D → G is the most cost-
effective path to get from A to G.
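As a quick check of the Step 2 arithmetic, the short sketch below (purely illustrative; it is separate from the lab program that follows) recomputes f(x) = g(x) + h(x) for the two children of A:
g = {'B': 2, 'E': 3}   # path costs from A, as given in the example
h = {'B': 6, 'E': 7}   # heuristic estimates to the goal G
f = {n: g[n] + h[n] for n in g}
print(f)   # {'B': 8, 'E': 10}, so B is expanded first, exactly as in Step 2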
Program Code:
def aStarAlgo(start_node, stop_node):
    open_set = set(start_node)
    closed_set = set()
    g = {}                       # distance of each node from the start node
    parents = {}                 # parent map used to reconstruct the path
    g[start_node] = 0
    parents[start_node] = start_node
    while len(open_set) > 0:
        n = None
        # node with the lowest f() = g() + h() is chosen from the open set
        for v in open_set:
            if n is None or g[v] + heuristic(v) < g[n] + heuristic(n):
                n = v
        if n != stop_node and get_neighbors(n) is not None:
            for (m, weight) in get_neighbors(n):
                # nodes 'm' not in the open and closed sets are added to the open set
                if m not in open_set and m not in closed_set:
                    open_set.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                # for each node m, compare its distance from start i.e. g(m)
                # with the distance via n and keep the shorter one
                else:
                    if g[m] > g[n] + weight:
                        # update g(m)
                        g[m] = g[n] + weight
                        # change parent of m to n
                        parents[m] = n
                        if m in closed_set:
                            closed_set.remove(m)
                            open_set.add(m)
        if n is None:
            print('Path does not exist!')
            return None
        # if the current node is the stop node, reconstruct the path back to the start
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path
        # remove n from the open set and add it to the closed set
        open_set.remove(n)
        closed_set.add(n)
    print('Path does not exist!')
    return None

def get_neighbors(v):
    if v in Graph_nodes:
        return Graph_nodes[v]
    else:
        return None

def heuristic(n):
    H_dist = {
        'A': 11,
        'B': 6,
        'C': 99,
        'D': 1,
        'E': 7,
        'G': 0,
    }
    return H_dist[n]

# graph with edge costs taken from the worked example above
Graph_nodes = {
    'A': [('B', 2), ('E', 3)],
    'B': [('G', 9)],
    'E': [('D', 6)],
    'D': [('G', 1)],
}

aStarAlgo('A', 'G')
Output:
RESULT:
Thus the program for the A* algorithm was implemented and executed.
EX.NO:04 b)
Hill Climbing Algorithm
DATE:
Aim:
Algorithm:
RESULT:
EX.NO:05 Write a program to demonstrate the working of the decision tree based ID3 algorithm. Use an
appropriate data set for building the decision tree and apply this knowledge to classify a new sample.
DATE:
ID3 ALGORITHM:
AIM:
To demonstrate the working of the decision tree based ID3 algorithm, using an appropriate data set for
building the decision tree and applying this knowledge to classify a new sample.
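Before the program, a quick worked example of the entropy measure on which ID3 is based (the counts are illustrative and are not taken from the lab data set):
import math
# a node holding 9 positive and 5 negative training samples (assumed counts)
p_pos, p_neg = 9 / 14, 5 / 14
entropy = -p_pos * math.log(p_pos, 2) - p_neg * math.log(p_neg, 2)
print(round(entropy, 3))   # 0.94 bits; ID3 splits on the attribute that reduces this value the most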
SOURCE CODE:
import numpy as np
import math
import csv

def read_data(filename):
    # read the CSV file: the first row holds the attribute names (metadata)
    with open(filename, 'r') as csvfile:
        datareader = csv.reader(csvfile, delimiter=',')
        metadata = next(datareader)
        traindata = [row for row in datareader]
    return (metadata, traindata)

class Node:
    def __init__(self, attribute):
        self.attribute = attribute
        self.children = []
        self.answer = ""

def subtables(data, col, delete):
    # split the data into sub-tables, one for every value of the column col
    dict = {}
    items = np.unique(data[:, col])
    count = np.zeros((items.shape[0], 1), dtype=np.int32)
    for x in range(items.shape[0]):
        for y in range(data.shape[0]):
            if data[y, col] == items[x]:
                count[x] += 1
    for x in range(items.shape[0]):
        dict[items[x]] = np.empty((int(count[x]), data.shape[1]), dtype="|S32")
        pos = 0
        for y in range(data.shape[0]):
            if data[y, col] == items[x]:
                dict[items[x]][pos] = data[y]
                pos += 1
        if delete:
            dict[items[x]] = np.delete(dict[items[x]], col, 1)
    return items, dict

def entropy(S):
    items = np.unique(S)
    if items.size == 1:
        return 0
    counts = np.zeros((items.shape[0], 1))
    sums = 0
    for x in range(items.shape[0]):
        counts[x] = sum(S == items[x]) / (S.size * 1.0)
    for count in counts:
        sums += -1 * count * math.log(count, 2)
    return sums

def gain_ratio(data, col):
    # information gain of column col divided by its intrinsic value
    items, dict = subtables(data, col, delete=False)
    total_size = data.shape[0]
    entropies = np.zeros((items.shape[0], 1))
    intrinsic = np.zeros((items.shape[0], 1))
    for x in range(items.shape[0]):
        ratio = dict[items[x]].shape[0] / (total_size * 1.0)
        entropies[x] = ratio * entropy(dict[items[x]][:, -1])
        intrinsic[x] = ratio * math.log(ratio, 2)
    total_entropy = entropy(data[:, -1])
    iv = -1 * sum(intrinsic)
    for x in range(entropies.shape[0]):
        total_entropy -= entropies[x]
    return total_entropy / iv

def create_node(data, metadata):
    # if every sample has the same class, return a leaf node with that answer
    if (np.unique(data[:, -1])).shape[0] == 1:
        node = Node("")
        node.answer = np.unique(data[:, -1])[0]
        return node
    # otherwise split on the attribute with the highest gain ratio
    gains = np.zeros((data.shape[1] - 1, 1))
    for col in range(data.shape[1] - 1):
        gains[col] = gain_ratio(data, col)
    split = np.argmax(gains)
    node = Node(metadata[split])
    metadata = np.delete(metadata, split, 0)
    items, dict = subtables(data, split, delete=True)
    for x in range(items.shape[0]):
        child = create_node(dict[items[x]], metadata)
        node.children.append((items[x], child))
    return node

def empty(size):
    s = ""
    for x in range(size):
        s += " "
    return s

def print_tree(node, level):
    if node.answer != "":
        print(empty(level), node.answer)
        return
    print(empty(level), node.attribute)
    for value, child in node.children:
        print(empty(level + 1), value)
        print_tree(child, level + 2)

metadata, traindata = read_data("tennisdata.csv")   # file name assumed; use the lab's CSV file
data = np.array(traindata)
node = create_node(data, metadata)
print_tree(node, 0)
OUTPUT:
outlook
overcast
b'yes'
rain
wind
b'strong'
b'no'
b'weak'
b'yes'
sunny
humidity
b'high'
b'no'
b'normal'
b'yes'
RESULT:
Thus the program demonstrating the working of the decision tree based ID3 algorithm, building the decision
tree from an appropriate data set and using this knowledge to classify a new sample, was implemented and
executed.
EX.NO:06 Build an Artificial Neural Network by implementing the Backpropagation algorithm and test the
same using appropriate data sets.
DATE:
AIM:
To develop a program that builds an Artificial Neural Network using the Backpropagation algorithm and
tests it on an appropriate data set.
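Note that the program below computes the derivative of the sigmoid as output * (1 - output), i.e. from the already-activated value. A small illustrative check of this identity (not part of the lab program; the sample point x = 0.7 is arbitrary):
import numpy as np
def sigmoid(x):
    return 1 / (1 + np.exp(-x))
x = 0.7
analytic = sigmoid(x) * (1 - sigmoid(x))                    # s'(x) written in terms of s(x)
numeric = (sigmoid(x + 1e-6) - sigmoid(x - 1e-6)) / 2e-6    # central-difference estimate
print(round(analytic, 4), round(numeric, 4))                # both print about 0.2217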
SOURCE CODE:
import numpy as np

X = np.array(([2, 9], [1, 5], [3, 6]), dtype=float)   # sample input features
y = np.array(([92], [86], [89]), dtype=float)          # expected output
X = X/np.amax(X, axis=0)   # maximum of X array longitudinally
y = y/100

# Sigmoid activation function and its derivative
def sigmoid(x):
    return 1/(1 + np.exp(-x))

def derivatives_sigmoid(x):
    return x * (1 - x)

# Variable initialization
epoch = 7000               # setting training iterations
lr = 0.1                   # setting learning rate
inputlayer_neurons = 2     # number of features in data set
hiddenlayer_neurons = 3    # number of hidden-layer neurons (value assumed)
output_neurons = 1         # number of output neurons

# weight and bias initialization
wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
wout = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
bout = np.random.uniform(size=(1, output_neurons))

for i in range(epoch):
    # Forward Propagation
    hinp1 = np.dot(X, wh)
    hinp = hinp1 + bh
    hlayer_act = sigmoid(hinp)
    outinp1 = np.dot(hlayer_act, wout)
    outinp = outinp1 + bout
    output = sigmoid(outinp)

    # Backpropagation
    EO = y - output
    outgrad = derivatives_sigmoid(output)
    d_output = EO * outgrad
    EH = d_output.dot(wout.T)
    hiddengrad = derivatives_sigmoid(hlayer_act)   # how much hidden layer weights contributed to error
    d_hiddenlayer = EH * hiddengrad

    wout += hlayer_act.T.dot(d_output) * lr        # dot product of next-layer error and current-layer output
    # bout += np.sum(d_output, axis=0, keepdims=True) * lr
    wh += X.T.dot(d_hiddenlayer) * lr
    # bh += np.sum(d_hiddenlayer, axis=0, keepdims=True) * lr

print("Input: \n" + str(X))
print("Actual Output: \n" + str(y))
print("Predicted Output: \n", output)
SAMPLE OUTPUT:
Input:
[[ 0.66666667 1. ]
[ 0.33333333 0.55555556]
[ 1. 0.66666667]]
Actual Output:
[[ 0.92]
[ 0.86]
[ 0.89]]
Predicted Output:
[[ 0.89559591]
[ 0.88142069]
[ 0.8928407]]
RESULT:
Thus the program building an Artificial Neural Network with the Backpropagation algorithm and testing it
on an appropriate data set was implemented and executed.
EX.NO:07 Write a program to implement the naïve Bayesian classifier for a sample training data set stored
as a .CSV file. Compute the accuracy of the classifier, considering few test data sets.
DATE:
AIM:
To implement the naïve Bayesian classifier for a sample training data set stored as a .CSV file and to
compute the accuracy of the classifier on a few test data sets.
Problem statement:
P(H) is the probability of hypothesis H being true. This is known as the prior probability.
P(E) is the probability of the evidence (regardless of the hypothesis).
P(E|H) is the probability of the evidence given that hypothesis is true.
P(H|E) is the probability of the hypothesis given that the evidence is there.
Step 1: Separate the training data by class value.
Step 2: Summarise each class by the mean and standard deviation of every attribute.
Step 3: Use the Naive Bayesian equation to calculate the posterior probability for each class for a new
sample. The class with the highest posterior probability is the outcome of the prediction.
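A tiny worked example of Bayes' rule with assumed numbers (purely illustrative; the values are not taken from the lab data set):
p_h = 0.3           # prior probability of the hypothesis H
p_e_given_h = 0.8   # likelihood of the evidence E when H is true
p_e = 0.5           # overall probability of the evidence E
p_h_given_e = p_e_given_h * p_h / p_e
print(round(p_h_given_e, 2))  # 0.48, the posterior probability P(H|E)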
SOURCE CODE
import csv
import random
import math
def loadCsv(filename):
    lines = csv.reader(open(filename, "r"))
    dataset = list(lines)
    for i in range(len(dataset)):
        dataset[i] = [float(x) for x in dataset[i]]
    return dataset

def splitDataset(dataset, splitRatio):
    trainSize = int(len(dataset) * splitRatio)
    trainSet = []
    copy = list(dataset)
    while len(trainSet) < trainSize:
        index = random.randrange(len(copy))
        trainSet.append(copy.pop(index))
    return [trainSet, copy]

def separateByClass(dataset):
    separated = {}
    for i in range(len(dataset)):
        vector = dataset[i]
        if (vector[-1] not in separated):
            separated[vector[-1]] = []
        separated[vector[-1]].append(vector)
    return separated

def mean(numbers):
    return sum(numbers)/float(len(numbers))

def stdev(numbers):
    avg = mean(numbers)
    variance = sum([pow(x-avg,2) for x in numbers])/float(len(numbers)-1)
    return math.sqrt(variance)

def summarize(dataset):
    summaries = [(mean(attribute), stdev(attribute)) for attribute in zip(*dataset)]
    del summaries[-1]
    return summaries

def summarizeByClass(dataset):
    separated = separateByClass(dataset)
    summaries = {}
    for classValue, instances in separated.items():
        summaries[classValue] = summarize(instances)
    return summaries

def calculateProbability(x, mean, stdev):
    exponent = math.exp(-(math.pow(x-mean,2)/(2*math.pow(stdev,2))))
    return (1 / (math.sqrt(2*math.pi) * stdev)) * exponent

def calculateClassProbabilities(summaries, inputVector):
    probabilities = {}
    for classValue, classSummaries in summaries.items():
        probabilities[classValue] = 1
        for i in range(len(classSummaries)):
            mean, stdev = classSummaries[i]
            x = inputVector[i]
            probabilities[classValue] *= calculateProbability(x, mean, stdev)
    return probabilities

def predict(summaries, inputVector):
    probabilities = calculateClassProbabilities(summaries, inputVector)
    bestLabel, bestProb = None, -1
    for classValue, probability in probabilities.items():
        if bestLabel is None or probability > bestProb:
            bestProb = probability
            bestLabel = classValue
    return bestLabel

def getPredictions(summaries, testSet):
    predictions = []
    for i in range(len(testSet)):
        result = predict(summaries, testSet[i])
        predictions.append(result)
    return predictions

def getAccuracy(testSet, predictions):
    correct = 0
    for i in range(len(testSet)):
        if testSet[i][-1] == predictions[i]:
            correct += 1
    return (correct/float(len(testSet))) * 100.0

def main():
    filename = 'data.csv'
    splitRatio = 0.67
    dataset = loadCsv(filename)
    trainingSet, testSet = splitDataset(dataset, splitRatio)
    print('Split {0} rows into train={1} and test={2} rows'.format(len(dataset),
          len(trainingSet), len(testSet)))
    # prepare model
    summaries = summarizeByClass(trainingSet)
    # test model
    predictions = getPredictions(summaries, testSet)
    accuracy = getAccuracy(testSet, predictions)
    print('Accuracy: {0}%'.format(accuracy))

main()
SAMPLE OUTPUT:
RESULT:
Thus the program implementing the naïve Bayesian classifier for a sample training data set stored as a .CSV
file and computing the accuracy of the classifier on a few test data sets was implemented and executed.
EX.NO:08 Apply EM algorithm to cluster a set of data stored in a .CSV file. Use the same data set for
clustering using k-Means algorithm. Compare the results of these two algorithms and comment on the
quality of clustering. You can add Java/Python ML library classes/API in the program.
DATE:
AIM:
To develop a program that applies the EM algorithm to cluster a set of data stored in a .CSV file, uses the
same data set for clustering with the k-Means algorithm, and compares the results of the two algorithms,
commenting on the quality of clustering. Java/Python ML library classes/API may be used in the program.
EM algorithm:
These are the two basic steps of the EM algorithm, namely E Step or Expectation
Step or Estimation Step and M Step or Maximization Step.
Estimation step:
Initialize µk, Σk, and πk with some random values, or with the results of k-Means or hierarchical clustering.
Then, for those given parameter values, estimate the values of the latent variables (γk).
Maximization Step:
Update the values of the parameters (i.e. µk, Σk, and πk) re-estimated using the maximum-likelihood method.
1. Load the data set.
2. Initialize the mean µk, the covariance matrix Σk and the mixing coefficients πk with some random values
(or other values).
3. Compute the γk values for all k (E step).
4. Re-estimate all the parameters using the current γk values (M step).
5. Compute the log-likelihood function.
6. Apply a convergence criterion.
7. If the log-likelihood value converges to some value (or if all the parameters converge to some values),
then stop; else return to Step 3.
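The lab program below uses the ready-made GaussianMixture class, so the E and M steps stay hidden inside the library. A minimal illustrative sketch of the same loop for a one-dimensional mixture of two Gaussians, written in plain NumPy (the synthetic data and all variable names are assumptions of this sketch, not part of the lab program):
import numpy as np
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 1, 200)])
# Step 2: initialise the means, variances and mixing coefficients with guesses
mu = np.array([data.min(), data.max()])
var = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])
def gaussian(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
log_likelihood = -np.inf
for iteration in range(100):
    # E step (Step 3): responsibilities gamma_k of each component for every point
    weighted = pi * gaussian(data[:, None], mu, var)          # shape (n, 2)
    gamma = weighted / weighted.sum(axis=1, keepdims=True)
    # M step (Step 4): re-estimate the parameters from the responsibilities
    Nk = gamma.sum(axis=0)
    mu = (gamma * data[:, None]).sum(axis=0) / Nk
    var = (gamma * (data[:, None] - mu) ** 2).sum(axis=0) / Nk
    pi = Nk / len(data)
    # Steps 5-7: stop when the log-likelihood no longer improves
    new_ll = np.log(weighted.sum(axis=1)).sum()
    if abs(new_ll - log_likelihood) < 1e-6:
        break
    log_likelihood = new_ll
print("Estimated means:", mu)   # should come out close to the true means 0 and 5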
SOURCE CODE:
import numpy as np
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
from sklearn.mixture import GaussianMixture
import pandas as pd
X=pd.read_csv("kmeansdata.csv")
x1 = X['Distance_Feature'].values
x2 = X['Speeding_Feature'].values
X = np.array(list(zip(x1, x2))).reshape(len(x1), 2)
plt.plot()
plt.xlim([0, 100])
plt.ylim([0, 50])
plt.title('Dataset')
plt.scatter(x1, x2)
plt.show()
#code for EM
gmm = GaussianMixture(n_components=3)
gmm.fit(X)
em_predictions = gmm.predict(X)
print("\nEM predictions")
print(em_predictions)
print("mean:\n",gmm.means_)
print('\n')
print("Covariances\n",gmm.covariances_)
print(X)
plt.title('Expectation Maximization')
plt.scatter(X[:,0], X[:,1],c=em_predictions,s=50)
plt.show()
#code for Kmeans
import matplotlib.pyplot as plt1
kmeans = KMeans(n_clusters=3)
kmeans.fit(X)
print(kmeans.cluster_centers_)
print(kmeans.labels_)
plt.title('KMEANS')
plt1.scatter(X[:,0], X[:,1], c=kmeans.labels_, cmap='rainbow')
plt1.scatter(kmeans.cluster_centers_[:,0], kmeans.cluster_centers_[:,1], color='black')
plt1.show()
OUTPUT
EM predictions
[0 0 0 1 0 1 1 1 2 1 2 2 1 1 2 1 2 1 0 1 0 1 1]
mean:
[[57.70629058 25.73574491]
[52.12044022 22.46250453]
[46.4364858 39.43288647]]
Covariances
[[[83.51878796 14.926902 ]
[14.926902 2.70846907]]
[[29.95910352 15.83416554]
[15.83416554 67.01175729]]
[[79.34811849 29.55835938]
[29.55835938 18.17157304]]]
[[71.24 28. ]
[52.53 25. ]
[64.54 27. ]
[55.69 22. ]
[54.58 25. ]
[41.91 10. ]
[58.64 20. ]
[52.02 8. ]
[31.25 34. ]
[44.31 19. ]
[49.35 40. ]
[58.07 45. ]
[44.22 22. ]
[55.73 19. ]
[46.63 43. ]
[52.97 32. ]
[46.25 35. ]
[51.55 27. ]
[57.05 26. ]
[58.45 30. ]
[43.42 23. ]
[55.68 37. ]
[55.15 18. ]
RESULT:
Thus the program for the EM and k-Means algorithms was implemented and executed.
EX.NO:09 Write a program to implement k-Nearest Neighbour algorithm to classify the iris data set. Print
both correct and wrong predictions. Java/Python ML library classes can be used for this problem.
DATE:
AIM:
To develop a program to implement the k-Nearest Neighbour algorithm to classify the Iris data set,
printing both correct and wrong predictions. Java/Python ML library classes can be used for this problem.
K-Nearest-Neighbour Algorithm:
Confusion matrix:
Note,
• Class 1 : Positive
• Class 2 : Negative
• Positive (P) : Observation is positive (for example: is an apple).
• Negative (N) : Observation is not positive (for example: is not an apple).
• True Positive (TP) : Observation is positive, and is predicted to be positive.
• False Negative (FN) : Observation is positive, but is predicted negative. (Also known as a
"Type II error.")
• True Negative (TN) : Observation is negative, and is predicted to be negative.
• False Positive (FP) : Observation is negative, but is predicted positive. (Also known as a
"Type I error.")
SOURCE CODE:
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score
# load the Iris data set and split it into train and test sets (split ratio assumed)
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.25)
classifier = KNeighborsClassifier(n_neighbors=8, p=3, metric='euclidean')
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print('Confusion matrix is as follows\n', cm)
print('Accuracy Metrics')
print(classification_report(y_test, y_pred))
print("correct prediction", accuracy_score(y_test, y_pred))
print("wrong prediction", (1 - accuracy_score(y_test, y_pred)))
SAMPLE OUTPUT:
RESULT:
Thus the program for the k-Nearest Neighbour algorithm was implemented and executed.
EX.NO:10 Implement the non-parametric Locally Weighted Regression algorithm in order to fit data points.
Select appropriate data set for your experiment and draw graphs.
DATE:
AIM:
To develop a program for the non-parametric Locally Weighted Regression algorithm in order to fit
data points. Select appropriate data set for your experiment and draw graphs.
SOURCE CODE:
import numpy as np
from bokeh.plotting import figure, show, output_notebook
from bokeh.layouts import gridplot
from bokeh.io import push_notebook
def local_regression(x0, X, Y, tau):
    # add bias term
    x0 = np.r_[1, x0]                    # add one to avoid the loss in information
    X = np.c_[np.ones(len(X)), X]
    # fit model: normal equations with kernel
    xw = X.T * radial_kernel(x0, X, tau)      # X transpose * W
    beta = np.linalg.pinv(xw @ X) @ xw @ Y    # @ is matrix multiplication (dot product)
    # predict value
    return x0 @ beta                     # prediction for the query point x0

def radial_kernel(x0, X, tau):
    # weight (radial kernel) bias function
    return np.exp(np.sum((X - x0) ** 2, axis=1) / (-2 * tau * tau))
n = 1000
# generate dataset
X = np.linspace(-3, 3, num=n)
print("The Data Set ( 10 Samples) X :\n",X[1:10])
Y = np.log(np.abs(X ** 2 - 1) + .5)
print("The Fitting Curve Data Set (10 Samples) Y :\n",Y[1:10])
# jitter X
X += np.random.normal(scale=.1, size=n)
print("Normalised (10 Samples) X :\n",X[1:10])
domain = np.linspace(-3, 3, num=300)
print(" Xo Domain Space(10 Samples) :\n",domain[1:10])
def plot_lwr(tau):
    # prediction through regression
    prediction = [local_regression(x0, X, Y, tau) for x0 in domain]
    plot = figure(plot_width=400, plot_height=400)
    plot.title.text = 'tau=%g' % tau
    plot.scatter(X, Y, alpha=.3)
    plot.line(domain, prediction, line_width=2, color='red')
    return plot

# draw the fitted curves for several bandwidth values tau
show(gridplot([
    [plot_lwr(10.), plot_lwr(1.)],
    [plot_lwr(0.1), plot_lwr(0.01)]
]))
RESULT:
Thus the program for the non-parametric Locally Weighted Regression algorithm was implemented and executed.