
MEDICAL IMAGE PROCESSING

Bich.Le
School of Biomedical Engineering,
International University
Feature recognition
and classification
Chapter 7

Bich.Le
Chapter learning outcomes
By the end of this chapter, students should be able to:

1. Present the definition of image feature recognition and classification

2. Apply the image feature recognition and classification algorithms

3. Write Python code for some basic image feature recognition and
classification applications
Pre-Class discussion

Why do we need feature recognition and classification?


Feature
• A feature usually refers to a region of an image with some interesting geometric or
topological properties.
Feature Recognition
[Block diagram: input samples pass through a Feature Detector; a Comparator matches the detected features against a Library to produce the recognition output.]
Template matching and cross-correlation

Simple Template Matching
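The simple template-matching figure is not reproduced here; for reference, the normalized cross-correlation score computed by sliding-window template matchers (e.g., scikit-image's match_template, used later in this chapter, or OpenCV's TM_CCOEFF_NORMED mode) can be written as:

$$
R(x,y)=\frac{\sum_{x',y'}\big(T(x',y')-\bar T\big)\big(I(x+x',\,y+y')-\bar I_{x,y}\big)}
{\sqrt{\sum_{x',y'}\big(T(x',y')-\bar T\big)^{2}\,\sum_{x',y'}\big(I(x+x',\,y+y')-\bar I_{x,y}\big)^{2}}}
$$

where $T$ is the template, $\bar T$ its mean, and $\bar I_{x,y}$ the mean of the image patch under the template; $R$ ranges from $-1$ to $1$, with values near $1$ indicating a strong match.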
Template matching

Main Components of Feature Detection and Matching

• Detection: identify the interest points.
• Description: the local appearance around each feature point is described in some way that is (ideally) invariant under changes in illumination, translation, scale, and in-plane rotation. We typically end up with a descriptor vector for each feature point.
• Matching: descriptors are compared across the images to identify similar features. For two images we may get a set of pairs (Xi, Yi) ↔ (Xi′, Yi′), where (Xi, Yi) is a feature in one image and (Xi′, Yi′) is its matching feature in the other image.
Template matching
Interest Point
An interest point (or feature point) is a point that is expressive in texture: a point at which the direction of the object's boundary changes abruptly, or an intersection point between two or more edge segments.
Template matching

Properties of an Interest Point

• It has a well-defined (well-localized) position in image space.
• It is stable under local and global perturbations in the image domain, such as illumination/brightness variations, so that the interest points can be reliably computed with a high degree of repeatability.
• It should allow efficient detection.

Possible Approaches
• Based on the brightness of an image (usually via image derivatives).
• Based on boundary extraction (usually via edge detection and curvature analysis).
Template matching

Feature Descriptor
A feature descriptor is an algorithm which takes an image
and outputs feature descriptors/feature vectors. Feature
descriptors encode interesting information into a series of
numbers and act as a sort of numerical “fingerprint” that can
be used to differentiate one feature from another.
Template matching

Algorithms for Identification

• Harris Corner
• SIFT (Scale-Invariant Feature Transform)
• SURF (Speeded-Up Robust Features)
• FAST (Features from Accelerated Segment Test)
• ORB (Oriented FAST and Rotated BRIEF)
Template matching

•Harris Corner
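The original slides present the Harris equations and illustrations as figures. As a brief reminder (a standard formulation, not reproduced from those figures), the detector builds a structure matrix from image gradients and scores each pixel with a corner response:

$$
M=\sum_{x,y} w(x,y)\begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix},
\qquad
R=\det(M)-k\,\big(\operatorname{trace}(M)\big)^2
$$

where $I_x, I_y$ are the image derivatives, $w(x,y)$ is a window function (often Gaussian), and $k$ is an empirical constant, typically 0.04-0.06. Pixels with a large positive $R$ are corners; the value 0.04 passed to cv2.cornerHarris in the code below is this $k$.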
import numpy as np
import cv2

#%matplotlib inline

# Read in the image
image = cv2.imread('shoe.jpg')

# Make a copy of the image
image_copy = np.copy(image)

# Change color to RGB (from BGR)
image_copy = cv2.cvtColor(image_copy, cv2.COLOR_BGR2RGB)

# Note: cv2.imshow expects BGR input, so this preview shows swapped
# channels; the RGB copy is intended for matplotlib display.
cv2.imshow('image_copy', image_copy)
Template matching

•Harris Corner

# Convert to grayscale
gray = cv2.cvtColor(image_copy, cv2.COLOR_RGB2GRAY)
gray = np.float32(gray)

# Detect corners: blockSize=2, Sobel aperture ksize=3, Harris parameter k=0.04
dst = cv2.cornerHarris(gray, 2, 3, 0.04)

# Dilate corner image to enhance corner points
dst = cv2.dilate(dst, None)

cv2.imshow('dst', dst)
Template matching
•Harris Corner
# This value varies depending on the image and how many corners you want to detect
# Try changing this free parameter, 0.1, to be larger or smaller and see what happens
thresh = 0.1 * dst.max()

# Create an image copy to draw corners on
corner_image = np.copy(image_copy)

# Iterate through all the corners and draw them on the image (if they pass the threshold)
for j in range(0, dst.shape[0]):
    for i in range(0, dst.shape[1]):
        if dst[j, i] > thresh:
            # image, center pt, radius, color, thickness
            cv2.circle(corner_image, (i, j), 1, (0, 255, 0), 1)

cv2.imshow('corner image', corner_image)
cv2.waitKey(0)
Template matching

•SIFT(Scale Invariant Feature Transform)

The Harris corner detector is rotation-invariant: even if the image is rotated, we can find the same corners, since corners remain corners in the rotated image. But what about scaling? A corner may not be a corner if the image is scaled. For example, a corner in a small image, viewed within a small window, becomes flat when the image is zoomed but the window stays the same size. So the Harris corner detector is not scale-invariant.
Template matching

•SIFT(Scale Invariant Feature Transform)

The Scale-Invariant Feature Transform (SIFT) algorithm is extensively employed in computer vision applications for detecting and describing key points that remain invariant under scale, rotation, and affine transformations. The method employs a scale-space pyramid and conducts blob detection across various scales. Local gradient orientations are taken into account when constructing descriptors for each key point. The following are the fundamental principles underlying the SIFT algorithm:
Template matching
•SIFT(Scale Invariant Feature Transform)
Lowe approximated the Laplacian of Gaussian (LoG) with the Difference of Gaussians (DoG) for building the scale space.

The ideal number of octaves is four, and for each octave the number of blurred images is five.
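As standard SIFT background (for reference; the slides' own figures are not reproduced), the Difference of Gaussians at scale $\sigma$ is the difference of two adjacent Gaussian-blurred images:

$$
D(x,y,\sigma)=L(x,y,k\sigma)-L(x,y,\sigma),
\qquad
L(x,y,\sigma)=G(x,y,\sigma)*I(x,y)
$$

where $I$ is the input image, $G$ is a Gaussian kernel, and $k$ is the constant scale ratio between adjacent blur levels within an octave.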
Template matching

•SIFT(Scale Invariant Feature Transform)

Keypoint Localization
Once the images have been created, the next step is to find the important keypoints in the image that can be used for feature matching. The idea is to find the local maxima and minima of the images. This part is divided into two steps:
1. Find the local maxima and minima
2. Remove low-contrast keypoints (keypoint selection)
Template matching

•SIFT(Scale Invariant Feature Transform)

The pixel marked x is compared with its neighboring pixels (in green) and is selected as a keypoint (interest point) if it is the highest or lowest among the neighbors.

We then eliminate the keypoints that have low contrast or lie very close to an edge.
Template matching

•SIFT(Scale Invariant Feature Transform)

Orientation Assignment
Now an orientation is assigned to each keypoint to achieve invariance to image rotation. A neighbourhood is taken around the keypoint location depending on the scale, and the gradient magnitude and direction are calculated in that region. An orientation histogram with 36 bins covering 360 degrees is created (it is weighted by gradient magnitude and by a Gaussian-weighted circular window with σ equal to 1.5 times the scale of the keypoint). The highest peak in the histogram is taken, and any peak above 80% of it is also used to calculate the orientation. This creates keypoints with the same location and scale but different directions, which contributes to the stability of matching.
Template matching

SIFT in OpenCV

import numpy as np
import cv2 as cv
img = cv.imread('home.jpg')
gray= cv.cvtColor(img,cv.COLOR_BGR2GRAY)
sift = cv.SIFT_create()
kp = sift.detect(gray,None)
img=cv.drawKeypoints(gray,kp,img)
cv.imwrite('sift_keypoints.jpg',img)

The sift.detect() function finds the keypoints in the images. You can pass a mask if you want to search only a part of the image. Each keypoint is a special structure with many attributes, such as its (x, y) coordinates, the size of the meaningful neighbourhood, the angle which specifies its orientation, the response that specifies the strength of the keypoint, etc.
Template matching

SIFT in OpenCV

OpenCV also provides the cv.drawKeypoints() function, which draws small circles at the locations of the keypoints. If you pass the flag cv.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS, it will draw a circle with the size of the keypoint and even show its orientation. See the example below.

img = cv.drawKeypoints(gray, kp, img, flags=cv.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv.imwrite('sift_keypoints.jpg', img)
Template matching
SIFT in OpenCV

Now, to calculate the descriptor, OpenCV provides two methods:

1. Since you already found keypoints, you can call sift.compute(), which computes the descriptors from the keypoints we have found. E.g.: kp, des = sift.compute(gray, kp)
2. If you didn't find keypoints, find keypoints and descriptors directly in a single step with the function sift.detectAndCompute().
Template matching
•SURF(Speeded Up Robust Feature)

The SURF (Speeded-Up Robust Features) algorithm, introduced by Herbert Bay et al. in 2006, was designed to address certain drawbacks of existing feature detection techniques such as SIFT while still ensuring computational efficiency. The SURF algorithm was developed to speed up the feature detection and matching procedures. The use of integral images for the computation of box filters enhances computational efficiency. The SURF algorithm primarily aims to detect points that exhibit significant variations in intensity across multiple directions, thereby ensuring its ability to handle variations in both scale and orientation.
Template matching
•SURF(Speeded Up Robust Feature)

SURF approximates the LoG with box filters, which can be computed very fast using integral images.

SURF relies on the determinant of the Hessian matrix for both scale and location.
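For reference (standard SURF background rather than content from the slides' figures), the box-filter responses $D_{xx}$, $D_{yy}$, and $D_{xy}$ approximate the Gaussian second derivatives, and interest points are scored by the approximated Hessian determinant:

$$
\det(H_{\mathrm{approx}})=D_{xx}D_{yy}-(w\,D_{xy})^2,\qquad w\approx 0.9
$$

where the weight $w$ compensates for the box-filter approximation.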
Template matching
•SURF(Speeded Up Robust Feature)

For orientation assignment, SURF uses wavelet responses in the horizontal and vertical directions in a neighbourhood of size 6s (where s is the scale at which the keypoint was detected). Adequate Gaussian weights are also applied.

The dominant orientation is estimated by calculating the sum of all responses within a sliding orientation window of angle 60 degrees.
Template matching
•SURF(Speeded Up Robust Feature)

SURF adds a lot of features to improve the speed at every step. Analysis shows it is three times faster than SIFT, while performance is comparable. SURF is good at handling images with blurring and rotation, but not good at handling viewpoint changes and illumination changes.
Template matching
•SURF(Speeded Up Robust Feature)

import cv2 as cv

img = cv.imread('fly.png', cv.IMREAD_GRAYSCALE)

# Create SURF object. You can specify params here or later.
# Here I set the Hessian Threshold to 400.
# Note: SURF is patented and lives in the opencv-contrib xfeatures2d module;
# it may be unavailable in default OpenCV builds.
surf = cv.xfeatures2d.SURF_create(400)
# Find keypoints and descriptors directly
kp, des = surf.detectAndCompute(img, None)
len(kp)

There are too many keypoints to show in a picture. We reduce the count to around 50 to draw on the image. While matching we may need all those features, but not now. So we increase the Hessian threshold.
Template matching
•SURF(Speeded Up Robust Feature)

# Check the present Hessian threshold
print( surf.getHessianThreshold() )

# We set it to some 50000. Remember, it is just for representing in a picture.
# In actual cases, it is better to have a value of 300-500.
surf.setHessianThreshold(50000)
# Again compute keypoints and check their number
kp, des = surf.detectAndCompute(img, None)
print( len(kp) )

It is less than 50. Let's draw it on the image.

import matplotlib.pyplot as plt

img2 = cv.drawKeypoints(img, kp, None, (255, 0, 0), 4)
plt.imshow(img2)
plt.show()
Template matching
•BRIEF (Binary Robust Independent Elementary Features)

The Binary Robust Independent Elementary Features (BRIEF) algorithm is utilized to extract feature descriptors from images, capturing localized image information and encoding it in a concise binary format. BRIEF was specifically developed with computational efficiency in mind, rendering it well suited to real-time implementations in computer vision and image processing.

Limitations:
The simplicity of BRIEF is accompanied by certain drawbacks. In situations characterized by substantial variations in scale, rotation, or lighting, its performance may be inferior to more advanced descriptors such as SIFT or SURF.
Template matching
•BRIEF (Binary Robust Independent Elementary Features)
import numpy as np
import cv2
import matplotlib.pyplot as plt

# Load the image
img = cv2.imread('./images/irene.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img2 = None

# Create a BRIEF object (requires opencv-contrib)
# Start the STAR detector first
star = cv2.xfeatures2d.StarDetector_create()
# BRIEF extractor
brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()

# Detect keypoints with STAR and compute descriptors with BRIEF
kp1 = star.detect(img, None)
kp2, des = brief.compute(img, kp1)
img2 = cv2.drawKeypoints(img, kp1, img2, (255, 0, 0))

# Display results
cv2.imshow('Result', img2)
cv2.waitKey()
cv2.destroyAllWindows()
Template matching
Oriented FAST and Rotated BRIEF (ORB)

The Oriented FAST and Rotated BRIEF (ORB) algorithm is a feature detection and description technique developed to offer a rapid and effective alternative to existing algorithms such as SIFT and SURF. ORB was introduced by Rublee et al. in 2011. It integrates the advantages of FAST keypoint detection and BRIEF descriptor computation, while also incorporating orientation information to enhance its accuracy.
Template matching
Oriented FAST and Rotated BRIEF (ORB)

Advantages:

• The ORB algorithm is specifically developed for real-time applications, exhibiting superior speed compared to algorithms such as SIFT and SURF.
• The ORB algorithm offers descriptors that are invariant to rotation, enabling precise matching regardless of orientation variations.

Limitations:

• The ORB feature descriptor may exhibit reduced robustness compared to SIFT or SURF descriptors when faced with challenging lighting conditions or significant viewpoint changes.
Template matching
Oriented FAST and Rotated BRIEF (ORB)
Example Python code for the ORB algorithm
import cv2

# Load the image
image_path = 'image.jpg'
image = cv2.imread(image_path)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Create an ORB object
orb = cv2.ORB_create()

# Detect keypoints and compute descriptors
keypoints, descriptors = orb.detectAndCompute(gray, None)

# Draw keypoints on the image
image_with_keypoints = cv2.drawKeypoints(image, keypoints, None, (0, 255, 0), 4)

# Display the image with keypoints
cv2.imshow('ORB Keypoints', image_with_keypoints)
cv2.waitKey(0)
cv2.destroyAllWindows()
Template matching

Descriptors can be categorized into two classes:

• Local Descriptor: a compact representation of a point's local neighborhood. Local descriptors try to capture shape and appearance only in a local neighborhood around a point, and are thus very suitable for representing it for matching.
• Global Descriptor: a global descriptor describes the whole image. Global descriptors are generally not very robust, because a change in one part of the image affects the resulting descriptor and may cause matching to fail.
Algorithms
• SIFT (Scale-Invariant Feature Transform)
• SURF (Speeded-Up Robust Features)
• BRISK (Binary Robust Invariant Scalable Keypoints)
• BRIEF (Binary Robust Independent Elementary Features)
• ORB (Oriented FAST and Rotated BRIEF)
Template matching

Algorithm for Feature Detection and Matching

• Find a set of distinctive keypoints
• Define a region around each keypoint
• Extract and normalize the region content
• Compute a local descriptor from the normalized region
• Match local descriptors

Algorithms
• Brute-Force Matcher
• FLANN (Fast Library for Approximate Nearest Neighbors) Matcher
Template matching
Brute-Force Matcher
The brute-force matcher is simple. It takes the descriptor of one feature in the first set and matches it against all features in the second set using some distance calculation, and the closest one is returned.
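For binary descriptors such as ORB or BRIEF, Hamming distance is the appropriate metric rather than the default L2 norm. A minimal, self-contained sketch (the random byte arrays are hypothetical stand-ins for real ORB descriptors):

import cv2 as cv
import numpy as np

# Hypothetical 32-byte binary descriptors standing in for real ORB output,
# just to make the snippet self-contained
rng = np.random.default_rng(0)
des1 = rng.integers(0, 256, (500, 32), dtype=np.uint8)
des2 = rng.integers(0, 256, (500, 32), dtype=np.uint8)

# Hamming distance for binary descriptors; crossCheck=True keeps only
# mutually-best matches (each feature must pick the other as its best match)
bf = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
print(len(matches), 'mutual matches')
if matches:
    print('best Hamming distance:', matches[0].distance)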
Template matching
Brute-Force Matching with SIFT Descriptors and Ratio Test
import numpy as np
import cv2 as cv

img1 = cv.imread('theincredible.jpg', cv.IMREAD_GRAYSCALE)   # queryImage
img2 = cv.imread('theincredible2.jpg', cv.IMREAD_GRAYSCALE)  # trainImage
# Initiate SIFT detector
sift = cv.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
# BFMatcher with default params
bf = cv.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)
# Apply ratio test
good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good.append([m])
# cv.drawMatchesKnn expects a list of lists as matches
img3 = cv.drawMatchesKnn(img1, kp1, img2, kp2, good, None,
                         flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)

cv.imshow('image3', img3)
cv.waitKey(0)
Template matching
Feature Matching with the FLANN-based Matcher

import numpy as np
import cv2 as cv

img1 = cv.imread('theincredible.jpg', cv.IMREAD_GRAYSCALE)   # queryImage
img2 = cv.imread('theincredible2.jpg', cv.IMREAD_GRAYSCALE)  # trainImage
# Initiate SIFT detector
sift = cv.SIFT_create()
# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
# FLANN parameters
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks = 50)  # or pass an empty dictionary
flann = cv.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)
# Need to draw only good matches, so create a mask
matchesMask = [[0, 0] for i in range(len(matches))]
# ratio test as per Lowe's paper
for i, (m, n) in enumerate(matches):
    if m.distance < 0.7 * n.distance:
        matchesMask[i] = [1, 0]
draw_params = dict(matchColor = (0, 255, 0),
                   singlePointColor = (255, 0, 0),
                   matchesMask = matchesMask,
                   flags = cv.DrawMatchesFlags_DEFAULT)
img3 = cv.drawMatchesKnn(img1, kp1, img2, kp2, matches, None, **draw_params)

cv.imshow('image3', img3)
cv.waitKey(0)
Example and detail steps for image classification using
template matching

Template images
Example and detail steps for image classification using
template matching

The detailed steps include:

Load Template and Input Images:
  o Load the template images, which contain the visual patterns you want to match against.
  o Load the input image that you want to classify, to determine whether it contains the pattern.
Convert Images to Grayscale:
  o Convert both the input image and the template image to grayscale, reducing them to a single channel. Grayscale images simplify comparisons.
Define the interest points/keypoints and descriptors using the aforementioned algorithms such as SIFT, SURF, BRIEF, ORB, etc.
Example and detail steps for image classification using
template matching

Calculate Matching Score:

Apply a Threshold:
  o Set a similarity threshold to determine whether the input image contains keypoints and descriptors similar to those of the template image. If the similarity score exceeds the threshold, the input image is classified as belonging to the template's class; otherwise, it is classified as not belonging to that class.
Visualize Classification Result:
Repeat for Different Classes:
  To perform multi-class image classification, repeat the process for each template if you have multiple templates representing distinct classes.
Example and detail steps for image classification using
template matching

Python code for template classification

import cv2 as cv
import numpy as np
import os

path = r"F:\feature_detect\Queery"
orb = cv.ORB_create(nfeatures=1000)

#### Import images
images = []
classNames = []
myList = os.listdir(path)
print(myList)
Example and detail steps for image classification using
template matching
Python code for template classification

for cl in myList:
    imgCur = cv.imread(f'{path}/{cl}', 0)
    images.append(imgCur)
    classNames.append(os.path.splitext(cl)[0])
print(classNames)

### Search image in the library
def findDes(images):
    desList = []
    for img in images:
        # Use the ORB algorithm to find the interest points and descriptors
        kp, des = orb.detectAndCompute(img, None)
        desList.append(des)
    return desList
Example and detail steps for image classification using
template matching
Python code for template classification

#### Search the ID and return the ID value
def findID(img, desList, thres=15):
    # The default threshold is 15 matching points; the user can revise it accordingly.
    kp2, des2 = orb.detectAndCompute(img, None)
    bf = cv.BFMatcher()
    matchList = []
    finalVal = -1  # -1 means "no match found"; valid list indices start at 0
    try:
        for des in desList:
            matches = bf.knnMatch(des, des2, k=2)
            good = []
            for m, n in matches:
                if m.distance < 0.75 * n.distance:
                    good.append([m])
            matchList.append(len(good))
            # print(matchList)
Example and detail steps for image classification using
template matching
Python code for template classification

    except:
        pass
    if len(matchList) != 0:
        if max(matchList) > thres:
            finalVal = matchList.index(max(matchList))
    return finalVal

desList = findDes(images)
print(len(desList))

# read images from the webcam
cap = cv.VideoCapture(1)
while True:
    success, img2 = cap.read()
    imgOriginal = img2.copy()
    img2 = cv.cvtColor(img2, cv.COLOR_BGR2GRAY)

Example and detail steps for image classification using
template matching
Python code for template classification

    # Call the findID function to search for a matching image in the library and return the ID
    id = findID(img2, desList)
    if id != -1:
        # Write the matching class name onto the image frame
        cv.putText(imgOriginal, classNames[id], (50, 50),
                   cv.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255), 2)
    cv.imshow("img2", imgOriginal)
    cv.waitKey(1)
Example and detail steps for image classification using template matching
Matching application in feature detection

# pip install scikit-image
import numpy as np
import matplotlib.pyplot as plt
import cv2 as cv
from skimage import data
from skimage.feature import match_template

# Read image
#image = data.coins()
image = cv.imread('chest_xray.jpeg', 0)
# take the reference object (template)
coin = image[170:220, 75:130]

# Perform template matching
result = match_template(image, coin)
ij = np.unravel_index(np.argmax(result), result.shape)
x, y = ij[::-1]

# Show the results
fig = plt.figure(figsize=(8, 3))
ax1 = plt.subplot(1, 3, 1)
ax2 = plt.subplot(1, 3, 2)
ax3 = plt.subplot(1, 3, 3, sharex=ax2, sharey=ax2)

ax1.imshow(coin, cmap=plt.cm.gray)
ax1.set_axis_off()
ax1.set_title('template')

ax2.imshow(image, cmap=plt.cm.gray)
ax2.set_axis_off()
ax2.set_title('image')
# highlight matched region
hcoin, wcoin = coin.shape
rect = plt.Rectangle((x, y), wcoin, hcoin, edgecolor='r', facecolor='none')
ax2.add_patch(rect)

ax3.imshow(result)
ax3.set_axis_off()
ax3.set_title('`match_template`\nresult')
# highlight matched region
ax3.autoscale(False)
ax3.plot(x, y, 'o', markeredgecolor='r', markerfacecolor='none', markersize=10)

plt.show()
Parametric Description
Successful Feature recognition applications:
• Face Recognition
• Fingerprint Recognition

These applications use feature-specific measurement parameters.

Parametric Description Method
[Figure 12.6, panels (a) and (b): a human face labeled with several of the principal vertical and horizontal dimensions used in facial identification.]
Uses different transformation parameters
Classification
• Classification: a classification problem is when the output variable is a category, such as "red" or "blue", or "disease" and "no disease".

• Imposed criteria (the expert system)
• Supervised classification (KNN)
• Unsupervised classification (cluster analysis)


Decision points
• Histogram parameter values overlap
• Need for a decision threshold with an acceptable error percentage
Multidimensional classification
• Histograms and probability distribution functions are plotted as a function of a single parameter.
• If plotted as a function of several parameters, classification would be easier.
Learning Systems
Overview of Supervised Learning Algorithm

In supervised learning, an AI system is presented with data which is labeled, which means that each data point is tagged with the correct label. The goal is to approximate the mapping function so well that when you have new input data (x), you can predict the output variable (Y) for that data.
Learning Systems
Overview of Supervised Learning Algorithm
Types of Supervised learning
•Classification: A classification problem is when the
output variable is a category, such as “red” or “blue”
or “disease” and “no disease”.
•Regression: A regression problem is when the
output variable is a real value, such as “dollars” or
“weight”.
Learning Systems
Overview of Unsupervised Learning Algorithm
In unsupervised learning, an AI system is presented with unlabeled,
uncategorized data and the system’s algorithms act on the data without
prior training. The output is dependent upon the coded algorithms.
Subjecting a system to unsupervised learning is one way of testing AI.
Learning Systems
Overview of Unsupervised Learning Algorithm
Types of Unsupervised learning
•Clustering: A clustering problem is where you want
to discover the inherent groupings in the data, such
as grouping customers by purchasing behavior.
•Association: An association rule learning problem
is where you want to discover rules that describe
large portions of your data, such as people that buy
X also tend to buy Y.
Learning Systems
Overview of Reinforcement Learning
A reinforcement learning algorithm, or agent, learns by interacting with its
environment. The agent receives rewards by performing correctly and
penalties for performing incorrectly. The agent learns without intervention
from a human by maximizing its reward and minimizing its penalty. It is a type
of dynamic programming that trains algorithms using a system of reward and
punishment.
K Nearest Neighbor
• Non-parametric method
• Contrary to the histogram or LDA method, it stores the actual n-dimensional coordinates of each identified feature
• Larger storage is required
• Processing power requirements increase

Classes A and B are previously identified features, so this is supervised classification.

Special case:
When k = 1, each training vector defines a region in space, yielding a Voronoi partition of the space. (A minimal OpenCV sketch of KNN classification follows below.)
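A minimal, self-contained sketch of supervised KNN classification using OpenCV's ml module (the two toy 2-D classes are hypothetical data, not from the slides):

import cv2 as cv
import numpy as np

# Two toy 2-D classes (hypothetical): class 0 clusters near (2, 2), class 1 near (8, 8)
rng = np.random.default_rng(1)
train = np.vstack([rng.normal(2, 1, (25, 2)),
                   rng.normal(8, 1, (25, 2))]).astype(np.float32)
labels = np.repeat([0, 1], 25).astype(np.float32).reshape(-1, 1)

# Store the labeled training vectors (KNN keeps the raw coordinates, as noted above)
knn = cv.ml.KNearest_create()
knn.train(train, cv.ml.ROW_SAMPLE, labels)

# Classify a new point by a vote among its k = 3 nearest neighbours
sample = np.array([[3.0, 4.0]], dtype=np.float32)
ret, results, neighbours, dist = knn.findNearest(sample, k=3)
print('predicted class:', int(results[0, 0]))  # expected: 0, since (3, 4) is nearer (2, 2)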
Clustering
• Cluster analysis or clustering is the task of grouping a
set of objects in such a way that objects in the same
group (called a cluster) are more similar (in some
sense or another) to each other than to those in
other groups (clusters)
Clustering

Hierarchical Clustering
Clustering

K-means Clustering

• K-means separates data into Voronoi cells, which assumes equal-sized clusters (a minimal sketch follows below).
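A minimal, self-contained sketch of k-means clustering with OpenCV (the two toy blobs are hypothetical data, not from the slides):

import cv2 as cv
import numpy as np

# Toy 2-D points drawn from two blobs (hypothetical data)
rng = np.random.default_rng(2)
pts = np.vstack([rng.normal(0, 1, (50, 2)),
                 rng.normal(5, 1, (50, 2))]).astype(np.float32)

# Stop after 10 iterations or once the centers move less than 1.0
criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 10, 1.0)
compactness, labels, centers = cv.kmeans(pts, 2, None, criteria,
                                         10, cv.KMEANS_RANDOM_CENTERS)
print('cluster centers:\n', centers)  # expected: near (0, 0) and (5, 5)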
Expert System/Decision tree
Expert System/Decision tree

• Rules are supplied by a human expert.
• The order of execution of the rules is determined by the system software.
Expert System/Decision tree

• Simple classification systems like this are sometimes called decision trees or production rules, consisting of an ordered set of IF…THEN relationships (rules).

• Our previous example was a binary decision tree.

• Most real expert systems have far more rules than this one, and the order in which they are to be applied is not necessarily obvious.

• It is a feed-forward structure.
• This approach does not test all possible paths from observations to conclusions.
• Heuristics to control the order in which possible paths are tested are very important. (A toy production-rule sketch follows below.)
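A toy production-rule classifier in the IF…THEN style described above (the rules, feature names, and thresholds are illustrative assumptions, not from the slides):

# Ordered IF...THEN rules applied top to bottom (a feed-forward structure):
# the first rule whose condition fires determines the class
def classify_region(area, circularity):
    if area < 50:
        return 'debris'          # too small to be a cell
    if circularity > 0.9:
        return 'round cell'
    if circularity > 0.6:
        return 'irregular cell'
    return 'unknown'

print(classify_region(area=120, circularity=0.95))  # -> round cell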
Review
1. Why do we need feature recognition?
2. What is a feature?
3. What makes up an object's features?
4. How do we recognize object features?
5. Present the template matching procedure.
6. What is classification?
7. What are the applications of classification in the biomedical field?
8. What is a decision point?
9. List the methods for classification.
10. What is machine learning, and how is machine learning applied in classification?
11. Present the learning system.
12. Write Python code for feature recognition and image classification.
Python library for image processing
Scikit-image:
https://scikit-image.org/docs/dev/api/skimage.restoration.html

OpenCV:
Morphological operations: http://datahacker.rs/006-morphological-transformations-with-opencv-in-python/
https://docs.opencv.org/4.5.2/d2/d96/tutorial_py_table_of_contents_imgproc.html
https://analyticsindiamag.com/image-processing-with-opencv-in-python/
https://likegeeks.com/python-image-processing/
https://stackabuse.com/introduction-to-image-processing-in-python-with-opencv
Python image processing with OpenCV - Tutorial

https://www.youtube.com/watch?v=WQeoO7MI0Bs
Image databases
www.aylward.org/notes/open-access-medical-image-repositories

brain-development.org/ixi-dataset/
Points of Reflection on Today’s Class
Please briefly describe your insights on the following points from today’s class.

•Point of Interest: Describe what you found most interesting in today’s class.
How Interesting? (circle) Little Bit 1 2 3 4 5 Very Much

•Muddiest Point: Describe what was confusing or needed more detail.


How Muddy? (circle) Little Bit 1 2 3 4 5 Very Much

•Learning Point: Describe what you learned about how you learn?

Letter + 4 digit number ______________ F M


Class Topic: _______________________Date: ________________

Bich.Le
