
Term Paper

on

IMAGE RECOGNITION USING MACHINE LEARNING

submitted in partial fulfillment of the requirements


for the award of the degree
of

Bachelor of Technology
in

Computer Science Engineering

By

ANMOL BOHARE
Enrollment No. A60205222318

Under the guidance of

Mr. VINAY KUMAR SINGH


Associate Professor

Department of Computer Science Engineering


Amity School of Engineering & Technology
Amity University Madhya Pradesh, Gwalior
October, 2023

Department of Computer Science Engineering


Amity School of Engineering and Technology
Amity University Madhya Pradesh, Gwalior

DECLARATION

I, ANMOL BOHARE, student of Bachelor of Technology in Computer Science Engineering, hereby declare that the Term Paper entitled “Image Recognition Using Machine Learning”, which is submitted by me to the Department of Computer Science Engineering, Amity School of Engineering & Technology, Amity University Madhya Pradesh, in partial fulfillment of the requirement for the award of the degree of Bachelor of Technology in Computer Science Engineering, has not previously formed the basis for the award of any degree, diploma or other similar title or recognition.

ANMOL BOHARE
Date: 27/10/2023 (Enrollment No. A60205222318)

Department of Computer Science Engineering
Amity School of Engineering and Technology
Amity University Madhya Pradesh, Gwalior

CERTIFICATE

This is to certify that Anmol Bohare (Enrollment No. A60205222318), student of B.Tech (CSE) III semester, Department of Computer Science Engineering, ASET, Amity University Madhya Pradesh, has written his Term Paper entitled “Image Recognition Using Machine Learning” under my guidance and supervision.

The work was satisfactory. He has shown complete dedication and devotion to the given work.

Date: 27/10/2023

(VINAY KUMAR SINGH) (Dr. Vikash Thada)


Associate Professor Head of the Department
Supervisor

ACKNOWLEDGEMENT

I am very much thankful to our honorable Vice Chancellor Lt Gen. V. K. Sharma, AVSM (Retd), for allowing me to write this term paper. I would also like to thank Prof. (Dr.) M. P. Kaushik, Pro-Vice Chancellor, Amity University Madhya Pradesh, for his support.

I extend my sincere thanks to Maj. Gen. (Dr.) S. C. Jain, VSM** (Retd), HOI, Amity School of Engineering and Technology, Amity University Madhya Pradesh, Gwalior, for his guidance and support in writing my term paper. I would also like to thank Prof. (Dr.) Vikash Thada, Head of Department (CSE), for his kind concern throughout the term paper.

I am very much grateful to Dr. Vinay Kumar Singh, Associate Professor, Department of Computer Science Engineering, Amity School of Engineering and Technology, Amity University Madhya Pradesh, my supervisor, for his constant guidance and encouragement in this endeavor.

I am also thankful to the whole staff of ASET, AUMP, for always teaching and helping me. Last but not least, I would like to thank my parents and friends for their constant support.

Anmol Bohare
Enroll No.-A60205222318

ABSTRACT
Face recognition systems play a crucial role in various consumer applications, demanding a
balance between real-time processing and high accuracy. In this paper, we address the
challenge of achieving near real-time results with improved accuracy by exploring different
classification techniques: Support Vector Machine (SVM), Linear Discriminant Analysis (LDA),
and K-Nearest Neighbor (KNN).

While SVM proves to be effective, we observe that KNN, although equally potent, faces
limitations in real-world applications due to high response times when dealing with high-
dimensional data. To mitigate this challenge, we propose a feature reduction technique
using Principal Component Analysis (PCA). This technique aims to streamline the face
recognition process, making it more suitable for real-time applications without
compromising accuracy.

Our approach involves applying KNN after reducing the number of features using PCA. We
conduct extensive tests and comparisons with various classification approaches, including
SVM, KNN, KNN with PCA, LDA, and LDA with PCA, utilizing a benchmark dataset. Through
our experiments, we demonstrate the effectiveness of KNN with PCA in achieving the
desired balance between near real-time performance and enhanced accuracy, surpassing
the performance of SVM and LDA.

The experimental results showcase the potential of our proposed approach in meeting the
stringent requirements of consumer applications where quick and accurate face recognition
is paramount. Additionally, we discuss the implications of our findings and potential areas
for future research, emphasizing the significance of optimizing face recognition systems for
real-world, high-dimensional datasets. Our work contributes to the ongoing efforts to
enhance the practical applicability of face recognition technology in diverse consumer-
oriented domains.

LIST OF ABBREVIATIONS

S. No.   Abbreviation   Expanded Form
1        SVM            Support Vector Machine
2        LDA            Linear Discriminant Analysis
3        KNN            K-Nearest Neighbor
4        PCA            Principal Component Analysis

CONTENTS

1. Abstract

1.1 Background

1.2 Objectives

1.3 Methodology

1.4 Results

1.5 Conclusion

2. Introduction

2.1 Motivation

2.2 Challenges in Face Recognition

2.3 Scope of the Paper

3. Related Work

3.1 Face Detection Techniques

3.2 Face Recognition Approaches

3.3 Subspace Representation in Face Recognition

4. Face Recognition System

4.1 Feature Extraction

4.2 Classification Techniques

4.2.1 Linear Discriminant Analysis (LDA)

4.2.2 K-nearest Neighbor (KNN)

4.2.3 Support Vector Machine (SVM)

4.2.4 Feature Reduction: Principal Component Analysis (PCA)

5. Experimental Results

5.1 Dataset Used

5.2 Accuracy Analysis

5.3 Response Time Evaluation

5.4 Impact of K-value in KNN

5.5 Comparison of Different Classifiers

6. Conclusion and Future Work

6.1 Summary of Findings

6.2 Contributions

6.3 Limitations and Challenges

6.4 Future Directions

7. References

1. Introduction

With the advancement of digital technology, we need to consider what the future of computing in our daily activities will be and how it will affect our lives. We would like to develop a face recognition system that can be embedded in a home environment to facilitate intelligent services. By automatically identifying home users, personalized services can be offered. For example, a face recognition-based smart TV can offer a set of programs customized to the recognized user: after recognizing a user's face, the corresponding user profile is identified, matched with TV programs, and presented to the user.

The current state of the art considers many face processing techniques; however, face recognition in an unconstrained environment, such as a home, is a challenging task [1, 16, 18, 23] due to the large variability of illumination and background conditions. In addition, face recognition for consumer applications requires processing efficiency and accuracy simultaneously [17], and it is difficult to achieve both at the same time. If we want results in real time, we may have to sacrifice accuracy, in which case false positives and false negatives may increase. On the other hand, we can improve accuracy by allowing non-real-time detection. In this paper, we sketch a solution that performs real-time face recognition and serves as a perceptual interface for home devices. It is important to note that we use the current state of the art for face detection [2, 4, 5, 6, 7, 14, 15]; in other words, our focus is not face detection research but face recognition.

In this paper, we use various classification techniques, namely, Support Vector Machine (SVM) [10, 21], Linear Discriminant Analysis (LDA) [8, 13, 18, 20] and K-nearest Neighbor (KNN) [9]. SVM is a powerful technique that can predict not only seen data but also unseen data, and it works well for both linearly and non-linearly separable datasets. However, when dealing with many classes, the predictive power of SVM may decrease. LDA is a powerful technique for predicting seen data; however, it cannot predict unseen data, it may not work for non-linearly separable datasets, and its performance degrades in particular when the dataset is high dimensional. KNN is a simple classification model that exploits lazy learning. Since each test image is compared against all training images, KNN suffers from high response time, and it does not work well when the data is high dimensional. For dimensionality reduction, we exploit PCA [11, 19] and select important features in the projected space. These new projected features are then used for testing in the KNN case and for both training and testing of the LDA classifier. PCA can be applied in two ways: one choice is to apply PCA to the entire training set independently of class, which we call global PCA (GPCA); the other is to apply PCA separately for each individual class, which we call local PCA (LPCA). LPCA takes into account individual class features.

In this paper, we use a benchmark dataset. We demonstrate that KNN is as good as SVM in terms of classification accuracy. However, to speed up retrieval in the case of KNN, we exploit global PCA (GPCA). We demonstrate that KNN with GPCA is as accurate as plain KNN for face recognition. We also demonstrate that the classification accuracy of LDA can be improved by local PCA (LPCA).

The organization of this paper is as follows. Related work is discussed in Section 2. The design of our face recognition system is described in Section 3. Our results are discussed in Section 4. The paper is concluded with a discussion of research directions in Section 5.

2. Related work

Recently, a significant amount of research has been done on face detection and recognition. With regard to face detection, a variety of image feature capturing algorithms have been developed, based on, for example, color, image pixels in the spatial domain, and transformed image signals in the DCT/Wavelet domain. Various classifiers built on these features are used to identify face regions against non-face background images. For example, a face detector based on skin color segmentation [2] is relatively fast; however, color features alone are not very accurate in detecting faces in a picture. In contrast, in [3] a support vector machine has been used for face detection to gain accuracy, but it incurs high training/processing complexity. In addition, to build a face detector that is highly efficient and robust, a cascade detector is proposed [17, 23] that employs successive face detectors with incremental complexity and detection capability.

With regard to face recognition, subspace representation has been widely used. 'Eigenface' [26] and 'Fisherface' [27, 28] are two approaches that fall into this category. The 'Eigenface' approach derives its subspace from principal component analysis (PCA), while the 'Fisherface' approach uses Fisher discriminant analysis (FDA). Furthermore, there is an increasing trend to apply kernel subspace representations [12, 28].

3. Face recognition system

First, we extract features from the images. Second, we train classifiers on the training images and generate models for the classes. Finally, these classification models are used to predict test images (see Figure 1).

[Figure 1 depicts the pipeline: face detection, followed by feature extraction and normalization, feeding five classification paths (LDA, LPCA+LDA, KNN, GPCA+KNN, and SVM), each producing a recognition outcome.]

Figure 1. Steps in face recognition system

3.1. Feature extraction

A 2-dimensional face image is represented as a vector formed by concatenating each row (or column) of the image. For example, a 30×32 two-dimensional image is represented by a vector of 960 dimensions.
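As a minimal sketch of this step (using NumPy, with a hypothetical random pixel array standing in for a real face image), the concatenation can be written as:

    import numpy as np

    # Hypothetical 30x32 gray-scale face image (pixel intensities in [0, 255]).
    image = np.random.randint(0, 256, size=(30, 32))

    # Row concatenation: flatten row-major into a 960-dimensional vector.
    feature_vector = image.flatten()
    # Column concatenation would instead be image.flatten(order='F').
    assert feature_vector.shape == (960,)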

3.2. Classification

In this paper, we study various classifiers, namely, Linear Discriminant Analysis (LDA), K-nearest Neighbor (KNN) and Support Vector Machine (SVM).

Let X = (x1, x2, …, xi, …, xN) represent the training images, where each xi is a feature vector of dimension n. This vector is formed by concatenating the rows of an l×m face image, where l×m = n. Here, n is the total number of pixels in the face image and N is the total number of training images.

3.2.1. Linear discriminant analysis (LDA). Here, we generate a linear transformation that maps the original image vector to a projection feature vector, i.e.,

Y = W^T X    (1)

where W is an n×d transformation matrix and Y is the d×N projected matrix, with d << n. LDA finds the W such that

W_{LDA} = \arg\max_W \frac{|W^T S_B W|}{|W^T S_W W|}    (2)

where S_B is the between-class scatter matrix and S_W is the within-class scatter matrix:

S_B = \sum_{i=1}^{c} N_i (\mu_i - \mu)(\mu_i - \mu)^T    (3)

S_W = \sum_{i=1}^{c} \sum_{x_k \in X_i} (x_k - \mu_i)(x_k - \mu_i)^T    (4)

Here, N_i is the number of training samples in class i, c is the number of distinct classes, \mu_i is the mean vector of the training data that belongs to class i, x_k is a training sample that belongs to class i, and \mu is the total mean over all the training images X.

Hence, the vector of a test image is transformed and classified using the Euclidean distance of the test vector from each projected (transformed) class mean (see Eq. 5):

dist_n = \lVert W^T \chi - \mu_n^{trans} \rVert    (5)

Here \mu_n^{trans} is the mean of the transformed data for class n, n is the class index, and \chi is the test vector. Therefore, for c classes, c Euclidean distances are obtained for each test vector; the smallest of these determines the class label of the test image.
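The projection and nearest-projected-mean rule of Eqs. (1)-(5) can be sketched with scikit-learn as follows; this is a minimal illustration on hypothetical random data standing in for the face vectors, where LinearDiscriminantAnalysis computes the transformation W internally from the scatter matrices:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Hypothetical data: N = 400 training vectors of dimension n = 1764, 50 classes.
    X_train = np.random.rand(400, 1764)
    y_train = np.repeat(np.arange(50), 8)
    x_test = np.random.rand(1, 1764)

    # Fit LDA; d is bounded by c - 1 = 49 for c = 50 classes.
    lda = LinearDiscriminantAnalysis(n_components=49).fit(X_train, y_train)

    # Per-class means in the projected space (the mu_n^trans of Eq. 5).
    Y_train = lda.transform(X_train)
    means = {c: Y_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}

    # Classify by the smallest Euclidean distance to a projected class mean.
    y = lda.transform(x_test)[0]
    label = min(means, key=lambda c: np.linalg.norm(y - means[c]))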
3.2.2. K-nearest neighbor (KNN). This classifier is a simple algorithm that stores all available examples and classifies new instances based on a similarity measure; it exploits lazy learning [9]. For each test image to be predicted, we locate the k closest members (the k nearest neighbors) of the training data set, using a Euclidean distance measure to calculate how close each member of the training set is to the test image. From these k nearest neighbors, we collect the class labels and apply majority voting to determine the class label of the test image. The best choice of k depends upon the data; generally, larger values of k reduce the effect of noise on the classification but make the boundaries between classes less distinct.

The problem is that KNN is very slow at classification, for two reasons. First, in KNN the test data is compared with every instance of the training data. Second, the high dimensionality of the data may contribute to the slowness. In this paper, we address the second issue with Principal Component Analysis (PCA). The accuracy of the KNN algorithm can also be severely degraded by the presence of noisy or irrelevant features; to discard irrelevant features, we exploit PCA (see Section 3.2.4).
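A minimal sketch of this classifier with scikit-learn, again on hypothetical random stand-ins for the face vectors:

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Hypothetical flattened face vectors and user labels.
    X_train = np.random.rand(400, 1764)
    y_train = np.repeat(np.arange(50), 8)
    X_test = np.random.rand(100, 1764)

    # Lazy learner: fit() only stores the training data; at prediction time each
    # test vector is compared against every stored vector, and majority voting
    # over the k nearest neighbors (k = 1 here) determines the class label.
    knn = KNeighborsClassifier(n_neighbors=1, metric='euclidean')
    knn.fit(X_train, y_train)
    predictions = knn.predict(X_test)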

Figure 2. Illustration of support vectors and margin of a linear SVM.
3.2.3. Support vector machine. SVM can perform either linear or non-linear classification. The linear classifier proposed by Vladimir Vapnik creates a hyperplane that separates the data into two classes with the maximum margin [3, 10, 12, 21]. Given positive and negative training examples, a maximum-margin hyperplane is identified that splits the training examples such that the distance between the hyperplane and the closest examples is maximized. The non-linear SVM is implemented by applying the kernel trick to maximum-margin hyperplanes: the feature space is transformed into a higher-dimensional space, where the maximum-margin hyperplane is found. This hyperplane may be non-linear in the original feature space. A linear SVM is illustrated in Figure 2. The circles are negative instances and the squares are positive instances. A hyperplane (the bold line) separates the positive instances from the negative ones. All instances are at least a minimal distance (the margin) from the hyperplane, and the points that lie exactly at that distance are called the support vectors. As mentioned above, SVM finds the hyperplane with the maximum margin among all hyperplanes that can separate the instances.
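A minimal multi-class SVM sketch with scikit-learn (hypothetical data; SVC extends the two-class maximum-margin formulation to many classes with a one-vs-one scheme, and kernel='poly' or kernel='rbf' would apply the kernel trick):

    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical training data (flattened face vectors, user labels).
    X_train = np.random.rand(400, 1764)
    y_train = np.repeat(np.arange(50), 8)

    # Linear maximum-margin classifier.
    svm = SVC(kernel='linear', C=1.0)
    svm.fit(X_train, y_train)

    # The support vectors are the training points lying on the margin.
    print(svm.support_vectors_.shape)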

3.2.4. Feature Reduction: Principal Component Analysis. Here, we deal with high-dimensional data; for example, a person's face vector may be represented by 1764 dimensions. We note that LDA does not work well for such high-dimensional data and that the response time of KNN is not good either. We therefore investigate dimensionality reduction using PCA [19]. In addition, dimensionality reduction saves recognition time (i.e., supports real-time operation).

Here we construct a transformation matrix W formed from the eigenvectors of the scatter matrix S_T:

S_T = \sum_{i=1}^{N} (x_i - \mu)(x_i - \mu)^T    (6)

Here, N is the total number of training samples, \mu is the total mean over all the training images, and x_i is the i-th training sample. We can apply PCA in two ways:

Global PCA (GPCA): First, we apply PCA to the entire training dataset regardless of class; this is known as global PCA (GPCA), shown in Figure 3, and Equation 6 is used for it. We adopt this strategy with KNN: using PCA we reduce the dimension of the training dataset and then apply KNN in the projected space to speed up retrieval.
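A minimal GPCA+KNN sketch (hypothetical data; one PCA is fitted on the whole training set, ignoring class labels, and KNN then operates in the projected space):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    # Hypothetical training/test split of flattened 42x42 face images.
    X_train, y_train = np.random.rand(400, 1764), np.repeat(np.arange(50), 8)
    X_test = np.random.rand(100, 1764)

    # Global PCA to 50 dimensions, then KNN (K = 1) in the projected space.
    gpca_knn = make_pipeline(PCA(n_components=50),
                             KNeighborsClassifier(n_neighbors=1))
    gpca_knn.fit(X_train, y_train)
    predictions = gpca_knn.predict(X_test)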

Figure 3. Applying global PCA on the entire training set

Local PCA (LPCA): Next, we apply PCA to each individual class in the case of LDA. Hence, if we have 10 classes, we apply PCA separately for each class and end up with 10 transformation matrices. This latter reduction technique is known as local PCA, shown in Figure 4. After projection by PCA, we apply LDA. Therefore, for local PCA, we construct a transformation matrix W_i formed from the eigenvectors of the scatter matrix S_T^i for each class i:

S_T^i = \sum_{k=1}^{N_i} (x_k - \mu_i)(x_k - \mu_i)^T,  x_k \in X_i    (7)

Here, N_i is the number of training samples in class i, c is the number of distinct classes, \mu_i is the mean vector of the training data that belongs to class i, and x_k is a training sample that belongs to class i.

Figure 4. Applying local PCA on each class

Each such transformation matrix consists of the eigenvectors that correspond to the d largest eigenvalues. After applying the transformation, the n-dimensional feature vector is reduced to a d-dimensional vector.
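A minimal sketch of constructing the per-class transformations of Eq. (7) (hypothetical data; each class's PCA is fitted only on that class's N_i samples, so d is bounded by N_i - 1 after mean-centering):

    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical training set: 50 classes with 8 flattened face vectors each.
    X_train = np.random.rand(400, 1764)
    y_train = np.repeat(np.arange(50), 8)

    # Local PCA: one transformation matrix W_i per class i.
    local_pca = {}
    for c in np.unique(y_train):
        pca = PCA(n_components=7)      # d <= N_i - 1 = 7 for N_i = 8 samples
        pca.fit(X_train[y_train == c])
        local_pca[c] = pca             # pca.components_ holds the eigenvectors of S_T^i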
4. Results

We use the face recognition datasets of CMU (http://www.cs.cmu.edu/afs/cs.cmu.edu/user/avrim/www/ML94/face_homework.html) and Michigan State University (www.cse.msu.edu/~cse891/Sect601).

Here we used two datasets; due to space limitations, we report results for a single dataset, and similar results were obtained for the other. The database for this set contains 50 users, with 10 face images per user. For each user, 8 face images are used for training and 2 for testing. Hence, in total, 400 images are used for training and 100 for testing. Each image is a 42×42 (= 1764-pixel) gray-scale image.
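A minimal sketch of this 8/2 per-user split (hypothetical arrays standing in for the real image data, assumed ordered as 10 consecutive images per user):

    import numpy as np

    # Hypothetical data: 500 flattened 42x42 images, 10 per user for 50 users.
    images = np.random.rand(500, 1764)
    labels = np.repeat(np.arange(50), 10)

    # For each user, the first 8 images go to training and the last 2 to testing,
    # giving 400 training and 100 test images.
    train_idx = [i for i in range(500) if i % 10 < 8]
    test_idx = [i for i in range(500) if i % 10 >= 8]
    X_train, y_train = images[train_idx], labels[train_idx]
    X_test, y_test = images[test_idx], labels[test_idx]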

For classification, we apply KNN, GPCA+KNN, LDA, LPCA+LDA and SVM. In this study, we report our results in terms of accuracy, meaning the percentage of correctly classified face images. In Figure 5, the X-axis represents the projected dimension and the Y-axis the accuracy of the various approaches on the mentioned dataset. In Figure 6, the X-axis represents the projected dimension and the Y-axis the response time. In Figure 7, the X-axis represents the K-value and the Y-axis the accuracy.

We performed PCA on the images to extract the characteristic features of each image, applying class-dependent PCA (LPCA) and class-independent PCA (GPCA) to the training set. With regard to LPCA, we applied PCA separately for each class in the training set; therefore, for each particular class we derived its own transformation matrix rather than a global one, and with 50 classes in our training set we obtained 50 such transformation matrices. With regard to GPCA, we ended up with a single transformation matrix. In our experiments, we varied the PCA dimensionality from 2 to 1764. For KNN we used GPCA+KNN, while for LDA we used LPCA+LDA; for local PCA (LPCA), the new projected dimensions represent the important characteristics of the images for a particular class. LDA alone can suffer because it uses all the features that appear in the images for classification, so we used an efficient dimension reduction technique, namely PCA, to reduce the dimensions of the images and extract only the important features, on which LDA works well. Furthermore, the discarded dimensions are not significant characteristics of the images, so we can regard them as noise.

In Figure 5, we show that when we vary the dimension from 2 to 1764, GPCA+KNN (K=1) gives the best performance, 96%, at a newly projected dimension of 50 for the test set. On the other hand, when the dimension is 2, the test set has the lowest accuracy (15%). Furthermore, when the dimensionality is very low, the misclassification rate is high, but at a certain point (50) accuracy achieves its highest value and afterwards slopes down or becomes flat; in our case, after 50 dimensions it becomes flat (92%). GPCA+KNN outperforms LPCA+LDA at a given dimension; for example, GPCA+KNN achieves 96% accuracy compared to 56% for LPCA+LDA at dimension 50. Furthermore, LPCA+LDA follows a similar accuracy pattern to GPCA+KNN as the dimension increases; at dimension 700, LPCA+LDA achieves 94% accuracy. In addition, we implemented GPCA+LDA but observed very low accuracy (<40%); therefore, we do not report this result in this paper. Without PCA, LDA alone achieves 78% accuracy, and KNN alone achieves 92% accuracy. Therefore, we can say that KNN with PCA is better than LDA with PCA.

Figure 5. Accuracy for test datasets using GPCA+KNN and LPCA+LDA for different dimension values

Although the accuracy of KNN is good, the problem is that KNN is very slow at classification, for the two reasons noted earlier: each test sample is compared with every instance of the training data, and the high dimensionality of the data contributes to the slowness. In this paper, we address the second issue with PCA, and here we demonstrate that reducing the dimensionality using PCA improves the response time. With PCA, we varied the projected dimension from 2 to 1764. Initially, for low projected dimensions (say 2 or 6), accuracy is low but the response time is small. At projected dimension 50, accuracy reaches its maximum value (96%) with a low response time (0.203 sec). After that, accuracy becomes flat while response time increases sharply (for dimension 1764, the response time is 4.2 sec). Figure 6 shows that if we reduce the dimension from 1764 to 50 for the test set, GPCA+KNN reduces the response time from 4.2 sec to 0.203 sec. Recall that with projected dimension 50 we achieved maximum accuracy; hence, with the reduced projected dimension, we save about 95% of the running time. This demonstrates the superiority of GPCA+KNN over KNN alone.

Figure 6. Response time for test datasets using GPCA+KNN for different dimensions

So far, in the case of KNN, we have always set K=1. In Figure 7, we study the impact of K on accuracy; here, we fixed the dimensionality at 50 for GPCA+KNN. We observe that accuracy gradually goes down as K increases. For example, for K=1, KNN and GPCA+KNN achieve 92% and 96% accuracy, respectively, whereas for K=10 they achieve 76% and 83%, respectively. Furthermore, for a fixed K, GPCA+KNN outperforms KNN in terms of accuracy. Finally, Figure 7 demonstrates that K=1 is the best choice; hence, in this paper, we have used K=1 when reporting the various results.
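The K sweep behind Figure 7 can be sketched as follows (hypothetical random data in place of the face vectors; GPCA is fixed at 50 dimensions as in our experiments):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    # Hypothetical split (placeholders for the real face data).
    X_train, y_train = np.random.rand(400, 1764), np.repeat(np.arange(50), 8)
    X_test, y_test = np.random.rand(100, 1764), np.repeat(np.arange(50), 2)

    # Measure accuracy of plain KNN and GPCA(50)+KNN as K grows.
    for k in range(1, 11):
        for name, model in [
                ('KNN', KNeighborsClassifier(n_neighbors=k)),
                ('GPCA+KNN', make_pipeline(PCA(n_components=50),
                                           KNeighborsClassifier(n_neighbors=k)))]:
            model.fit(X_train, y_train)
            print(name, 'K =', k, 'accuracy =', model.score(X_test, y_test))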

In Figure 8, we show the accuracy of different classifiers, namely, SVM [22] with a linear kernel, SVM with a polynomial kernel, SVM with an RBF kernel, LDA alone, LDA with PCA (LPCA+LDA) and KNN with PCA (GPCA+KNN). We observe that, for the test set, SVM with the linear kernel, SVM with the polynomial kernel, and GPCA+KNN performed almost equally well, whereas SVM with the RBF kernel and LDA alone did not do well in terms of accuracy. For example, we achieved 97.93%, 96.9%, and 48.9% accuracy with the linear, polynomial, and RBF kernels, respectively (shown in Figure 8). On the other hand, we observed 78%, 94%, and 96% accuracy for LDA, LPCA+LDA, and GPCA+KNN, respectively.

Figure 7. Accuracy for the test dataset using KNN alone and GPCA(50)+KNN for different K values

[Figure 8 is a bar chart of test-set accuracy for the different classifiers: SVM+Linear, SVM+Polynomial, SVM+RBF, LDA, LPCA(700)+LDA, and GPCA(50)+KNN.]

Figure 8. Accuracy for test data sets using different classifiers
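A comparison of this kind can be sketched as follows (hypothetical data; LPCA+LDA is omitted since the per-class projection does not map onto a standard scikit-learn pipeline):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    # Hypothetical split (placeholders for the real face data).
    X_train, y_train = np.random.rand(400, 1764), np.repeat(np.arange(50), 8)
    X_test, y_test = np.random.rand(100, 1764), np.repeat(np.arange(50), 2)

    classifiers = {
        'SVM+Linear': SVC(kernel='linear'),
        'SVM+Polynomial': SVC(kernel='poly'),
        'SVM+RBF': SVC(kernel='rbf'),
        'LDA': LinearDiscriminantAnalysis(),
        'GPCA(50)+KNN': make_pipeline(PCA(n_components=50),
                                      KNeighborsClassifier(n_neighbors=1)),
    }
    for name, clf in classifiers.items():
        clf.fit(X_train, y_train)
        print(name, 'accuracy =', clf.score(X_test, y_test))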

In conclusion, in this study we can say that for face recognition, SVM and KNN are effective classification techniques. Furthermore, to speed up retrieval for KNN, we advocate the use of PCA.
5. Conclusion and Future Work

In this paper, we propose a near real-time face recognition system for the home application domain. To this end, we first apply various classification techniques, namely SVM, LDA, and KNN. For the KNN case, we advocate a feature reduction technique to speed up retrieval. Finally, we show that KNN with PCA is as effective as KNN and SVM.

We would like to extend our research in the following directions. First, we would like to implement a face detection technique to facilitate real-time feature extraction. Second, we would like to extend this work into a full-fledged prototype. We would also like to use boosting and bagging.

The research reported in this paper is part of the data and applications security research at the University of Texas at Dallas [29], [30]. We are conducting research in three major areas: assured information sharing, geospatial data management, and surveillance/biometrics. Automatic face recognition falls under the surveillance and biometrics area. Our goal is to develop technologies for security applications. At the same time, we are aware of privacy concerns and are conducting research in privacy-preserving data mining as well as privacy-preserving surveillance [31], [32]. In addition, we are investigating applications of RFID technologies for security and supply chain management, as well as their privacy implications.

We believe that biometrics in general, and automatic face recognition in particular, will have many applications in surveillance and privacy-preserving surveillance.

6. References

[1] HomeNet2Run project website: http://www.extra.research.philips.com/euprojects/hn2r/

[2] J. Yang, W. Lu and A. Waibel, "Skin-color modeling and adaptation," Proc. ACCV, pp. 687–694, 1998.

[3] Y. Ma and X. Ding, "Face detection based on hierarchical Support Vector Machines," Proc. ICPR, pp. 222–225, 2002.

[4] F. Zuo and P. H. N. de With, "Fast human face detection using successive face detectors with incremental detection capability," Proc. SPIE, 5022, 2003.

[5] H. Rowley, S. Baluja and T. Kanade, "Neural network-based face detection," IEEE Trans. PAMI, 20(1), pp. 23–28, 1998.

[6] F. Zuo and P. H. N. de With, "Fast facial feature extraction using a deformable shape model with Haar-wavelet based local texture attributes," Proc. ICIP, 2004.

[7] T. Cootes, "An introduction to active shape models," in Image Processing and Analysis, pp. 223–248, 2000.

[8] F. Zuo and P. H. N. de With, "Two-stage face recognition incorporating individual-class discriminant criteria," Proc. WIC 2004, pp. 137–144, 2004.

[9] T. Mitchell, Machine Learning, McGraw Hill, 1997.

[10] V. N. Vapnik, The Nature of Statistical Learning Theory, Springer, 1995.

[11] G. Shafer, A Mathematical Theory of Evidence, Princeton University Press, 1976.

[12] J. Platt, "Probabilistic outputs for SVMs and comparisons to regularized likelihood methods," in Advances in Large Margin Classifiers, MIT Press, 1999.

[13] R. A. Fisher, "The use of multiple measurements in taxonomic problems," Annals of Eugenics, 7:179–188, 1936.

[14] R. L. Hsu, M. Abdel-Mottaleb and A. K. Jain, "Face detection in color images," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 5, May 2002.

[15] Y. H. Chan and S. A. R. Abu-Bakar, "Face detection system based on feature-based chrominance color information," Proc. International Conference on Computer Graphics, Imaging and Visualization (CGIV'04), 2004.

[16] Y. Gao and M. K. H. Leung, "Face recognition using line edge map," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 6, June 2002.

[17] F. Zuo and P. H. N. de With, "Real-time embedded face recognition for smart home," IEEE Transactions on Consumer Electronics, Vol. 51, No. 1, February 2005.

[18] C. Liu and H. Wechsler, "Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition," IEEE Transactions on Image Processing, Vol. 11, No. 4, April 2002.

[19] O. Veksler, Pattern Recognition lecture notes: http://www.csd.uwo.ca/faculty/olga/Courses/CS434a_541a/

[20] S. Balakrishnama and A. Ganapathiraju, Linear Discriminant Analysis Tutorial: http://lcv.stat.fsu.edu/research/geometrical_representations_f_faces/PAPERS/lda_theory.pdf

[21] B. E. Boser, I. M. Guyon and V. N. Vapnik, "A training algorithm for optimal margin classifiers," in D. Haussler (ed.), 5th Annual ACM Workshop on COLT, pp. 144–152, Pittsburgh, PA, ACM Press, 1992.

[22] LIBSVM: http://www.csie.ntu.edu.tw/~cjlin/libsvm/

[23] M. J. Jones and P. Viola, "Fast multi-view face detection," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2003.

[24] S. Zhou and R. Chellappa, "Face recognition from still images and videos," in Handbook of Image and Video Processing, 2nd Edition, A. Bovik (ed.), Academic Press, 2005.

[25] S. Zhou and R. Chellappa, "Intra-personal kernel space for face recognition," IEEE International Conference on Automatic Face and Gesture Recognition (FG), pp. 235–240, May 2004.

[26] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, 3:72–86, 1991.

[27] P. N. Belhumeur, J. P. Hespanha and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: recognition using class specific linear projection," IEEE Trans. PAMI, 19, 1997.

[28] K. Etemad and R. Chellappa, "Discriminant analysis for recognition of human face images," Journal of the Optical Society of America, pp. 1724–1733, 1997.

[29] B. Thuraisingham, Web Data Mining Technologies and Their Applications in Counter-terrorism, CRC Press, 2003.

[30] B. Thuraisingham, Database and Applications Security: Integrating Data Management and Information Security, CRC Press, 2005.

[31] B. Thuraisingham and P. Pallabi, Data Mining for Biometrics Applications, UTD Technical Report, 2006.

[32] B. Thuraisingham, "Privacy preserving data mining: developments and directions," Journal of Database Management, Special Issue in Data Management for National