Volume 5, Issue 2, February – 2020 International Journal of Innovative Science and Research Technology
ISSN No:-2456-2165
Real Time Face Parsing Using
Enhanced KNN and DLIB
[1] Nikhil Chug [2] Samarth Mehrotra [3] Sameya Alam [4] Anand Gaurav [5] Girish N.
[1][2][3][4] BE Students, Department of Information Science and Engineering
[5] Asst. Professor, Department of Information Science and Engineering
[1][2][3][4][5] Dayananda Sagar Academy of Technology and Management, Bangalore, Karnataka, India
[1][2][3][4] Email Addresses
Abstract:- The conventional method of finding a missing person involves lodging an FIR at the nearest police station, after which the police circulate the person's photograph to all the nearby police stations. This process is very time consuming. The idea is to automate it using facial recognition. The proposed algorithm is implemented using enhanced KNN, dlib and OpenCV. The presented approach uses dlib to generate a total of 68 exclusive facial key features, yielding 136 floating-point values (ten-decimal precision). We use the Enhanced KNN algorithm for matching faces; it forms k groups from the cases that have been registered. The traditional KNN strategy has several deficiencies, and we propose to improve its accuracy with the following techniques: attribute weights and the best neighbourhood size are taken into account to compute more accurate distance functions and obtain precise results, and instead of the simple voting strategy we propose to use probability-based class estimation.

Keywords:- Facial Recognition, Machine Learning, Enhanced KNN, DLIB, PyQt5, DCR, OpenCV.

I. INTRODUCTION

In India, one of the major issues is the increasing number of missing children, which has crossed almost 170 per day, of whom about half remain untraced. Not only children but also elderly people suffering from Alzheimer's disease go missing. According to a survey by The Hindu, less than 50% of the missing are traced.

Facial Recognition (FR) has been a difficult field: it is hard to identify a face image under changing illumination, Neto, J. G. D. S. et al. [3].

The conventional approach to recognizing faces involves training the model on large data sets. But in real-life scenarios, more often than not, only limited or even a single sample per subject is available. Since ample data is likewise hard to find for our problem statement, we try to train our model using a single sample per person (SSPP), Jianquan Gu et al. [1].

Facial features such as the eyes, nose, lips and face contour are considered the action units of the face and are extracted using the open-source library dlib, K. Bharat S. Reddy et al. [18]. Dlib is used to generate a total of 68 exclusive facial key features; 136 floating-point values (ten-decimal precision) are generated in total. These generated points are then converted to strings with the help of a simple encoding.

Sometimes the test image is of low resolution while the corresponding image in our data set is of high resolution, and comparing a low-resolution test image to a high-resolution image degrades matching performance. To handle this, the Deep Coupled ResNet (DCR) model is used, Lu et al. [2], which consists of two branch networks and one trunk network. Discriminative features are extracted using this model. The two branch networks, trained on HR images and the targeted LR images, act as resolution-specific coupled mappings that transform LR and HR features into a space where their difference is minimized. Model parameters are optimized using the proposed coupled-mapping loss function. The model considers both the discriminability and the similarity of HR and LR features, and different pairs of small branch networks are trained to cope with different image resolutions, Lu et al. [2].

II. METHODOLOGY

Face Recognition
Face recognition is the process of identifying a person's face in an image or video. It involves several steps that must be followed. Figure 1 shows the block diagram of the system, which includes face detection and feature extraction.
IJISRT20FEB506 www.ijisrt.com 946
Fig 1:- Face Recognition block diagram
The algorithm is designed to extract 68 unique facial key features, generating 136 floating-point values with ten-decimal precision.
These facial key points are used to uniquely identify facial features, for example the eyes, nose and lips, which are considered the action units of the face.
Fig 2:- Feature Extraction
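The text mentions that the 136 extracted landmark values are "converted to strings with the help of simple encoding" but does not specify the scheme. The sketch below is a hypothetical pure-Python illustration of such a serialization at ten-decimal precision; the function names are our own, not the paper's.

```python
# Hypothetical sketch of serializing a 136-value landmark vector to a
# string with ten-decimal precision, and parsing it back. This is an
# illustrative assumption; the paper does not detail its encoding.

def encode_landmarks(points):
    """Join the floating-point coordinates into one comma-separated string."""
    return ",".join(f"{p:.10f}" for p in points)

def decode_landmarks(text):
    """Parse the encoded string back into a list of floats."""
    return [float(tok) for tok in text.split(",")]

vec = [12.5, 48.0, 13.25, 52.5]  # stand-in for a full 136-value vector
assert decode_landmarks(encode_landmarks(vec)) == vec
```

Any round-trippable textual format would serve equally well here; the point is only that a fixed-precision string gives the feature vector a compact, comparable form for storage.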
OpenCV
Open Source Computer Vision, abbreviated OpenCV, is a cross-platform library of programming functions providing a common infrastructure for computer-vision applications. It mainly targets real-time computer vision and aims to accelerate the use of machine perception in industry. It includes more than 2500 efficient, state-of-the-art computer-vision algorithms, which can be used to recognize objects, track camera movements, follow eye movements, extract 3D models of objects, detect and identify faces, remove red eyes from images, track moving objects, find similar images in an image database, and so on. OpenCV has a user community of about 50 thousand people and an estimated number of downloads exceeding 14 million.

We use OpenCV to detect faces, recognize them, and find any images in our database similar to the test image.

Enhanced KNN
The KNN algorithm is mainly used for classification problems, though it also covers regression problems. To measure the similarity or difference between a test instance and a training instance, KNN uses the standard Euclidean distance d(xi, xj) over the n attributes a_r(·), defined in equation 1.1:

d(xi, xj) = √( Σ_(r=1..n) (a_r(xi) − a_r(xj))² )
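Equation 1.1 can be sketched in a few lines; the attribute values a_r(x) are simply the entries of a feature vector, e.g. the landmark coordinates, and the helper name below is our own.

```python
# Standard Euclidean distance of equation 1.1:
# d(xi, xj) = sqrt(sum over attributes r of (a_r(xi) - a_r(xj))^2).
import math

def euclidean_distance(xi, xj):
    """Distance between two equal-length feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(xi, xj)))

assert euclidean_distance([0.0, 0.0], [3.0, 4.0]) == 5.0  # 3-4-5 triangle
```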
KNN assigns the most common class among the nearest neighbours to the test instance, as defined in equation 1.2:

c(x) = arg max_(c∈C) Σ_(i=1..k) δ(c, c(yi))

Here y1, y2, …, yk are the k nearest neighbours of the test instance, k is the number of neighbours, C is the finite set of class labels, and δ(c, c(yi)) = 1 if c = c(yi) and δ(c, c(yi)) = 0 otherwise, Shweta Taneja et al. [10].

The enhanced KNN algorithm consists of the following steps:

Step 1: The entropy of each attribute is computed to obtain the information gain of the attributes. Weights are assigned to each attribute based on this measure.

Step 2: The next step is to determine the value of k for the training set.

Step 3: We divide the training set into several clusters.

Step 4: To obtain the centre of each cluster, we compute the mean of every cluster.

Step 5: Using the Euclidean distance formula, we determine the cluster closest to the test sample, in which the K nearest neighbours will be sought.

Step 6: The distance between each sample in that cluster and the test sample is calculated using the weighted Euclidean distance formula, and the K nearest neighbours are found.

Step 7: The class label with the maximum probability among the chosen k neighbours is picked as the class label of the test instance.

The proposed algorithm greatly reduces execution time and improves accuracy, since it is a blend of classification and clustering techniques and lowers the time complexity. It is designed to overcome the inefficiency of the traditional K-nearest-neighbour algorithm and is divided into two major parts:

Data pre-processing: The k values for the test samples are determined, the training data set is partitioned into clusters, and weights are assigned to the clusters. Future data is classified based on the results of this model. This part does not affect the efficiency of the algorithm, as it runs only once in the system.

Classification: The actual classification of the test data is done in this part, which runs every time a classification is performed, X. Xiao et al. [25].

We designed flow charts to explain the two parts of our algorithm; they are shown in Figures 3 and 4 respectively.
Fig 3:- Data Pre-Processing
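The pre-processing steps above can be sketched as follows. This is a simplified illustration under our own assumptions (discrete attribute values for the information-gain computation, clusters supplied as plain lists of feature vectors); none of the names come from the paper.

```python
# Simplified sketch of Steps 1 and 3-6: entropy-based attribute weighting,
# cluster means, nearest-cluster selection, and weighted Euclidean distance.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels (Step 1)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(values, labels):
    """Entropy reduction from splitting by one discrete attribute (Step 1)."""
    n = len(labels)
    cond = sum(
        len([l for x, l in zip(values, labels) if x == v]) / n
        * entropy([l for x, l in zip(values, labels) if x == v])
        for v in set(values)
    )
    return entropy(labels) - cond

def cluster_mean(cluster):
    """Attribute-wise mean of a cluster's members (Step 4)."""
    n = len(cluster)
    return [sum(x[r] for x in cluster) / n for r in range(len(cluster[0]))]

def weighted_distance(xi, xj, weights):
    """Weighted Euclidean distance of Step 6."""
    return math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, xi, xj)))

def nearest_cluster(test, clusters, weights):
    """Index of the cluster whose mean is closest to the test sample (Step 5)."""
    return min(range(len(clusters)),
               key=lambda i: weighted_distance(test, cluster_mean(clusters[i]),
                                               weights))
```

In this sketch the information-gain values would serve directly as the attribute weights; the paper leaves the exact mapping from gain to weight unspecified.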
Fig 4:- Classification Process
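The classification process ends with the probability-based class estimation of equation 1.2 and Step 7: each class's share of the k selected neighbours is taken as its estimated probability. A minimal sketch (function name ours):

```python
# Probability-based class estimation (equation 1.2 / Step 7): among the
# k chosen neighbours, the class with the highest vote share labels the
# test instance, and that share is its estimated probability.
from collections import Counter

def probability_vote(neighbor_labels):
    """Return (predicted class, estimated probability) for the k neighbours."""
    counts = Counter(neighbor_labels)
    label, count = counts.most_common(1)[0]
    return label, count / len(neighbor_labels)

assert probability_vote(["a", "b", "a", "a"]) == ("a", 0.75)
```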
Table 1:- Table on Literature Survey
III. CONCLUSION

In this paper, we present a face detection approach based on enhanced KNN and dlib. The experimental results show that the proposed method achieves better performance than conventional KNN strategies. In the future, we will focus on integrating the proposed method into existing surveillance systems without incurring additional costs, and on applying the trained KNN-based model in other fields, such as real-time crime monitoring and drone surveillance systems. This paper gives a general idea of the advanced machine-learning algorithms and techniques used to provide solutions for facial recognition and detection.

REFERENCES

[1]. Gu, J., Hu, H., & Li, H. (2018). Local robust sparse representation for face recognition with single sample per person. IEEE/CAA Journal of Automatica Sinica, 5(2), 547–554.
[2]. Lu, Z., Jiang, X., & Kot, A. (2018). Deep Coupled ResNet for Low-Resolution Face Recognition. IEEE Signal Processing Letters, 25(4), 526–530.
[3]. Neto, J. G. D. S., Caldeira, J. L. M., & Ferreira, D. D. (2018). Face Recognition based on Higher-Order Statistics. IEEE Latin America Transactions, 16(5), 1508–1515.
[4]. Min, W., Fan, M., Li, J., & Han, Q. (2019). Real-time face recognition based on pre-identification and multi-scale classification. IET Computer Vision, 13(2), 165–171.
[5]. Liu, Q., & Liu, C. (2015). A Novel locally linear KNN model for visual recognition. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[6]. Wei, Z., Liu, S., Sun, Y., & Ling, H. (2019). Accurate Facial Image Parsing at Real-Time Speed. IEEE Transactions on Image Processing, 28(9), 4659–4670.
[7]. Wu, W., Yin, Y., Wang, X., & Xu, D. (2019). Face Detection With Different Scales Based on Faster R-CNN. IEEE Transactions on Cybernetics, 49(11), 4017–4028.
[8]. Yang, S., Zhang, L., He, L., & Wen, Y. (2019). Sparse Low-Rank Component-Based Representation for Face Recognition With Low-Quality Images. IEEE Transactions on Information Forensics and Security, 14(1), 251–261.
[9]. C. Ding and D. Tao, "Robust face recognition via multimodal deep face representation," IEEE Trans. Multimedia, vol. 17, no. 11, pp. 2049–2058, 2015.
[10]. Taneja, S., Gupta, C., Goyal, K., & Gureja, D. (2014). An Enhanced K-Nearest Neighbor Algorithm Using Information Gain and Clustering. 2014 Fourth International Conference on Advanced Computing & Communication Technologies.
[11]. M. Yang, L. Van Gool, and L. Zhang, "Sparse variation dictionary learning for face recognition with a single training sample per person," in Proc. 2013 IEEE Int. Conf. Computer Vision, Sydney, Australia, pp. 689–696, 2013.
[12]. J. W. Lu, Y. P. Tan, and G. Wang, "Discriminative multimanifold analysis for face recognition from a single training sample per person," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 1, pp. 39–51, Jan. 2013.
[13]. M. Yang, L. Zhang, J. Yang, and D. Zhang, "Regularized robust coding for face recognition," IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 1753–1766, 2013.
[14]. Q. Liu and C. Liu, "A new locally linear KNN method with an improved marginal Fisher analysis for image classification," in Proc. IEEE IJCB, Sep./Oct. 2014, pp. 1–6.
[15]. M. N. Kan, S. G. Shan, Y. Su, D. Xu, and X. L. Chen, "Adaptive discriminant learning for face recognition," Pattern Recognit., vol. 46, no. 9, pp. 2497–2509, Sep. 2013.
[16]. W. W. Zou and P. C. Yuen, "Very low resolution face recognition problem," IEEE Trans. Image Processing, vol. 21, no. 1, pp. 327–340, 2012.
[17]. B. Li, H. Chang, S. Shan, and X. Chen, "Low-resolution face recognition via coupled locality preserving mappings," IEEE Signal Processing Letters, vol. 17, no. 1, pp. 20–23, 2010.
[18]. Reddy, K. B. S., Loke, O., Jani, S., & Dabre, K. (2018). Tracking People In Real Time Video Footage Using Facial Recognition. 2018 International Conference on Smart City and Emerging Technology (ICSCET).
[19]. Min, R., Xu, S., & Cui, Z. (2019). Single-Sample Face Recognition Based on Feature Expansion. IEEE Access, 7, 45219–45229.
[20]. Yang, F., Yang, W., Gao, R., & Liao, Q. (2018). Discriminative Multidimensional Scaling for Low-Resolution Face Recognition. IEEE Signal Processing Letters, 25(3), 388–392.
[21]. Zeng, D., Spreeuwers, L., Veldhuis, R., & Zhao, Q. (2019). Combined training strategy for low-resolution face recognition with limited application-specific data. IET Image Processing, 13(10), 1790–1796.
[22]. P. F. Zhu, M. Yang, L. Zhang, and I. Y. Lee, "Local generic representation for face recognition with single sample per person," in Proc. Asian Conf. on Computer Vision (ACCV 2014), Switzerland, pp. 34–50, 2014.
[23]. M. N. Kan, S. G. Shan, Y. Su, D. Xu, and X. L. Chen, "Adaptive discriminant learning for face recognition," Pattern Recognit., vol. 46, no. 9, pp. 2497–2509, Sep. 2013.
[24]. X. Jiang and J. Lai, "Sparse and dense hybrid representation via dictionary decomposition for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 5, pp. 1067–1079, 2015.
[25]. X. Xiao and H. Ding, "Enhancement of K-nearest Neighbor Algorithm Based on Weighted Entropy of Attribute Value," Proc. 5th International Conference on BioMedical Engineering and Informatics (BMEI 2012), IEEE Press, Oct. 2012, pp. 1261–1264.
[26]. López-López, E., Pardo, X. M., Regueiro, C. V., Iglesias, R., & Casado, F. E. (2019). Dataset bias exposed in face verification. IET Biometrics, 8(4), 249–258.
[27]. J. Zou, Q. Ji, and G. Nagy, "A comparative study of local matching approach for face recognition," IEEE Trans. Image Process., vol. 16, no. 10, pp. 2617–2628, Oct. 2007.
[28]. T. Ahonen, A. Hadid, and M. Pietikainen, "Face recognition with local binary patterns," ECCV, pp. 469–481, 2004.
[29]. A. M. Martinez, "Recognizing imprecisely localized, partially occluded, and expression variant faces from a single sample per class," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 6, pp. 748–763, Jun. 2002.
[30]. M. S. Bartlett, J. R. Movellan, and T. J. Sejnowski, "Face recognition by independent component analysis," IEEE Trans. Neural Netw., vol. 13, no. 6, pp. 1450–1464, Jun. 2002.
[31]. B. Boom, G. Beumer, L. J. Spreeuwers, and R. N. Veldhuis, "The effect of image resolution on the performance of a face recognition system," Proc. Int. Conf. Control Autom. Robot. Vis., pp. 1–6, 2006.
[32]. Z. Wang, Z. Miao, Q. Wu, Y. Wan, and Z. Tang, "Low-resolution face recognition: A review," Vis. Comput., vol. 30, pp. 359–386, Aug. 2013.
[33]. S. P. Mudunuri and S. Biswas, "Low resolution face recognition across variations in pose and illumination," IEEE Trans. Pattern Anal. Mach. Intell., vol. 38, no. 5, pp. 1034–1040, May 2016.
[34]. C. Zhang and Z. Zhang, "Improving multiview face detection with multi-task deep convolutional neural networks," in Proc. IEEE Winter Conf. Appl. Comput. Vis., 2014, pp. 1036–1041.
[35]. L. Jiang, Z. Cai, D. Wang, and S. Jiang, "Survey of Improving K-Nearest-Neighbor for Classification," Proc. 4th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD 2007), vol. 1, Aug. 2007, pp. 679–683.
[36]. L. Wolf, T. Hassner, and Y. Taigman, "Effective unconstrained face recognition by combining multiple descriptors and learned background statistics," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 10, pp. 1978–1990, Oct. 2011.
[37]. J. Wu and Z.-H. Zhou, "Face recognition with one training image per person," Pattern Recognit. Lett., vol. 23, no. 14, pp. 1711–1719, Dec. 2002.
[38]. F. Schroff, D. Kalenichenko, and J. Philbin, "FaceNet: A unified embedding for face recognition and clustering," Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 815–823, Jun. 2015.
[39]. A. M. Martinez, "Recognizing imprecisely localized, partially occluded, and expression variant faces from a single sample per class," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 6, pp. 748–763, Jun. 2002.
[40]. J. Pan, X.-S. Wang, and Y.-H. Cheng, "Single-sample face recognition based on LPP feature transfer," IEEE Access, vol. 4, pp. 2873–2884, 2016.
[41]. Liu, J., Jing, X., Lian, Z., & Sun, S. (2015). Local Gabor Dominant Direction Pattern for Face Recognition. Chinese Journal of Electronics, 24(2), 245–250.
[42]. Chang, K.-Y., & Chen, C.-S. (2014). Facial Expression Recognition via Discriminative Dictionary Learning. 2014 IEEE International Conference on Internet of Things (iThings), and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom).
[43]. X. Jiang and J. Lai, "Sparse and dense hybrid representation via dictionary decomposition for face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 5, pp. 1067–1079, 2015.
[44]. Ding, C., & Tao, D. (2015). Robust Face Recognition via Multimodal Deep Face Representation. IEEE Transactions on Multimedia, 17(11), 2049–2058.
[45]. S. Baker and T. Kanade, "Limits on super-resolution and how to break them," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 9, pp. 1167–1183, September 2002.
[46]. H. X. Li, Z. Lin, X. H. Shen, J. Brandt, and G. Hua, "A convolutional neural network cascade for face detection," Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 5325–5334, Jun. 2015.
[47]. S. Hayashi and O. Hasegawa, "Robust face detection for low-resolution images," J. Adv. Comput. Intell. Intell. Inf., vol. 10, no. 1, pp. 93–101, 2016.
[48]. K. Zhang, Z. Zhang, Z. Li, and Y. Qiao, "Joint face detection and alignment using multitask cascaded convolutional networks," IEEE Signal Process. Lett., vol. 23, no. 10, pp. 1499–1503, Apr. 2016.
[49]. X. He, S. Yan, Y. Hu, P. Niyogi, and H.-J. Zhang, "Face recognition using Laplacianfaces," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 3, pp. 328–340, 2005.
[50]. L. Wolf, T. Hassner, and Y. Taigman, "Effective unconstrained face recognition by combining multiple descriptors and learned background statistics," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 10, pp. 1978–1990, Oct. 2011.