Face Recognition Using Local Derivative Tetra Pattern

ABSTRACT
This paper proposes a new face recognition algorithm called the local derivative tetra pattern
(LDTrP). The new technique is designed to improve the face recognition rate under real-time
challenges. The local derivative pattern (LDP) is a directional feature extraction method that
encodes directional pattern features based on local derivative variations. The nth-order LDP
encodes the (n-1)th-order local derivative direction variations, and the LDP templates extract
high-order local information by encoding various distinctive spatial relationships contained in
a given local region. The local tetra pattern (LTrP) encodes the relationship between the
reference pixel and its neighbours using the first-order derivatives in the vertical and
horizontal directions, and extracts values based on the distribution of edges, which are coded
using four directions. LDTrP combines the higher-order directional features of both LDP and
LTrP. Experimental results on the ORL and JAFFE databases show that the performance of
LDTrP is consistently better than LBP, LTP and LDP for face identification under various
conditions. The performance of the proposed method is measured in terms of recognition
rate. The proposed Local Derivative Tetra Pattern (LDTrP)
method enhances face recognition performance by effectively combining the advantages of
both local derivative pattern (LDP) and local tetra pattern (LTrP) techniques. While LDP
extracts first-order local derivative information to capture edge variations, LTrP expands on
this by incorporating the relationship between the reference pixel and its neighbors through
higher-order derivatives in multiple directions. This fusion allows the LDTrP to exploit finer
spatial relationships and directional patterns, making it more robust to variations in lighting,
facial expressions, and other real-world challenges that often degrade the performance of
traditional face recognition methods. To evaluate the effectiveness of LDTrP, extensive
experiments were conducted on well-known face databases such as ORL and JAFFE. The
results consistently demonstrate the superiority of LDTrP over conventional methods such as
Local Binary Patterns (LBP), Local Ternary Patterns (LTP), and Local Derivative Patterns
(LDP) in terms of recognition rate under diverse conditions. The proposed method offers
improved discriminative power by encoding rich directional information, which contributes
to higher robustness and accuracy in face identification tasks. These findings suggest that
LDTrP is a promising approach for real-time face recognition applications, particularly in
environments with varying lighting, poses, and expressions.

INTRODUCTION

In recent years, face recognition has become an area of intensive research. Given the current
world security situation, governments as well as private organizations require reliable
methods to accurately identify individuals, without overly infringing on rights to privacy or
requiring significant cooperation on the part of the individual being recognized. Face
recognition offers an acceptable solution to this problem. A number of techniques have been
applied to face recognition, and they can be divided into two categories: 1) geometric feature
matching and 2) template matching. Geometric feature matching [1][5][6] involves
segmenting the different features of the face (eyes, nose, mouth, etc.) and extracting
descriptive information about them, such as their widths and heights. Values of these
measures can then be stored for each person and compared with those of known individuals.
Template matching is a non-segmentation approach to face recognition. Each face is treated
as a two-dimensional array of intensity values, which is then compared with the intensity
arrays of other faces. The earliest methods treated faces as points in a very high-dimensional
space and computed the Euclidean distance between them. Dimensionality reduction
techniques such as Principal Component Analysis (PCA) [2][3] have since been successfully
applied to the problem, reducing the complexity of the recognition process without
sacrificing accuracy.
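As a rough illustration of the template-matching pipeline described above, the following sketch projects flattened face images onto a PCA subspace and matches a query by nearest Euclidean distance. It is a minimal illustration, not the method proposed in this paper; the array shapes, the number of components and the nearest-neighbour matching rule are assumptions.

import numpy as np

def train_eigenfaces(faces, n_components=20):
    # faces: (num_images, height*width) matrix of flattened training faces
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # PCA via SVD; the rows of vt are the principal directions ("eigenfaces")
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]
    projections = centered @ eigenfaces.T          # low-dimensional training features
    return mean_face, eigenfaces, projections

def match(query, mean_face, eigenfaces, projections):
    # Project the query face and return the index of the nearest training face.
    q = (query - mean_face) @ eigenfaces.T
    distances = np.linalg.norm(projections - q, axis=1)
    return int(np.argmin(distances))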
Nowadays, different patterns are used for feature extraction. The Local Binary Pattern (LBP)
is designed to encode the relationship between the referenced pixel and its surrounding pixels
[17]. LBP has been applied successfully to facial expression analysis, and its performance is
much better than Eigenfaces. LBP produces micropatterns: the center pixel is subtracted from
each of the eight neighbouring pixels, 0 is assigned for negative values and 1 for positive
values, and these bits are combined clockwise into an eight-bit code. Due to its simplicity and
robustness, LBP has been widely used in face recognition [13][14]. A minimal sketch of this
encoding is given below.
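The sketch below encodes a single 3x3 neighbourhood into its 8-bit LBP code. The clockwise starting position and the convention that equal values count as 1 are assumptions, since those details vary between implementations; in practice, a histogram of these codes over the image (or over image blocks) is what is actually compared during recognition.

import numpy as np

def lbp_code(patch):
    # patch: 3x3 grayscale neighbourhood (array-like); returns the 8-bit LBP code
    patch = np.asarray(patch, dtype=np.int32)
    center = patch[1, 1]
    # Eight neighbours taken clockwise, starting at the top-left corner
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbours]
    return sum(bit << i for i, bit in enumerate(bits))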
However, LBP can fail in some situations. To avoid such situations, the Local Ternary Pattern
(LTP) was introduced to capture more detailed information than LBP. LTP is an extension of
LBP that is less sensitive to noise, because small pixel differences are encoded into a separate
state. A threshold is added to the center pixel to obtain an upper bound u and subtracted from
it to obtain a lower bound l, generating a boundary [l, u]. A neighbouring pixel is assigned -1
if it is less than l, 1 if it is greater than u, and 0 if it lies between l and u [13]. To reduce
dimensionality, this ternary code is split into two binary codes, a positive LBP and a negative
LBP [14]: replacing the -1s with 0 constructs the upper (positive) bit pattern, while keeping
only the -1s as 1s constructs the lower (negative) bit pattern. Thus LTP only encodes the
texture features of an image depending on the grey-level difference between the center pixel
and its neighbours, which are coded using two directions. A short sketch of this ternary
encoding and its split is given below.
been proposed for improving face recognition accuracy. Among these, the Local Derivative
Pattern (LDP) stands out as an effective feature extraction method that captures the derivative
variations between pixels, providing more detailed information about edge transitions within
an image. LDP improves upon traditional methods by encoding not just the intensity values
but also the gradient of intensity changes, allowing for better recognition of subtle facial
features. LDP’s ability to capture edge information is particularly valuable when working
with faces under varying lighting conditions or occlusions, making it more robust than
methods that rely solely on intensity values. Despite the advantages of LDP, challenges
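The sketch below follows the usual second-order LDP formulation along the 0-degree direction: a first-order derivative is taken as the difference between a pixel and its right neighbour, and a bit is set when the derivatives at the centre and at a neighbour disagree in sign. The neighbour ordering, the treatment of zero derivatives and the wrap-around at image borders are simplifying assumptions.

import numpy as np

def ldp_second_order_0deg(img):
    # img: 2-D grayscale array; returns a map of 8-bit second-order LDP codes (0-degree direction)
    img = img.astype(np.float64)
    d = img - np.roll(img, -1, axis=1)      # first-order derivative: I(z) - I(right neighbour of z)
    codes = np.zeros(img.shape, dtype=np.uint16)
    # Eight neighbour offsets (row, column), taken clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = np.roll(np.roll(d, -dy, axis=0), -dx, axis=1)   # derivative at that neighbour
        codes |= ((d * neighbour < 0).astype(np.uint16) << bit)     # 1 when the signs differ
    return codes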
Despite the advantages of LDP, challenges remain in capturing higher-order structural
information from facial images. This issue is addressed by the Local Tetra Pattern (LTrP),
which encodes the relationship between the reference pixel and its neighbors using
first-order derivatives in the vertical and horizontal directions. By considering not only the
immediate differences in pixel values but also the local spatial variations, LTrP captures
more comprehensive texture patterns. This enhancement allows LTrP to extract richer
information from facial images, improving the model's ability to distinguish between
individuals, particularly when dealing with images that have variations in pose or expression.
A minimal sketch of the tetra-direction computation is given below.
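The sketch below computes the per-pixel tetra direction from the signs of the horizontal and vertical first-order derivatives, which is the starting point of the LTrP encoding. The choice of right/lower neighbours, the handling of zero derivatives and the wrap-around at the borders are assumptions; the full LTrP additionally compares the direction of the centre with those of its eight neighbours and adds a magnitude pattern.

import numpy as np

def tetra_directions(img):
    # img: 2-D grayscale array; returns a per-pixel direction map with values in {1, 2, 3, 4}
    img = img.astype(np.float64)
    dh = np.roll(img, -1, axis=1) - img     # horizontal first-order derivative (right neighbour - centre)
    dv = np.roll(img, -1, axis=0) - img     # vertical first-order derivative (lower neighbour - centre)
    directions = np.empty(img.shape, dtype=np.uint8)
    directions[(dh >= 0) & (dv >= 0)] = 1   # both derivatives non-negative
    directions[(dh <  0) & (dv >= 0)] = 2
    directions[(dh <  0) & (dv <  0)] = 3
    directions[(dh >= 0) & (dv <  0)] = 4
    return directions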
The proposed Local Derivative Tetra Pattern (LDTrP) combines both the LDP and LTrP approaches to further enhance face
recognition accuracy. By merging the advantages of these two techniques, LDTrP can extract
more robust and distinctive features from facial images, capturing both fine-grained texture
details and higher-order spatial relationships. This fusion of lower and higher-order derivative
information provides a more comprehensive representation of facial characteristics,
improving the model's ability to perform under various challenges such as lighting changes,
expression variations, and partial occlusions. In order to assess the effectiveness of the
LDTrP method, extensive experiments were conducted on well-established face recognition
datasets such as the ORL and JAFFE databases. The results showed that LDTrP consistently
outperformed other traditional and state-of-the-art techniques like LBP, LTP, and LDP in
terms of recognition rate and robustness. The ability of LDTrP to detect distinctive patterns
from both local derivative variations and spatial relationships contributed to its superior
performance, making it a promising solution for face recognition tasks, especially in real-time
applications where high accuracy and speed are essential. Moreover, LDTrP's robust
performance across different facial expressions and poses demonstrates its potential for
practical applications in security systems and human-computer interaction technologies. The
robustness of LDTrP is further highlighted by its ability to handle variations in lighting and
pose, which are common challenges in real-world face recognition scenarios. Traditional face
recognition methods often struggle under such conditions due to their reliance on static,
pixel-based features. In contrast, LDTrP captures more dynamic patterns by considering both
the intensity variations and their directional derivatives, making it less sensitive to lighting
changes. Additionally, by encoding relationships between neighboring pixels in multiple
directions, LDTrP is capable of maintaining accurate recognition even when the face is
rotated or the viewing angle changes, offering significant advantages in uncontrolled
environments. Another key strength of LDTrP lies in its computational efficiency. While the
method captures a wealth of information from the image by combining high-order and low-
order patterns, it does so in a way that does not drastically increase computational
complexity. The combination of LDP and LTrP allows for a compact representation of the
image, enabling faster processing times compared to more complex face recognition methods
that require higher-dimensional feature spaces. This makes LDTrP a viable solution for real-
time applications, including surveillance systems and mobile face recognition, where both
accuracy and speed are paramount. The LDTrP algorithm demonstrates promising scalability
for various face recognition tasks beyond the typical databases used in experiments. The
method's flexibility allows it to be adapted to diverse facial datasets, with different
demographic groups, lighting conditions, and facial expressions. This adaptability is crucial
for building face recognition systems that can generalize well across a wide range of
environments and users. As facial recognition technology continues to expand into new fields
such as smart home devices, autonomous vehicles, and personalized healthcare, LDTrP's
ability to maintain high recognition rates under diverse conditions positions it as a powerful
tool for next-generation biometric systems.

Literature Survey
Texture Extraction for Image Retrieval Using Local Tetra Pattern (2014): Content-Based
Image Retrieval (CBIR) is one of the prominent areas concerned with retrieving images from a
large database. There is a wide range of texture analysis techniques used for feature
extraction from an image. In this paper, we propose an image indexing and retrieval
algorithm for texture extraction using the local tetra pattern (LTrP). The local binary pattern
(LBP) and local ternary pattern (LTP) encode the texture features of an image depending on
the grey level difference between reference pixel and its neighbors. The LTrP encodes the
relationship between the reference pixel and its neighbors by using the first-order derivatives
in vertical and horizontal directions. Local tetra pattern (LTrP) extracts information based on
the distribution of edges, which are coded using four directions. To obtain the retrieval results,
we used the Corel 1000 database. The performance of the proposed method is measured in terms of
average precision and average recall. The performance analysis shows that the proposed
method improves the retrieval result as compared with standard LBP.
Recent years have seen a rapid increase in the size of digital image collections. Image
retrieval techniques are becoming a very important part of multimedia information retrieval,
and they are widely used in applications such as web-related applications, agriculture,
biomedicine, and the earth and space sciences. Basically, there are two research communities:
the first is text-based image retrieval and the other is content-based image retrieval (CBIR).
Text-based image retrieval is a low-complexity approach and is widely used in image
retrieval, but manual annotation is required to assist the text-based retrieval process, which
makes text-based retrieval less preferable for images. The feature extraction step in CBIR is a
prominent one, whose effectiveness depends upon the method adopted for extracting features
from the given images. CBIR utilizes the visual contents of an image, such as color, texture,
shape, faces, spatial layout, etc., to represent and index the image database [1]. Feature
(content) extraction is the
basis of content-based image retrieval. In this work, we propose a novel image texture feature
extraction algorithm using local tetra patterns (LTrPs) for content-based image retrieval
(CBIR).
The proposed method encodes the relationship between the referenced pixel and its
neighbours, based on the directions that are calculated using the first-order derivatives in the
vertical and horizontal directions. In the retrieval process, the query image is compared with
every database image using a distance measure to find images similar to the query. Two
major approaches, spatial and transform domain based methods, can be identified in CBIR
systems. The first approach usually uses pixel or adjacent-pixel-group features such as color,
texture, and shape. The other uses different transforms such as the Gabor transform, the
wavelet transform and Daubechies wavelet coefficients, etc. [2][3]. The Local Tetra Pattern
(LTrP) used in this approach provides a robust method for
encoding texture features by considering the spatial distribution of edges within an image.
Unlike traditional texture extraction methods such as Local Binary Pattern (LBP), which rely
on simple binary encoding, LTrP uses first-order derivatives to capture more detailed
directional information. This makes it more sensitive to changes in texture, providing a more
accurate representation of complex textures and improving the retrieval performance. By
encoding edge distribution across multiple directions, LTrP captures finer details that are
often lost in simpler models, leading to better differentiation between similar textures. In the
context of image retrieval, the use of LTrP significantly enhances the capability of content-
based image retrieval systems. The method’s performance is evaluated through standard
metrics such as average precision and average recall, which measure how well the system
retrieves relevant images from a database. Experimental results using the Corel 1000
database demonstrate that the LTrP-based retrieval algorithm outperforms traditional texture-
based methods like LBP in terms of both precision and recall. This improvement indicates
that LTrP is better at identifying and distinguishing subtle textural features, making it more
effective for real-world applications. Moreover, LTrP is adaptable to a variety of image types
and domains. In the case of CBIR, where the aim is to retrieve images based on content rather
than metadata or tags, the algorithm can be applied to diverse collections ranging from
natural images to medical images. The versatility of LTrP makes it suitable for applications
in fields such as remote sensing, where textures can vary dramatically depending on the type
of land or vegetation, or in medical imaging, where tissue textures are crucial for diagnosing
diseases. Its ability to adapt to different domains makes it a highly valuable tool for content-
based retrieval systems. The computational efficiency of LTrP allows for faster retrieval
times without sacrificing performance. As the size of image databases continues to grow, the
need for efficient retrieval algorithms becomes more critical. The LTrP algorithm provides a
balance between extracting rich texture information and maintaining relatively low
computational complexity. This ensures that it can be used effectively in large-scale image
retrieval systems, offering both speed and accuracy. The proposed method’s scalability
makes it ideal for commercial and industrial applications where fast, reliable image retrieval
is required.
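Since this entry (and several of those below) reports results as average precision and average recall, the following sketch shows one common way those two retrieval metrics are computed over a set of queries; the exact ranking depth and ground-truth definitions used in the surveyed papers may differ.

def average_precision_recall(retrieved_lists, relevant_sets):
    # retrieved_lists: one ranked list of retrieved image ids per query
    # relevant_sets: one set of ground-truth relevant image ids per query
    precisions, recalls = [], []
    for retrieved, relevant in zip(retrieved_lists, relevant_sets):
        hits = sum(1 for image_id in retrieved if image_id in relevant)
        precisions.append(hits / len(retrieved))   # fraction of retrieved images that are relevant
        recalls.append(hits / len(relevant))       # fraction of relevant images that were retrieved
    return sum(precisions) / len(precisions), sum(recalls) / len(recalls)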
An Image Indexing and Retrieval Algorithm Using Local Tetra Texture Features
(2013): Content-Based Image Retrieval provides a way to retrieve the needed information
based on image content. Earlier versions of CBIR were based on the Local Binary Pattern,
Local Derivative Pattern and Local Ternary Pattern. These methods extract information based
on the distribution of edges, which are coded using only two directions. The performance of
these methods is somewhat limited, and it can be improved by differentiating the edges in
more than two directions. The performance is increased by a four-direction code, the
so-called local tetra pattern (LTrP), for CBIR. This method encodes the relationship between
the referenced pixel and its neighbors based on directions that are calculated using the
first-order derivatives in the vertical and horizontal directions. This work proposes a generic
strategy to compute the LTrP from horizontal, vertical and diagonal derivatives.
Content-Based Image Retrieval (CBIR) has emerged as a prominent method for retrieving
images based on their visual content rather than metadata or keywords. Traditional CBIR
methods, such as Local Binary Patterns (LBP), Local Derivative Patterns (LDP), and Local
Ternary Patterns (LTP), rely on capturing information from image textures and edges using
limited directional coding. While these techniques have shown some success, they typically
struggle to capture complex edge relationships effectively, as they only consider two
directions for encoding the edges, which limits their ability to distinguish fine-grained texture
details. This restriction results in less accurate retrieval performance, particularly when
dealing with images with intricate or subtle textures. To overcome these limitations, this
work introduces a more advanced approach, the Local Tetra Pattern (LTrP), which
incorporates edge information from four directions instead of the traditional two. By
leveraging first-order derivatives in horizontal, vertical, and diagonal directions, LTrP can
more effectively capture the relationships between a pixel and its neighbors, providing richer
and more detailed texture information. This enhancement allows for improved indexing and
retrieval performance, particularly in more complex image datasets. The proposed strategy
aims to provide a generic and efficient method to compute LTrP, offering a significant
improvement over earlier techniques in terms of accuracy and robustness, thereby advancing
the state of the art in CBIR. The Local Tetra Pattern (LTrP) method addresses the limitations
of previous CBIR techniques by enhancing the encoding of edge information. While methods
like LBP, LDP, and LTP capture texture features based on two directions, LTrP uses four
distinct directional derivatives—vertical, horizontal, and diagonal. This broader directional
approach helps capture more detailed and varied edge information from an image. By
encoding texture features in four directions, LTrP improves the model's ability to distinguish
between fine-grained textures, which are crucial in complex image datasets where subtle
differences are critical for accurate retrieval. This makes LTrP particularly useful in
applications that deal with diverse image collections, such as medical imaging, remote
sensing, and digital libraries. In terms of practical implementation, the LTrP algorithm uses
first-order derivatives in multiple directions to compute the texture features of an image.
These features are then indexed to form a database, which can be searched efficiently when a
query image is provided. The retrieval process compares the query image with the indexed
images by measuring the similarity of their texture features. The use of four directional
derivatives results in a more comprehensive representation of an image's texture, improving
retrieval performance by reducing the chances of misidentification or incorrect matches,
especially in cases where traditional two-direction methods fall short. The proposed LTrP
method is designed to be computationally efficient, making it suitable for large-scale CBIR
systems where real-time processing and fast retrieval are necessary. The increased accuracy
provided by LTrP does not come at the cost of processing time, as the method can be
implemented with reasonable computational complexity. This allows it to scale effectively
when working with large image databases, which is a crucial factor in real-world
applications. Whether in online image search engines, medical image repositories, or digital
content management systems, the efficiency of LTrP ensures that users can retrieve relevant
images quickly and accurately. The versatility of LTrP makes it a valuable addition to various
domains beyond traditional image retrieval systems. Its ability to handle a wide variety of
textures makes it ideal for applications where image content can vary significantly, such as in
art and cultural heritage preservation, where images may have intricate patterns and textures.
Moreover, its application in medical imaging for diagnosing diseases from texture patterns in
X-rays, MRIs, or CT scans can significantly improve image analysis. By enhancing the
accuracy and robustness of image retrieval in diverse fields, LTrP holds the potential to
revolutionize how we search, analyze, and interpret large image datasets across various
industries. The adaptability of LTrP to different types of images also opens up new
possibilities for multimodal retrieval systems, where images are retrieved based on a
combination of visual and textual queries. For instance, in a multimedia database, where
images are annotated with keywords or descriptions, LTrP can be used alongside traditional
metadata-based search methods to provide more accurate results. By focusing on the texture
and edge relationships that define the image content, LTrP allows for more granular searches
that can capture visual similarities that are not immediately obvious from metadata alone.
This creates a more robust retrieval system that can be used in fields such as online retail,
where users may search for visually similar products, or in e-commerce platforms where
product recognition and comparison based on texture and pattern are vital for providing a
comprehensive shopping experience. The potential of LTrP in machine learning and artificial
intelligence (AI) applications can further enhance its impact. By integrating LTrP-based
feature extraction into AI systems, it becomes possible to improve image recognition tasks,
such as object detection, facial recognition, and scene understanding. The rich texture
information encoded by LTrP can be used to train more accurate machine learning models,
improving their ability to classify and identify objects in images, even in cases where
traditional methods struggle. This can be particularly beneficial for autonomous systems, like
self-driving cars, or surveillance systems, where understanding fine-grained visual details is
crucial for ensuring accurate and safe decision-making. As AI technologies continue to
advance, the incorporation of advanced texture analysis techniques like LTrP will likely play
a central role in enhancing the capabilities of visual recognition systems.
An Illumination Invariant Texture Based Face Recognition (2013): Automatic face
recognition remains an interesting but challenging open problem in computer vision. Poor
illumination is considered one of the major issues, since illumination changes cause large
variations in facial features. To resolve this, illumination normalization preprocessing
techniques are employed in this paper to enhance the face recognition rate. Methods such as
Histogram Equalization (HE), Gamma Intensity Correction (GIC), the normalization chain
and Modified Homomorphic Filtering (MHF) are used for preprocessing. Owing to their great
success, texture features are commonly used for face recognition, but these features are
severely affected by lighting changes. Hence, the texture-based models Local Binary Pattern
(LBP), Local Derivative Pattern (LDP), Local Texture Pattern (LTP) and Local Tetra Patterns
(LTrPs) are evaluated under different lighting conditions. In this paper, an illumination-
invariant face recognition technique is developed based on the fusion of illumination
preprocessing with local texture descriptors. The performance has been evaluated using the
YALE B and CMU-PIE databases, containing more than 1500 images. The results
demonstrate that MHF-based normalization gives a significant improvement in recognition
rate for face images with large illumination variations.
Automatic face recognition has been an active and popular research topic in computer vision
and pattern recognition due to its wide applications in security, forensic investigation, access
control and law enforcement [1]. Existing face recognition methods are mainly classified into
appearance-based (holistic) methods and feature-based methods [2]. In holistic methods, the
entire face image is represented as a high-dimensional vector. Due to the curse of
dimensionality, such vectors cannot be compared directly, so holistic methods use
dimensionality reduction techniques to resolve this problem. Examples of this approach are
Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Independent
Component Analysis (ICA), Support Vector Machine (SVM) methods, Laplacianfaces and so
on. Feature-based approaches use a set of observations obtained from the face image. Some
of the well-known feature-based methods are the Elastic Bunch Graph Method (EBGM),
Local Binary Pattern (LBP), Gaussian mixture models and Hidden Markov Models (HMM).
Compared to holistic approaches, feature-based methods have several advantages: they are
robust to variations in pose, illumination, occlusion, expression and localization errors. A
face recognition system has to tolerate real-time challenges due to illumination changes,
expression, pose, partial occlusion, ageing and so on. Illumination change is considered a
very crucial factor for face recognition. Several illumination preprocessing methods have
been proposed [3] to handle lighting variations. Among these, illumination normalization has
attracted much attention due to its simplicity and fidelity. Hence, in this paper four popular
illumination normalization methods are combined with texture descriptors for face
recognition. Illumination normalization techniques play a crucial role in mitigating the impact
of lighting variations on face recognition accuracy. Among the various preprocessing
methods, Histogram Equalization (HE) has been widely used for enhancing image contrast,
making features more distinguishable under varying lighting conditions. Similarly, Gamma
Intensity Correction (GIC) aims to adjust the overall brightness of the image, improving the
visibility of facial features under different lighting setups. The Modified Homomorphic
Filtering (MHF), which is also explored in this paper, enhances the face recognition process
by adjusting both the illumination and reflectance components of the image. This technique
helps to normalize lighting variations, leading to better recognition performance when the
illumination conditions are inconsistent. The fusion of these preprocessing techniques with
texture-based descriptors, such as Local Binary Pattern (LBP), Local Derivative Pattern
(LDP), Local Texture Pattern (LTP), and Local Tetra Patterns (LTrPs), improves the
robustness of face recognition systems in diverse and uncontrolled environments. The
combination of illumination normalization and advanced texture descriptors significantly
enhances the performance of face recognition systems, especially in real-time applications.
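As a rough illustration of two of the preprocessing steps named above, the sketch below applies gamma intensity correction and histogram equalization to a grayscale image before any texture descriptor is computed. The gamma value and the 8-bit input range are assumptions; the modified homomorphic filtering step used in the surveyed paper is not shown.

import numpy as np

def gamma_intensity_correction(img, gamma=0.5):
    # img: grayscale image as uint8; brightens dark regions when gamma < 1
    normalized = img.astype(np.float64) / 255.0
    return (np.power(normalized, gamma) * 255).astype(np.uint8)

def histogram_equalization(img):
    # img: grayscale image as uint8; spreads the intensity histogram over the full range
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalized cumulative distribution
    return (cdf[img] * 255).astype(np.uint8)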
The proposed method, which integrates preprocessing techniques like MHF with LTrP-based
feature extraction, demonstrates improved performance in scenarios where traditional face
recognition techniques struggle due to severe lighting conditions. This fusion approach
ensures that the texture features remain invariant to illumination changes, providing reliable
recognition across different lighting conditions. The effectiveness of the proposed method is
validated using well-known face datasets, such as the YALE B and CMU-PIE databases,
which contain a wide range of facial images captured under various lighting scenarios. The
results show that the proposed illumination-invariant face recognition technique outperforms
other methods, offering a robust solution to the challenges posed by lighting variations. In
addition to the improvements in face recognition performance, the fusion of illumination
normalization and texture descriptors enables the development of more efficient real-time
systems. The integration of preprocessing techniques reduces the computational burden by
normalizing lighting conditions before feature extraction. This allows for faster processing
times, which is essential for applications requiring immediate results, such as security
systems, surveillance, and biometric authentication. The method's ability to handle
challenging conditions such as varying lighting, expressions, and occlusions makes it highly
suitable for deployment in real-world environments. Furthermore, this approach opens up
possibilities for integrating face recognition systems with other security measures, such as
motion detection or multi-modal biometrics, to create more comprehensive and secure
identification solutions. Looking ahead, the proposed technique also has significant potential
for applications beyond face recognition, particularly in fields such as emotion recognition,
age estimation, and gesture-based authentication. By improving the robustness of texture-
based models against illumination changes, the fusion of illumination normalization with
LTrP could be extended to various other areas of human-computer interaction. Additionally,
the combination of these methods could be applied in surveillance systems, where
environmental factors like lighting fluctuations are common. Future work could focus on
further optimizing the preprocessing steps and exploring how the approach can be adapted to
handle dynamic lighting conditions in more complex environments, such as outdoor or low-
light situations. Moreover, advancements in machine learning algorithms could further
enhance the system's ability to generalize across different face datasets, improving its
performance in diverse real-world applications.
Content-Based Image Retrieval using Enhanced Local Tetra Pattern (2018): The content
based image retrieval system pays attention to color, texture, pattern, shape, faces, etc. In this
paper the pattern is taken as the key feature for image retrieval. The standard Local Binary
Pattern (LBP) and Local Ternary Pattern (LTP) encode the relationship between the
referenced pixel and its surrounding neighbors by computing gray-level difference. The Local
Derivative Pattern (LDP) encodes the relationship between the (n-1) order derivatives of the
center pixel and its neighbors separately. Local Tetra Pattern (LTrP) represents the image by
the directional information. It determines the relationship in terms of the intensity and
directional information between the referenced pixels and their neighbors. As the directions
used are only four, the image is encoded using only four distinct values. In this paper, eight
directions are used to increase the effectiveness of image retrieval. The Enhanced LTrP
(ELTrP) takes into account the horizontal, vertical and diagonal pixels for the derivative
calculation, thereby improving the effectiveness of the image retrieval. The effectiveness of
the image retrieval is measured in terms of the Average Retrieval Rate (ARR). The ARR with
the proposed ELTrP increases by up to 12% when compared with the existing patterns used
for image retrieval.
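The Average Retrieval Rate reported above can be computed in several ways; one common convention, sketched below under that assumption, is to look at the top-n ranked results for each query, where n is the number of images in the query's ground-truth group, and average the fraction of them that are relevant.

def average_retrieval_rate(retrieved_lists, relevant_sets):
    # retrieved_lists: one ranked list of retrieved image ids per query
    # relevant_sets: one set of ground-truth relevant image ids per query
    rates = []
    for retrieved, relevant in zip(retrieved_lists, relevant_sets):
        top = retrieved[:len(relevant)]     # inspect as many results as there are relevant images
        rates.append(sum(1 for image_id in top if image_id in relevant) / len(relevant))
    return sum(rates) / len(rates)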
In recent years, there has been a massive increase in digital libraries due to the easy
availability of web and digital cameras, portable devices, and mobile phones equipped with
cameras. This makes the task of database management tedious, so there is a need for
automatic database management, one approach to which is content-based image retrieval
(CBIR). The primary step in CBIR is to extract the features of the images; the features may
be color, texture, shape, faces, etc. A detailed study on CBIR is presented in [1] [4]. Of all the
features, texture has been used extensively, as it captures the prominent characteristics of an
image. The use of texture was introduced by Moghaddam et al. using the wavelet correlogram
[5], [6], and it has been shown that the performance can be further improved by optimizing
the quantization thresholds [7] for the CBIR application. Texture-based image retrieval is
widely used in manufacturing industries, as it is suited for product identification. Other
research in CBIR includes the discrete wavelet transform (DWT) for texture classification by
Ahmadian et al. [8] and the use of a generalized Gaussian density with the Kullback-Leibler
distance for texture image retrieval [9]. Since the DWT is limited to three directions
(horizontal, vertical, and diagonal), transforms such as the Gabor transform (GT) [10] and
rotated wavelet filters [11] have been introduced. Other transforms for texture image retrieval
include dual-tree complex wavelet filters (DT-CWFs), DT rotated CWFs [12], and rotational
invariant complex wavelet filters [13]. In recent years, the
advancements in Content-Based Image Retrieval (CBIR) have been driven by the
development of more sophisticated feature extraction techniques. Traditional methods such as
Local Binary Pattern (LBP) and Local Ternary Pattern (LTP) have laid the foundation for
texture-based image retrieval, but their limitations in capturing finer details and handling
complex textures led to the development of more advanced techniques like Local Derivative
Pattern (LDP) and Local Tetra Pattern (LTrP). These methods, while effective in encoding
pixel relationships, were initially restricted to only four directional codes, which limited their
ability to fully capture the complexity of textures in images. The introduction of Enhanced
Local Tetra Pattern (ELTrP) overcomes this limitation by incorporating eight directional
codes for encoding the intensity and directional information of the referenced pixels and their
neighbors, significantly improving the retrieval accuracy. The effectiveness of ELTrP has
been demonstrated through various image retrieval tests. When compared with standard LBP,
LTP, and LTrP, the Enhanced Local Tetra Pattern significantly increases the Average
Retrieval Rate (ARR). This improvement can be attributed to the method's ability to account
for a more comprehensive set of directional information, which enables it to capture more
nuanced texture features that are otherwise overlooked by simpler methods. By considering
both horizontal and vertical directions as well as diagonal orientations, ELTrP provides a
more detailed representation of an image's texture, leading to more accurate image retrieval,
particularly in cases where the textures are complex and subtle. As a result, ELTrP has
proven to be a valuable enhancement in the field of CBIR, offering a substantial boost in
performance. The widespread adoption of mobile devices, digital cameras, and portable
imaging technologies has led to an exponential increase in the volume of digital images,
making effective image retrieval techniques essential for managing large image databases. In
this context, CBIR systems are becoming more important, particularly in applications such as
medical imaging, remote sensing, and digital libraries. By using texture-based features for
image retrieval, ELTrP has found applications in various industries, including security,
manufacturing, and even cultural heritage preservation. In these sectors, the ability to
accurately retrieve images based on texture information allows for more efficient and precise
data management, whether it's identifying specific objects, comparing similar textures in
product designs, or cataloging artworks. Looking forward, the next step in advancing CBIR
using ELTrP could involve combining it with other advanced techniques, such as deep
learning-based models, to further enhance retrieval performance. While ELTrP has proven
effective in improving image retrieval accuracy by enhancing texture feature extraction, it
can potentially be complemented by neural networks or convolutional neural networks
(CNNs) to learn more complex and abstract representations of image features. Furthermore,
future research could focus on optimizing the computational efficiency of ELTrP, making it
more suitable for real-time image retrieval applications in large-scale databases. As CBIR
continues to evolve, the integration of multiple feature extraction techniques and machine
learning models will likely drive the development of even more powerful and efficient
systems.
Incorporating Color Feature On LBP-Based Image Retrieval (2013): The Local Binary
Pattern (LBP) operator and its variants play an important role as the image feature extractor
in the textural image retrieval and classification. The LBP-based operator extracts the textural
information of an image by considering the neighboring pixel values. However, the LBP-
based feature is not a good candidate for capturing the color information of an image, making
it less suitable for measuring the similarity of color images with rich color information.
This work overcomes this problem by adding an additional color feature, namely Color
Histogram Feature (CHF), along with the LBP-based feature in the image retrieval domain.
Experimental result shows that the hybrid CHF and LBP-based feature presents a promising
result and outperforms the existing methods over several image databases.
Texture analysis has been intensively developed for pattern recognition and computer vision
applications because of its ability to capture prominent features. The LBP operator [1] is a
well-known approach to texture analysis, which has been reported and developed by many
researchers. LBP simply performs a comparison between the center pixel value (the currently
processed pixel) and the neighboring pixel values in the grayscale space, and encodes the
thresholded bit values into a new representation as decimal numbers. The LBP-based operator
only captures the textural information of an image for deriving the corresponding image
descriptor, and the LBP-based feature has poor performance in describing the color
distribution of an image. Both color and texture features are in high demand as image
descriptors for a good color image retrieval task. In this paper, an additional color feature,
namely the Color Histogram Feature (CHF), is incorporated with the LBP-based feature for
image classification and retrieval. As documented in the experimental results, the two types
of features compensate for each other and yield promising performance for the image
retrieval application. The fusion of CHF and LBP-based features can also be incorporated
into the motion detection scheme of a video surveillance system [11]. A contrast enhancement
algorithm [12] can be applied to the input before LBP processing, further improving the
overall performance of this strategy. In
the domain of image retrieval, texture plays a crucial role in identifying key features that
distinguish one image from another. The Local Binary Pattern (LBP) has been one of the
most successful techniques for texture feature extraction due to its simplicity and
effectiveness in capturing the local texture information. However, LBP operates solely on the
intensity values of the image and is not capable of incorporating color information, which is
crucial for distinguishing images that have similar textures but different color distributions.
To address this limitation, this work proposes the fusion of LBP with Color Histogram
Feature (CHF), a color-based descriptor that captures the global distribution of color within
an image. By combining these two features, the hybrid method improves the retrieval
process, as it now accounts for both the texture and color aspects of the image, which are
essential for a more accurate image classification and retrieval system. Color Histogram
Features (CHF) are a well-established technique in image retrieval because they summarize
the distribution of colors across an image. Unlike LBP, which focuses only on the local
texture patterns, CHF provides a global representation of an image's color composition. The
integration of CHF with LBP compensates for the weaknesses of each method individually.
While LBP is robust to lighting and small variations in texture, it often struggles with color
variations. On the other hand, CHF is excellent for capturing color information but does not
provide any texture details. By combining these two features, the proposed approach offers a
more comprehensive representation of the image, leading to better retrieval performance. The
experimental results show that the hybrid approach outperforms standard methods that rely
on either LBP or CHF alone, particularly in databases with rich color variations. In addition
to improving retrieval performance for static images, the combined LBP and CHF features
can be effectively applied in video surveillance systems for motion detection. In such
systems, it is often necessary to track objects based on both texture and color information,
especially in dynamic environments where lighting and environmental conditions can vary.
By incorporating both texture and color descriptors, the hybrid feature representation enables
more accurate detection and classification of moving objects, even under challenging
conditions such as occlusion or lighting changes. This hybrid feature approach can enhance
video surveillance systems by providing a more reliable and robust method for identifying
and tracking objects in real-time. The fusion of texture and color features can also be
extended to other computer vision applications, such as object recognition, scene
understanding, and image classification. In object recognition, for instance, distinguishing
between objects with similar textures but different colors becomes crucial. The combined
LBP and CHF feature can help improve recognition accuracy by providing a richer set of
descriptors that capture both the fine details of the texture and the broader color context.
Similarly, in image classification tasks, where the goal is to categorize images into predefined
classes, the fusion of texture and color features offers a more robust and discriminative
feature set, improving classification performance, especially in complex image datasets
where both color and texture are important cues.
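A minimal sketch of the kind of hybrid descriptor discussed in this entry is shown below: a global color histogram is concatenated with an LBP histogram so that a single distance measure sees both cues. The per-channel quantization, the weighting factor and the assumption that an LBP histogram is already available are illustrative choices, not details taken from the surveyed paper.

import numpy as np

def color_histogram_feature(rgb_img, bins_per_channel=8):
    # rgb_img: (H, W, 3) uint8 image; returns a normalized joint RGB histogram
    quantized = (rgb_img.astype(np.int64) // (256 // bins_per_channel)).reshape(-1, 3)
    indices = (quantized[:, 0] * bins_per_channel**2
               + quantized[:, 1] * bins_per_channel
               + quantized[:, 2])
    hist = np.bincount(indices, minlength=bins_per_channel**3).astype(np.float64)
    return hist / hist.sum()

def hybrid_descriptor(rgb_img, lbp_hist, alpha=0.5):
    # Concatenate the color histogram with a normalized LBP histogram; alpha weights the color part
    chf = color_histogram_feature(rgb_img)
    lbp = lbp_hist / (lbp_hist.sum() + 1e-12)
    return np.concatenate([alpha * chf, (1.0 - alpha) * lbp])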
An Image Removal Using Local TetraPatterns for Content Based Image Retrieval
(2015): Content-based image retrieval is a technique for the automatic retrieval of images
from a large database that closely match the query image. Many research works have been
undertaken in the past decade to design efficient image retrieval systems for large databases.
In many fields, such as industry, education, biomedicine and research, the amount of image
data that has to be stored, managed, searched and retrieved grows continuously. In this paper,
we propose a new image retrieval technique for content-based image retrieval (CBIR) using
the local tetra pattern (LTP). The local tetra pattern (LTP) and local binary pattern (LBP)
determine the correlation based on the grey-level difference between the referenced pixel and
its surrounding neighbours. The proposed technique encodes the relationship between the
referenced pixel and its neighbours via first-order derivatives in the vertical and horizontal
directions. The proposed algorithm has been tested on different real images, and its
performance is found to be somewhat acceptable when compared with the performance of
conventional content-based image retrieval techniques. The performance of the proposed
method is calculated in terms of average precision and average recall.
There is a need to develop a proficient technique for automatically retrieving the desired
image from a large database. To retrieve images from a database, two methods are common
in practice: text-based image retrieval and visual-based, i.e. content-based, image retrieval
(CBIR). In text-based image retrieval systems, images are characterized by text information
such as keywords and captions. Many communities retrieved images using text-based
database management systems (DBMS) as early as the 1970s. In this technique, the user
retrieves images using keywords, and images are stored in the database with text annotations.
Various techniques used in text retrieval include the bag-of-words approach, stop-word
removal, spelling correction, etc. In text-based systems, different problems occur, such as
incorrect spelling, annotations that are never complete, and the same thing being described in
different ways [1]. It is not possible to retrieve images precisely in a text-based image
retrieval system, and in the case of large databases (hundreds of thousands of images) the
results become inaccurate. Content-based image retrieval (CBIR) has become an essential tool in managing
large image databases, particularly in fields like industry, healthcare, and research. Unlike
text-based image retrieval systems, which rely on manual annotations, CBIR systems directly
analyze the content of images. In CBIR, images are represented by features such as texture,
color, and shape, making it easier to retrieve images that are visually similar to the query
image. One of the key challenges in CBIR is efficiently handling large image datasets and
ensuring the accuracy of image retrieval. The proposed method using Local Tetra Pattern
(LTP) provides an effective solution by capturing more detailed texture features through first-
order derivatives in both vertical and horizontal directions, thereby improving the overall
retrieval accuracy compared to traditional methods. The Local Binary Pattern (LBP) has been
widely used in texture-based image retrieval, but it only encodes the relationship between the
center pixel and its neighbors in two directions. While LBP is efficient, it does not fully
capture the complexity of image textures, especially in images with intricate details or subtle
variations. To overcome this limitation, the Local Tetra Pattern (LTP) is introduced. LTP
extends the concept of LBP by considering first-order derivatives in four directions—
horizontal, vertical, and diagonal. This enhancement allows LTP to better capture the
directional texture information, leading to more accurate retrieval results in cases where
traditional LBP would fall short. The proposed method incorporates this extended feature
extraction process, improving the precision and recall of image retrieval systems. Another
advantage of the proposed LTP-based image retrieval technique is its robustness to variations
in lighting and noise. Traditional texture-based methods like LBP may struggle when the
images are subject to changes in illumination or when there is noise present in the image. By
considering additional directional information, LTP enhances the feature extraction process,
making it more resilient to these challenges. The performance of the proposed method is
evaluated using real images, and the results demonstrate that the LTP-based system provides
a significant improvement in terms of average precision and recall, outperforming
conventional techniques. This makes the proposed method a viable solution for applications
where image quality and consistency are not always guaranteed, such as in medical imaging
or remote sensing. The application of LTP in content-based image retrieval systems can be
further expanded to large-scale image databases. As the amount of digital imagery continues
to grow, efficient and scalable retrieval techniques are becoming increasingly important. The
proposed LTP-based retrieval method offers a promising solution for handling large
databases, as it allows for more accurate retrieval even when dealing with diverse and
complex image collections. Future work can explore the integration of LTP with other
advanced techniques, such as deep learning-based feature extraction methods, to further
improve the performance of image retrieval systems. Additionally, combining LTP with other
modalities such as shape, color, and spatial information could provide a more comprehensive
representation of images, leading to even more accurate retrieval results in various domains.
A Framework for Medical Image Retrieval Using Local Tetra Patterns (2013): In the
medical field, the digital images used for diagnostics and therapy are produced in ever-
increasing quantities, so there is a need for feature extraction and classification of medical
images for easy and efficient retrieval. In this paper, a framework based on the Local Tetra
Pattern and the Fourier Descriptor for content-based image retrieval from medical databases
is proposed. The proposed approach formulates the relationship between the reference
(centre) pixel and its neighbours, considering the vertical and horizontal directions calculated
using the first-order derivatives. The texture feature of an image is of prime concern; the
images filtered by this feature are more appropriate responses to the query image. In this
research work, the association of the Euclidean Distance (ED) with the local tetra pattern is
also explored. The proposed framework is successfully tested on the standard Messidor
dataset of 1200 retinal images, which are annotated with retinopathy and macular edema
grades. A tool, SS-SVM, is applied to the binary patterns of endoscopy, dental, skull and
retinal images for classification, which results in better classification of images across the
various datasets, thus improving the classifiers.
The rapid advancement in medical imaging technology has led to an explosion in the volume
of digital images used for diagnostic purposes, ranging from retinal scans to X-rays and CT
scans. This surge in medical image data necessitates the development of efficient and
effective techniques for feature extraction, classification, and retrieval to ensure that relevant
images can be quickly and accurately located for clinical decision-making. Content-Based
Image Retrieval (CBIR) plays a crucial role in this context, enabling the retrieval of images
based on their visual content rather than relying on metadata. Among various CBIR
techniques, texture-based methods have garnered significant attention due to their ability to
capture subtle patterns and variations in medical images, which are often critical for diagnosis
and treatment planning. However, traditional methods that rely on limited directional
information may fail to capture the full complexity of medical image textures. To address this
challenge, this paper proposes a novel framework for medical image retrieval that combines
Local Tetra Patterns (LTrP) with Fourier Descriptors to enhance texture feature extraction.
By considering relationships between the central pixel and its neighbors in multiple directions
—both vertical and horizontal—the LTrP method provides a richer representation of image
texture, which is essential for accurate retrieval. Furthermore, the integration of Euclidean
Distance (ED) with LTrP enhances the retrieval accuracy by improving the distinction
between images based on their feature characteristics. The proposed framework is tested on
the Messidor dataset, which includes 1200 retinal images annotated with retinopathy and
macular edema grades, demonstrating its effectiveness in accurately classifying and
retrieving medical images. Additionally, the use of the SS-SVM tool for classification on
various datasets, including endoscopy, dental, and skull images, further validates the
robustness of the proposed approach, leading to improved classification performance across
different medical imaging applications. The increasing demand for efficient medical image
retrieval systems is driven by the rapid expansion of medical imaging technologies, including
modalities like MRI, CT scans, and retinal imaging. These imaging techniques generate vast
amounts of data that require sophisticated methods for indexing, retrieval, and analysis. In the
context of medical image retrieval, texture-based methods are particularly effective because
they can capture the subtle variations and patterns in medical images that are often critical for
diagnosis. The Local Tetra Pattern (LTrP) method, with its ability to consider multiple
directions in the image, provides a more detailed representation of textures compared to
traditional methods like Local Binary Pattern (LBP) and Local Ternary Pattern (LTP). By
encoding pixel relationships in four directions—vertical, horizontal, and diagonal—the LTrP
method enhances the accuracy of feature extraction, which is crucial in medical imaging
where the fine details often hold diagnostic significance. Incorporating Fourier Descriptors
with LTrP further improves the framework by capturing both the local and global features of
an image. Fourier Descriptors are known for their ability to represent shapes and contours in
images, making them ideal for analyzing the structural elements of medical images. By
combining LTrP with Fourier Descriptors, the proposed method effectively captures both the
fine texture and the larger structural features of medical images, providing a more
comprehensive representation for image retrieval. This fusion of local and global feature
extraction techniques helps improve the retrieval process, especially in complex medical
image databases, where subtle differences in texture and shape can be crucial for diagnosis.
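As a rough sketch of the shape side of this framework, the snippet below computes simple Fourier descriptors from an object boundary: the boundary is treated as a complex sequence, transformed with the FFT, and the low-order coefficient magnitudes are kept. Dropping the DC term and normalizing by the first harmonic are common choices for translation and scale invariance; the number of coefficients and the exact normalization used in the surveyed paper are assumptions.

import numpy as np

def fourier_descriptors(boundary, n_coeffs=16):
    # boundary: (N, 2) array of (x, y) points sampled along a closed contour
    boundary = np.asarray(boundary, dtype=np.float64)
    complex_boundary = boundary[:, 0] + 1j * boundary[:, 1]
    coeffs = np.fft.fft(complex_boundary)
    coeffs = coeffs[1:n_coeffs + 1]                 # drop the DC term (translation invariance)
    magnitudes = np.abs(coeffs)
    return magnitudes / (magnitudes[0] + 1e-12)     # normalize by the first harmonic (scale invariance)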
The integration of Euclidean Distance (ED) with LTrP further strengthens the retrieval
accuracy by improving the classification and differentiation between similar images. The use
of Euclidean Distance as a metric allows for a more precise comparison of feature vectors
extracted from medical images, ensuring that the most relevant images are retrieved in
response to a query. This approach reduces the chances of false positives and ensures that the
retrieved images are the closest matches to the query, which is particularly important in
clinical settings where accurate information is critical. The effectiveness of this combined
approach has been demonstrated through the use of the SS-SVM (Support Vector Machine)
tool, which enhances the classification and retrieval performance across different medical
datasets, such as retinal, dental, and skull images. The proposed framework's success on the
Messidor dataset, containing 1200 retinal images, highlights its potential for broader
applications in various areas of medical imaging. By improving the retrieval accuracy and
classification performance, this framework provides a valuable tool for clinicians and
researchers who need to quickly access relevant medical images for diagnosis and treatment
planning. The framework's versatility and robustness, as demonstrated through its application
to different medical image types, suggest that it could be further extended to other medical
imaging modalities. Future research could focus on enhancing the computational efficiency
of the framework, integrating it with real-time medical imaging systems, and exploring its
use in more complex, multi-modal medical image databases.

Conclusion
This paper investigates the accuracy, error rate, precision, recall, and F-score obtained when
using LDTrP for face recognition. Local derivative patterns are used to capture higher-order
local derivative variations, and to compare the distributions of micropatterns, histogram
intersection is used as the similarity measure. The experiments conducted on the ORL and
JAFFE databases demonstrate that the proposed higher-order LDTrP achieves better
performance than LBP, LDP, LTrP and LTP. Owing to its effectiveness, the proposed method
can also be used on colour images. The proposed method has focused on achieving a higher
accuracy rate, but it can also be improved to reduce space and time consumption. A minimal
sketch of the histogram-intersection matching step is given below.
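The sketch below shows histogram-intersection similarity and a nearest-neighbour identification step over a gallery of pattern histograms. It assumes the histograms have already been L1-normalized and uses a simple argmax decision; the block-wise histogram construction used for the actual LDTrP descriptor is not shown.

import numpy as np

def histogram_intersection(h1, h2):
    # h1, h2: normalized feature histograms of equal length; a larger value means more similar
    return np.minimum(h1, h2).sum()

def identify(probe_hist, gallery_hists, gallery_labels):
    # Return the label of the gallery histogram most similar to the probe histogram.
    similarities = [histogram_intersection(probe_hist, g) for g in gallery_hists]
    return gallery_labels[int(np.argmax(similarities))]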
The results of this study indicate that the higher-order Local Derivative Tetra Pattern (LDTrP) method outperforms traditional texture-
based methods such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP), Local
Tetra Pattern (LTrP), and Local Ternary Pattern (LTP) for face recognition tasks. The
improvements in accuracy, precision, recall, and F-score demonstrate the effectiveness of the
LDTrP approach in capturing fine-grained texture variations that are essential for
distinguishing facial features. This suggests that LDTrP is particularly suitable for
applications where facial recognition accuracy is critical, such as security systems, access
control, and forensic analysis. The flexibility of the LDTrP method to be applied to color
images broadens its potential applications beyond grayscale images. Since color contains
important information for distinguishing facial features, integrating color data with the
LDTrP framework allows for a more comprehensive representation of facial textures. This
can further enhance the performance of face recognition systems, especially in real-world
scenarios where variations in lighting and color can pose significant challenges for traditional
methods. While the proposed LDTrP method demonstrates promising results, there is room
for improvement in terms of computational efficiency. Reducing the space and time
complexity of the algorithm could make it more suitable for real-time applications and large-
scale databases. Future work could focus on optimizing the algorithm to reduce the
computational burden while maintaining or improving its accuracy. Exploring parallel
processing techniques or dimensionality reduction strategies could be potential avenues for
enhancing the efficiency of LDTrP-based face recognition systems, making them more
practical for deployment in resource-constrained environments.

References
[1] T. Ahonen, A. Hadid and M. Pietikainen, “Face Description with Local Binary Patterns:
Application to Face Recognition”, IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. 28, No. 12, pp. 2037-2041, 2006.
[2] M. Turk and A. Pentland, “Eigenfaces for Recognition”, Journal of Cognitive
Neuroscience, Vol. 3, No. 1, pp. 71-86, 1991.
[3] P. Belhumeur, J. Hespanha and D. Kriegman, “Eigenfaces vs. Fisherfaces: Recognition
using Class Specific Linear Projection”, IEEE Transactions on Pattern Analysis and Machine
Intelligence, Vol. 19, No. 7, pp. 711-720, 1997.
[4] Baochang Zhang, Yongsheng Gao, Sanqiang Zhao and Jianzhuang Liu, “Local Derivative
Pattern Versus Local Binary Pattern: Face Recognition with High-order Local Pattern
Descriptor”, IEEE Transactions on Image Processing, Vol. 19, No.2, pp. 533-544, 2010.
[5] W. Zhao, R. Chellappa, P. J. Phillips and A. Rosenfeld, “Face Recognition: A Literature
Survey”, ACM Computing Surveys, Vol. 35, No. 4, pp. 399-459, 2003.
[6] R. Chellappa, C. L. Wilson and D. J. Kriegman, “Eigenfaces vs. Fisherfaces: A Survey”,
Proceedings of IEEE, Vol. 83, No. 5, pp. 705-740, 1995.
[7] Hongming Zhang, Wen Gao, Xilin Chen and Debin Zhao, “Learning Informative Features
for Spatial Histogram based Object Detection”, Proceedings of IEEE International Joint
Conference on Neural Networks, Vol. 3, pp. 1806-1811, 2005.
[8] S. Murala, R.P. Maheshwari and R. Balasubramanian, “Local Tetra Patterns: A New
Feature Descriptor for Content-based Image Retrieval System”, IEEE Transactions on Image
Processing, Vol. 21, No. 5, pp. 2874-2886, 2012.
[9] The Database of Faces, Available at: http://www.cl.cam.ac.uk/Research/DTG/attarchive:pub/data/att_faces
[10] Yong Rui, Thomas S. Huang and Shih-Fu Chang, “Image Retrieval: Current Techniques,
Promising Directions and Open Issues”, Journal of Visual Communication and Image
Representation, Vol. 10, No. 1, pp. 39-62, 1999.
[11] Xiaoyang Tan and Bill Triggs, “Enhanced Local Texture Feature sets for Face
Recognition under Difficult Lighting Conditions”, Proceedings of 3rd International
Workshop on Analysis and Modeling of Faces and Gestures, pp. 1635-1650, 2010.
[12] Timo Ahonen, Abdenour Hadid and Matti Pietikainen, “Face Description with Local
Binary Patterns: Application to Face Recognition”, IEEE Transactions on Pattern Analysis
and Machine Intelligence, Vol. 28, No. 12, pp. 2037-2041, 2006.
[13] T. Ojala, M. Pietikainen and D. Harwood, “A Comparative Study of Texture Measures
with Classification based on Feature Distributions”, Pattern Recognition, Vol. 29, No. 1, pp.
51-59, 1996.
[14] Wen-Hung Liao and Ting-Jung Young, “Texture Classification using Uniform Extended
Local Ternary Patterns”, Proceedings of IEEE International Symposium on Multimedia, pp.
191-195, 2010.
[15] K. Thangadurai, S. Bhuvana and R. Radhakrishnan, “An Improved Local Tetra Pattern
for Content based Image Retrieval”, Journal of Global Research in Computer Science, Vol. 4,
No. 4, pp. 37-42, 2013.
[16] JAFFE images, Available at: http://www.kasrl.org/jaffe_info.html
