
Digital Image Properties and Color Images
Digital Image Properties
1. Metrics and topological properties
2. Histograms
3. Entropy
4. Visual Perception of the Image
5. Image Quality
6. Noise in Images
Metrics and Topological Properties
• Metrics in digital imaging refer to measurable properties that help
quantify different aspects of an image. Some common metrics
include:
• Mean and Standard Deviation: Mean represents the average
intensity of the pixels in the image, while standard deviation
measures the contrast or variation in pixel intensities.
• Signal-to-Noise Ratio (SNR): This metric measures the level of
signal compared to the noise in the image, indicating the image
quality.
• Peak Signal-to-Noise Ratio (PSNR): It compares the maximum
possible signal power of an image to the noise, providing a
measure of the image's fidelity compared to a reference.
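These metrics are short to compute with NumPy; the sketch below is illustrative (the function names and the 8-bit maximum of 255 are assumptions of this example, not part of any standard API):

```python
import numpy as np

def image_stats(img):
    """Mean intensity and standard deviation (a simple contrast measure)."""
    img = np.asarray(img, dtype=np.float64)
    return img.mean(), img.std()

def psnr(reference, distorted, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB of a distorted image vs. a reference."""
    reference = np.asarray(reference, dtype=np.float64)
    distorted = np.asarray(distorted, dtype=np.float64)
    mse = np.mean((reference - distorted) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher PSNR means the distorted image is closer to the reference; for 8-bit images, values above roughly 30 dB are often considered good.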
• Topological Properties refer to the spatial and structural aspects of an
image, such as:

• Connected Components: These are groups of pixels with the same
or similar properties, forming distinct objects within the image.
• Euler Number: This describes the topology of binary images by
considering the number of objects and the number of holes in
those objects.
• Contours and Edges: These are the boundaries that define the
shape and structure of objects within an image.
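Connected components can be found with a simple breadth-first search. The pure-Python sketch below (illustrative names, not a library API) labels the 1-pixels of a binary grid under 4- or 8-connectivity:

```python
from collections import deque

def connected_components(binary, connectivity=4):
    """Label connected components of 1-pixels in a binary 2-D grid.
    Returns (label_grid, number_of_components)."""
    rows, cols = len(binary), len(binary[0])
    labels = [[0] * cols for _ in range(rows)]
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if connectivity == 8:
        steps += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not labels[r][c]:
                current += 1                       # start a new component
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:                       # breadth-first flood fill
                    y, x = queue.popleft()
                    for dy, dx in steps:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current
```

The Euler number mentioned above can then be computed as the number of components minus the number of holes.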
Histograms
• A histogram is a graphical representation of the distribution of pixel
intensities in an image. It shows the frequency of each intensity value
(usually ranging from 0 to 255 for an 8-bit grayscale image).
Histograms are useful for:
• Assessing Image Contrast: A wide histogram spread indicates high
contrast, while a narrow spread indicates low contrast.
• Identifying Image Brightness: The histogram's peak shows the
most common intensity, which can suggest whether an image is
mostly bright or dark.
• Image Equalization: Histogram equalization is a technique to
enhance contrast by redistributing pixel intensities across the
available range.
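Both the histogram and histogram equalization are compact to express with NumPy. A sketch for 8-bit grayscale (function names are illustrative):

```python
import numpy as np

def histogram(img, levels=256):
    """Count how many pixels take each intensity value 0..levels-1."""
    return np.bincount(np.asarray(img).ravel(), minlength=levels)

def equalize(img, levels=256):
    """Histogram equalization: map intensities through the scaled CDF."""
    img = np.asarray(img)
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                              # normalize CDF to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(img.dtype)
    return lut[img]                                  # apply lookup table
```

Mapping each intensity through its cumulative distribution spreads a narrow histogram across the full 0-255 range, which is exactly the contrast enhancement described above.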
Entropy
• Entropy in the context of digital images refers to the measure of
randomness or complexity within the image. It is calculated using the
Shannon entropy formula:

H = −Σk p(k) log2 p(k)

where p(k) is the probability of gray level k, estimated from the
normalized histogram.

• Applications: Entropy is often used in image compression and texture
analysis. Higher entropy indicates more information content, while
lower entropy indicates less.
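A direct NumPy implementation of the Shannon entropy of an 8-bit image, treating the normalized histogram as the probability distribution p(k):

```python
import numpy as np

def shannon_entropy(img, levels=256):
    """Shannon entropy in bits per pixel: H = -sum_k p(k) * log2 p(k)."""
    hist = np.bincount(np.asarray(img).ravel(), minlength=levels)
    p = hist / hist.sum()
    p = p[p > 0]                    # 0 * log(0) is taken as 0 by convention
    return float(-np.sum(p * np.log2(p)))
```

A constant image has entropy 0 (no information); an image with all 256 levels equally likely reaches the maximum of 8 bits per pixel.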
Visual perception of the image
• Visual perception is the process by which the brain interprets and
makes sense of visual information received from the eyes. This
process is influenced by various factors such as contrast, acuity, visual
illusions, and perceptual grouping.

Contrast
• Contrast refers to the difference in luminance or color that makes an
object distinguishable from other objects and the background.
• In terms of image perception, higher contrast usually makes it easier
for viewers to identify objects, details, and edges.
• Types of Contrast:
• Luminance Contrast: The difference in brightness between the
object and its background. For example, a white object on a black
background has high luminance contrast.
• Color Contrast: The difference in color between objects, which can
help in distinguishing between different areas in an image, even if
the luminance contrast is low.

• Effect on Perception: High contrast enhances the perception of edges
and details, making objects more easily identifiable. Low contrast can
make objects blend into the background, reducing visibility.
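Luminance contrast is often quantified with the Michelson or RMS definitions. Neither formula is spelled out in the text above, so the ones used here are standard textbook definitions, not the slides' own:

```python
import numpy as np

def michelson_contrast(img):
    """(Lmax - Lmin) / (Lmax + Lmin): suited to periodic patterns."""
    img = np.asarray(img, dtype=np.float64)
    lo, hi = img.min(), img.max()
    return (hi - lo) / (hi + lo) if hi + lo > 0 else 0.0

def rms_contrast(img):
    """Standard deviation of intensities: suited to natural images."""
    return float(np.asarray(img, dtype=np.float64).std())
```

A white object on a black background (the example above) gives the maximum Michelson contrast of 1.0.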
• Acuity
• Visual acuity refers to the sharpness or clarity of vision, which
allows the viewer to perceive fine details. Acuity is often measured
by the ability to discern letters or numbers at a standardized
distance, such as on a Snellen eye chart.

• Factors Influencing Acuity:
• Retinal Resolution: The density of photoreceptor cells in the retina
affects how well details are resolved.
• Contrast Sensitivity: The ability to detect contrasts is directly
related to visual acuity.
• Lighting Conditions: Acuity decreases in low light conditions
(scotopic vision) as opposed to well-lit conditions (photopic
vision).
• Importance in Image Perception: Higher acuity allows for better
differentiation of small details, which is crucial in activities like reading
or recognizing faces. Reduced acuity can blur details and make it
difficult to distinguish between similar objects.
Some Visual Illusions
Visual illusions occur when there is a discrepancy between the physical
reality and the perception of an image. These illusions exploit the brain's
mechanisms for processing visual information, leading to misinterpretations.
• Visual illusions highlight how the brain fills in gaps, interprets
patterns, and makes assumptions based on context. They reveal the
cognitive processes underlying visual perception and how easily our
visual system can be tricked.
• Perceptual Grouping
• Perceptual grouping refers to the process by which the visual system
organizes elements in the visual field into coherent groups or patterns. This
concept is based on principles outlined by Gestalt psychology, which suggests
that the brain tends to organize visual elements into wholes rather than just a
collection of parts.
• Principles of Perceptual Grouping
• Proximity: Objects that are close to each other are perceived as belonging
together.
• Similarity: Objects that are similar in color, shape, or size are perceived as
part of a group.
• Continuity: The brain tends to follow lines or curves, perceiving them as
continuous patterns.
• Closure: The mind fills in gaps to perceive complete shapes even when parts
are missing.
• Connectedness: Elements that are physically connected are perceived as a
single unit.
• Perceptual grouping allows for efficient and meaningful
interpretation of complex scenes by organizing visual input into
structured, recognizable forms. This makes it easier to process and
understand images by focusing on grouped elements rather than
isolated details.
Image Quality
• Image Quality refers to how well an image represents the original
scene or object. It can be subjective (based on human perception) or
objective (based on measurable metrics).
• Common factors influencing image quality include:
• Resolution: Higher resolution generally means better quality as
more detail is captured.
• Contrast and Brightness: Proper contrast and brightness levels
ensure that the image is neither too dark nor too washed out.
• Compression Artifacts: Lossy compression can introduce artifacts
that degrade image quality.
• Blurring: Caused by motion, out-of-focus optics, or low-pass
filtering, blurring reduces image sharpness and clarity.
Noise in images
• Noise refers to unwanted random variations in pixel intensities that
obscure the true image content.
• Types of noise include:
• Gaussian Noise: Random noise with a normal distribution, often
caused by electronic fluctuations in sensors.
• Salt-and-Pepper Noise: Random occurrences of black and white
pixels, usually caused by transmission errors or faulty pixel sensors.
• Poisson Noise: Noise that follows a Poisson distribution, often
associated with photon counting in low-light imaging.
• Reducing Noise: Techniques like filtering (mean, median, Gaussian)
and advanced algorithms (wavelet-based, non-local means) are used
to reduce noise while preserving image details.
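As an illustration of the filtering techniques just mentioned, a median filter is particularly effective against salt-and-pepper noise, since the outlier pixel never survives the median of its neighborhood. A NumPy sketch (edge padding and the 3×3 window are choices of this example):

```python
import numpy as np

def median_filter(img, size=3):
    """Median filter: replaces each pixel with the median of its
    size x size neighborhood; removes impulse noise while preserving
    edges better than a mean filter."""
    img = np.asarray(img)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")   # replicate border pixels
    out = np.empty_like(img)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = np.median(padded[r:r + size, c:c + size])
    return out
```

An isolated 255 "salt" pixel in a flat region is one of nine values in every window it touches, so the median discards it completely.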
Generation of additive, zero-mean Gaussian noise
Step 1: Initialization

The algorithm begins by assuming that the image has a gray-level range
[0, G−1], where G is the number of gray levels (for an 8-bit image, G = 256,
so the range is [0, 255]).

A parameter σ is chosen, representing the standard deviation of the
Gaussian noise; the variance of the noise will be σ². Smaller values of σ
generate less noticeable noise.
Step 2: Generate Random Numbers
For each pair of horizontally neighboring pixels (x,y) and (x,y+1),
generate two independent random numbers r and φ, both uniformly
distributed in the range [0,1].
Step 3: Calculate Gaussian Noise Using the Box-Muller Transform
The two uniform random numbers are transformed into two independent,
zero-mean Gaussian samples with standard deviation σ:

z1 = σ · cos(2πφ) · √(−2 ln r)
z2 = σ · sin(2πφ) · √(−2 ln r)
Step 4: Add Gaussian Noise to Pixel Values
The two samples are added to the pair of neighboring pixels:

f′(x, y) = f(x, y) + z1
f′(x, y+1) = f(x, y+1) + z2
Step 5: Handle Overflow and Underflow
Any noisy value below 0 is set to 0, and any value above G−1 is set to
G−1, so that the result remains a valid gray level.
Step 6: Repeat for All Pixels
The algorithm repeats steps 2 to 5 for all pairs of horizontally
neighboring pixels in the image until every pixel has been processed.
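Steps 1-6 can be sketched in Python as follows. This is a minimal illustration, not the textbook's exact code: the guard against log(0), the rounding, and leaving an odd final pixel in a row unchanged are choices of this sketch.

```python
import math
import random

def add_gaussian_noise(img, sigma, G=256):
    """Additive zero-mean Gaussian noise via the Box-Muller transform:
    two uniform random numbers r, phi yield two independent N(0, sigma^2)
    samples, one for each pixel of a horizontally neighboring pair;
    results are clamped to the valid gray-level range [0, G-1]."""
    out = [row[:] for row in img]
    for row in out:
        for x in range(0, len(row) - 1, 2):
            r, phi = random.random(), random.random()
            r = max(r, 1e-12)                         # avoid log(0)
            mag = sigma * math.sqrt(-2.0 * math.log(r))
            z1 = mag * math.cos(2.0 * math.pi * phi)  # step 3
            z2 = mag * math.sin(2.0 * math.pi * phi)
            row[x] = min(max(round(row[x] + z1), 0), G - 1)      # steps 4-5
            row[x + 1] = min(max(round(row[x + 1] + z2), 0), G - 1)
    return out
```

Because the noise has zero mean, the average intensity of a large image is essentially unchanged after the noise is added.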
Color Images
• Physics of Color
• Color Perceived by Humans
• Color Spaces
• Palette Images
• Color Constancy
Physics of color
1. Nature of Light and Electromagnetic Spectrum
• Light as Electromagnetic Waves: Light is part of the electromagnetic
spectrum, which includes a range of waves from gamma rays to radio waves.
Visible light is the portion of the electromagnetic spectrum that the human
eye can detect, typically within the wavelength range of about 400 to 700
nanometers (nm).
Wavelengths and Colors: The different wavelengths of light correspond
to different colors. For instance:
• Violet: ~400 nm
• Blue: ~450 nm
• Green: ~500 nm
• Yellow: ~570 nm
• Orange: ~590 nm
• Red: ~650 nm and above
2. Interaction of Light with Matter:
• Reflection: When light hits an object, some of it is reflected off the surface.
The color of an object is determined by the wavelengths of light that are
reflected. For example, a red apple appears red because it reflects red
wavelengths and absorbs other wavelengths.

• Absorption: Objects absorb certain wavelengths of light, and this absorbed
energy can be converted into heat or other forms of energy. The absorbed
wavelengths are removed from the spectrum of light reaching our eyes, so the
remaining, reflected wavelengths determine the object's perceived color.
• Transmission and Refraction: Some materials allow light to pass through
them, which can bend or refract light. This bending of light can separate it
into its constituent colors, as seen in a prism. For example, when white light
passes through a prism, it disperses into a spectrum of colors, creating a
rainbow effect.

• Scattering: Light can scatter when it interacts with small particles in a
medium. Rayleigh scattering, which is stronger for shorter wavelengths
like blue, explains why the sky appears blue during the day and red during
sunrise or sunset.
Color perceived by humans
1. Mechanism of Color Perception
• Color perception begins with light entering the eye and being focused onto
the retina, where it is detected by photoreceptor cells called cones. Humans
typically have three types of cones, each sensitive to different wavelengths of
light:
• S-cones: Sensitive to short wavelengths (blue light).
• M-cones: Sensitive to medium wavelengths (green light).
• L-cones: Sensitive to long wavelengths (red light).
• When light of different wavelengths strikes the retina, these cones respond
with varying levels of activation, and the brain interprets the combination of
signals as different colors. The perception of color is therefore a result of the
brain processing the relative activation levels of these three types of cones.
2. Factors Influencing Color Perception
• Lighting Conditions: The color of light illuminating an object can alter its
perceived color. For example, an object might appear differently under
daylight versus artificial lighting due to the light's spectral composition.

• Surrounding Colors: The colors around an object can influence how its color is
perceived, a phenomenon known as color contrast.

• Color Constancy: Despite changes in lighting, the brain tends to maintain the
perceived color of objects as relatively constant, allowing us to recognize
familiar objects regardless of the lighting conditions.

• Individual Variations: Factors such as color blindness, where one or more
types of cones are missing or function differently, can alter color perception.
Color Spaces
1. Definition and Purpose of Color Spaces
• A color space is a specific organization of colors, defined mathematically, that
allows for the consistent reproduction of color across different devices and
media.
• Color spaces provide a framework for how colors can be represented and
manipulated in digital and physical formats.
• The primary purpose of a color space is to ensure that colors are interpreted
correctly, whether they are displayed on screens, printed on paper, or used in
various applications such as photography, video production, and graphic
design.
2. Common Color Spaces
• RGB (Red, Green, Blue): The RGB color space is widely used for digital
screens, such as those in computers, televisions, and cameras.
• In RGB, colors are created by combining red, green, and blue light at
various intensities.
• Each color is typically represented by a set of three values
corresponding to the intensity of red, green, and blue, ranging from 0
to 255 in an 8-bit system.
• RGB is an additive color model, meaning colors are added together to
create new ones, with white being the combination of all three
primary colors at full intensity.
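Additive mixing in 8-bit RGB is literally channel-wise addition, clipped at the maximum of 255. A toy illustration:

```python
def add_rgb(c1, c2):
    """Additive mixing: channel-wise sum, clipped to the 8-bit maximum."""
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

YELLOW = add_rgb(RED, GREEN)                 # red + green light
WHITE = add_rgb(add_rgb(RED, GREEN), BLUE)   # all three primaries at full intensity
```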
• CMYK (Cyan, Magenta, Yellow, Key/Black): CMYK is a subtractive
color space commonly used in color printing.
• In this model, colors are created by subtracting varying amounts of
cyan, magenta, yellow, and black from white light.
• Subtractive mixing works by removing (subtracting) certain
wavelengths of light, leaving others to be reflected, which results in
the perceived color.
• Unlike RGB, where adding colors results in white, adding all the colors
in CMYK results in black (or dark brown, which is why black is added
as a separate key color).
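A naive RGB-to-CMYK conversion on values in [0, 1] makes the subtractive relationship concrete. Real print workflows use device-specific ICC profiles, so this is only a sketch of the arithmetic:

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK on [0, 1] values: K is how far the brightest
    channel falls short of white; C, M, Y subtract what remains."""
    k = 1.0 - max(r, g, b)
    if k == 1.0:
        return 0.0, 0.0, 0.0, 1.0      # pure black: key ink only
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k
```

Note how pure red becomes full magenta plus full yellow: the two inks together absorb green and blue, leaving red to be reflected.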
• HSV/HSI (Hue, Saturation, Value/Intensity): The HSV or HSI color
space is designed to be more intuitive for human perception,
representing colors in terms of their hue (the type of color),
saturation (the vividness of the color), and value or intensity (the
brightness of the color).
• HSV is particularly useful in applications where color manipulation
needs to be more intuitive, such as in graphic design and image
editing.
• It allows users to easily adjust colors based on their characteristics
rather than their specific RGB values.
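Python's standard-library colorsys module converts between RGB and HSV (all channels on [0, 1]), which makes the "adjust by characteristic" workflow easy to demonstrate:

```python
import colorsys

# Decompose a color into hue, saturation, value; hue 0 is red,
# 1/3 is green, 2/3 is blue on colorsys's [0, 1] scale.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)   # pure red

# Halve the brightness without touching hue or saturation --
# awkward to do directly on RGB triples, trivial in HSV.
darker = colorsys.hsv_to_rgb(h, s, v * 0.5)
```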
3. Applications and Significance of Color Spaces
• Color spaces are crucial in various fields for accurate color
reproduction and manipulation.
• In digital imaging and photography, RGB is standard for displaying
images on screens, while printers use CMYK to accurately reproduce
colors on paper.
• Graphic designers often use the HSV color space to tweak colors
more intuitively when working in software like Adobe Photoshop.
• In television and film, color spaces like YUV or YCbCr are used to
separate brightness from color information, optimizing the
transmission and display of video content.
• Color management systems rely on standardized color spaces like
sRGB and Adobe RGB to ensure consistent color across different
devices and media, ensuring that what is seen on a monitor matches
what is printed or displayed elsewhere.
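The brightness/color separation can be illustrated with the ITU-R BT.601 luma coefficients. This is a full-range sketch; broadcast video adds offsets and scaling that are omitted here:

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 full-range RGB -> YCbCr on [0, 1] inputs: Y carries
    brightness, Cb/Cr carry blue- and red-difference color information
    (centered at 0 in this simplified form)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: green dominates
    cb = 0.564 * (b - y)                    # blue-difference chroma
    cr = 0.713 * (r - y)                    # red-difference chroma
    return y, cb, cr
```

Because any neutral gray has zero chroma, the two color channels can be subsampled or compressed aggressively with little visible loss, which is the optimization the text refers to.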
Palette images
• Palette images are a type of digital image that uses a limited set of
colors to represent the entire image.
• Instead of storing the color information for each pixel directly, palette
images use a palette or color table that lists all the colors used in the
image.
• Each pixel in the image then references an index in this palette, which
significantly reduces the amount of data needed to represent the
image.
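The palette-plus-indices representation can be sketched in a few lines of pure Python (the function names are illustrative):

```python
def to_palette(pixels):
    """Convert a 2-D grid of RGB tuples into (palette, index_grid):
    the palette lists each distinct color once, and every pixel stores
    only a small integer index into it."""
    palette, lookup, indices = [], {}, []
    for row in pixels:
        idx_row = []
        for color in row:
            if color not in lookup:          # first time we see this color
                lookup[color] = len(palette)
                palette.append(color)
            idx_row.append(lookup[color])
        indices.append(idx_row)
    return palette, indices

def from_palette(palette, indices):
    """Reverse lookup: map indices back to RGB pixels."""
    return [[palette[i] for i in row] for row in indices]
```

With at most 256 colors, each pixel needs one byte of index instead of three bytes of RGB, which is where the size saving comes from.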
Applications and Advantages
• Graphics and Web Design: Palette images are commonly used in graphic
design, especially for web graphics like icons, buttons, and logos, where a
limited color palette is sufficient. Formats like GIF (Graphics Interchange
Format) are popular examples that use palette images, allowing for small file
sizes and efficient transmission over the internet.

• Efficient Compression: Because the image is represented using a limited
palette, palette images can be highly compressed. This makes them ideal for
situations where file size is critical, such as in older video games, mobile
applications, or low-bandwidth environments.
• Animation: Palette images are also used in animations, where the same set of
colors is reused across multiple frames. The ability to change the palette
dynamically (palette swapping) without changing the underlying image data is
a technique often used in older video games and early computer graphics.
Color Constancy
• Color constancy refers to the phenomenon where the perceived
color of an object remains relatively constant under varying lighting
conditions.
• This ability of the human visual system to perceive consistent colors
despite changes in the illumination is critical for accurately identifying
and interacting with objects in different environments.
