Dip CCP Report
Maryam Asad, Abdul Ahad Iqbal, Zara, Muhammad Anees Akram Sindhu
Department of Computer Science, University of Management and Technology Lahore, 54770 Pakistan
Abstract
Smart city surveillance systems face critical challenges such as poor image quality caused by noise, motion
blur, and varying lighting conditions, which hinder reliable real-time monitoring and object recognition.
This project addresses these issues using classical digital image processing techniques without relying
on artificial intelligence, ensuring predictable and resource-efficient performance suitable for hardware-
constrained environments. The methodology incorporates several key stages: filtering, employing median
and Gaussian filters to reduce noise while preserving important image features; restoration, using Wiener
filtering and deblurring algorithms to recover images degraded by motion blur and environmental factors;
segmentation, utilizing edge detection methods like the Canny operator and region-based approaches such
as Watershed to isolate objects of interest; and object recognition, based on feature extraction of shape,
texture, and color for accurate identification. Results demonstrate significant improvements in image
clarity, noise reduction, and segmentation accuracy, enabling robust object detection under challenging
urban conditions. The optimized, lightweight algorithms balance computational demands and processing
speed, making the system viable for real-time deployment in surveillance networks. Enhanced image
quality and reliable object recognition facilitate timely and accurate situational awareness, crucial for
public safety and urban management. This approach’s practical implications include improved surveillance
efficacy, better resource utilization, and scalability in dynamic city environments, all while maintaining
transparency and avoiding ethical concerns associated with AI-based methods.
Keywords: Smart City Surveillance, Digital Image Processing, Noise Reduction, Image Restoration,
Segmentation, Object Recognition, Real-Time Monitoring.
Unlike artificial intelligence-based approaches that require large datasets and extensive computing power, classical digital image processing techniques employ deterministic algorithms such as filtering, restoration, segmentation, and feature extraction, which are mathematically precise and computationally efficient. These methods ensure the system remains reliable, transparent, and manageable, especially when deployed on hardware with limited resources.

In sum, digital image processing is the backbone of modern smart city surveillance, enabling the conversion of imperfect raw visual data into actionable intelligence that keeps cities safer and better managed.

3. Challenges in Digital Image Processing for Smart City Surveillance

Despite its importance, applying digital image processing in smart city surveillance faces several significant challenges that affect the accuracy, speed, and reliability of the system.

3.1. Noise and Environmental Distortions

One of the most pervasive challenges is noise in the captured images. Noise refers to unwanted random variations in pixel intensity that degrade the image quality. It can originate from various sources:

• Low light conditions: During nighttime or in poorly lit areas, cameras often struggle to capture clear images, producing grainy, noisy footage.

3.2. Limited Image Resolution

Low-resolution images lack the fine-grained information necessary to distinguish closely spaced objects, identify faces, or read vehicle license plates. Upscaling or super-resolution techniques can partially mitigate this issue but often introduce artifacts or inaccuracies, especially in real-time settings.

Balancing the need for high-resolution images with practical constraints such as bandwidth, storage capacity, and computational power is a constant challenge in smart city surveillance.

3.3. Real-Time Processing Constraints

A defining feature of smart city surveillance is real-time monitoring—the ability to process and analyze image data instantly or within a few seconds. Rapid processing allows authorities to respond promptly to incidents, potentially preventing accidents or crimes.

However, many digital image processing algorithms, especially those involving complex restoration or segmentation tasks, are computationally intensive. Performing these operations on high-resolution images at video frame rates requires substantial processing power and optimized algorithms.

Moreover, surveillance systems often rely on resource-constrained hardware such as embedded processors or edge devices that do not support heavy computations. This limitation forces designers to optimize algorithms to maintain a balance between image quality and processing speed.
4. Conflicting Requirements

The implementation of digital image processing in smart city surveillance frequently involves balancing conflicting requirements:

• High image quality vs. computational load: Enhancing image quality using advanced filters and restoration techniques improves detection accuracy but increases processing time and energy consumption.

• Data fidelity vs. storage/transmission efficiency: Capturing and storing high-quality images consumes significant bandwidth and storage space. Compression algorithms reduce size but may discard important details needed for analysis.

• Complexity vs. transparency: Complex image processing algorithms may yield better results but can be harder to debug, maintain, and explain to stakeholders, especially when transparency and reliability are essential.

• Robustness vs. adaptability: Systems must perform consistently in varying environmental conditions yet remain adaptable to dynamic changes in lighting, weather, or urban activity.

Designing a surveillance system that satisfactorily addresses these conflicting demands requires careful algorithm selection, hardware considerations, and system architecture design.

4.0.1. Dynamic and Complex Urban Environments

Smart cities are bustling with activity—pedestrians, vehicles, animals, shadows, reflections, and other moving objects create complex visual scenes. The variability and unpredictability of these environments pose additional difficulties:

• Changing lighting: Daylight, artificial street lights, shadows, and weather changes alter image brightness and contrast throughout the day.

• Motion blur: Fast-moving objects or camera vibrations cause blurring, obscuring details.

• Occlusions: Objects partially blocking each other complicate segmentation and recognition.

These factors necessitate adaptive and robust image processing techniques that can dynamically adjust to changing conditions while maintaining performance.

5. Objectives

The main goals of this project focus on enhancing the effectiveness of smart city surveillance systems through advanced digital image processing techniques. Specifically, the objectives include:

1. Improving Image Quality
Enhance the clarity and usability of surveillance images by reducing noise and correcting distortions, ensuring that the footage is reliable even under difficult conditions like low light and environmental interference.

2. Frequency Domain Filtering
Utilize Fourier transform-based methods to analyze and remove periodic noise from surveillance footage. This involves designing and applying frequency-domain filters such as Gaussian low-pass and Butterworth filters, which are critical not only for noise reduction but also for improving image restoration and compression outcomes.

3. Image Restoration and Reconstruction
Apply restoration techniques like Wiener filtering and deblurring algorithms to recover corrupted or degraded images. This process addresses real-world challenges such as motion blur and environmental degradation, ensuring the images are suitable for further analysis.

4. Image Segmentation
Evaluate and implement edge-based methods (e.g., Canny edge detection) and region-based techniques (e.g., watershed segmentation) to isolate objects of interest within images. Effective segmentation depends heavily on prior filtering and restoration, enabling accurate identification and tracking of pedestrians, vehicles, and other elements.

5. Image Compression
Analyze different compression algorithms (such as JPEG and wavelet-based methods) to balance storage and transmission efficiency with the preservation of image quality necessary for accurate recognition. Compression choices directly affect how much data can be stored and sent, and influence the performance of downstream tasks.

6. Accurate Object Recognition
Although optional, this objective focuses on enabling precise detection and classification of objects in surveillance images. Recognition depends on the quality of previous processing stages—filtering, restoration, and segmentation—to deliver reliable real-time monitoring results.
7. Achieving Real-Time Performance
Design the overall system to process and analyze image data quickly and efficiently, supporting immediate decision-making and response. This requires balancing computational demands with hardware limitations, ensuring the system can operate effectively in real-world urban environments.

6. Technology

…image quality, extract meaningful information, and support reliable object recognition in real-time environments.

7. Original Images
Mean (Averaging) Filter:

Mean(i, j) = \frac{1}{N} \sum_{k=1}^{N} P_k

4: Replace pixel (i, j) with the mean value
5: end for

PSNR Calculation:

2: Compute PSNR:

PSNR = 10 \cdot \log_{10}\left(\frac{MAX^2}{MSE}\right)

3: where MAX = 255 for 8-bit images
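To make the PSNR formula concrete, the following is a minimal MATLAB sketch (the input file name and the 3x3 kernel size are assumed illustrative values, not taken from the project code) that applies a mean filter and computes PSNR directly from the MSE:

% Minimal sketch: 3x3 mean filter and PSNR from MSE (assumed input file)
ref = rgb2gray(imread('frame.png'));          % hypothetical surveillance frame
h   = fspecial('average', [3 3]);             % 3x3 averaging kernel
out = imfilter(ref, h, 'replicate');          % mean-filtered image
mse = mean((double(ref(:)) - double(out(:))).^2);
psnr_db = 10 * log10(255^2 / mse);            % MAX = 255 for 8-bit images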
9.5. SSIM Calculation

Algorithm 5 Structural Similarity Index (SSIM)
1: Compute means:
   \mu_x = mean(x), \quad \mu_y = mean(y)
2: Compute variances and covariance:
   \sigma_x^2 = var(x), \quad \sigma_y^2 = var(y), \quad \sigma_{xy} = cov(x, y)
3: Compute SSIM:
   SSIM(x, y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}

Laplacian Sharpening
% Apply Laplacian filter to detect edges
lap = fspecial('laplacian', 0.2);
% Apply Laplacian filter to original grayscale image
sharp = imfilter(gray, lap, 'replicate');
% Subtract the Laplacian result from original to enhance edges (high-boost filtering)
sharpened = imsubtract(im2double(gray), im2double(sharp));
% Output: sharpened image with enhanced edges but noise may be amplified

Filter Type | Noise Type | PSNR (dB) | SSIM | Visual Impact
Median Filter | Salt & Pepper | 30.65 | 0.85 | Removes noise effectively, preserves edges
Laplacian Filter | No noise (sharpen) | – | – | Sharpens edges, increases detail
Table 1: Performance Evaluation of Spatial Filters

Trade-offs in Code Implementation

Aspect | Averaging Filter | Median Filter | Laplacian Filter
Edge Preservation | Poor — edges are blurred due to averaging | Good — preserves edges by selecting median | Enhances edges; edges become sharper
Computational Cost | Low — convolution with small kernel | Medium — sorting in each window | Low — linear filtering
Implementation Complexity | Simple with built-in convolution | Slightly more complex (median sorting) | Simple convolution and subtraction
Effect on Details | Blurs details along with noise | Preserves fine details | Enhances details but sensitive to noise
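The Table 1 figures can be reproduced with MATLAB's built-in psnr and ssim functions; the sketch below is an assumed setup (file name and salt-and-pepper noise density are illustrative, not the project's test data):

% Median filtering of salt & pepper noise, scored with psnr/ssim
gray  = rgb2gray(imread('frame.png'));        % hypothetical input frame
noisy = imnoise(gray, 'salt & pepper', 0.05); % assumed noise density
den   = medfilt2(noisy, [3 3]);               % 3x3 median filter
fprintf('PSNR = %.2f dB, SSIM = %.3f\n', psnr(den, gray), ssim(den, gray));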
• To gain control over the frequency components, especially when spatial filters produce poor results.

11.2. Types of Frequency Domain Filters

11.2.1. Low-Pass Filters
These filters allow low-frequency components to pass and block high frequencies.

• Ideal LPF: Sharp cutoff, passes frequencies inside a circle of radius D0.

• Gaussian LPF: Smooth transition, suppresses high-frequency noise gently.

11.2.2. High-Pass Filters
These filters block low frequencies and enhance high frequencies, often used for edge detection and sharpening.

• Gaussian HPF: Enhances fine details smoothly.

11.2.3. Band-Pass and Band-Reject Filters

• Band-Pass: Allows a specific range of frequencies.

• Band-Reject (Notch): Blocks a narrow range of frequencies (useful for removing periodic noise).

Output Image

12. Algorithms and Performance Metrics

12.1. Frequency Domain Filtering

Algorithm 6 General Frequency Domain Filtering
1: Input: Image I(x, y), Filter H(u, v)
2: Compute FFT: F(u, v) = \mathcal{F}[I(x, y)]
3: Multiply: G(u, v) = H(u, v) \cdot F(u, v)
4: Inverse FFT: g(x, y) = \mathcal{F}^{-1}[G(u, v)]
5: Output: Filtered Image g(x, y)

12.2. PSNR and SSIM Evaluation
Same PSNR and SSIM algorithms as used in spatial filtering (refer back to the Spatial Filtering section).

Filter Type | Noise Type | PSNR (dB) | SSIM | Visual Impact
Ideal HPF | Noisy Edges | 26.75 | 0.76 | Enhances edges sharply, but may introduce ringing
Notch Filter | Periodic Noise | 29.94 | 0.84 | Removes periodic artifacts like stripes or patterns
Table 3: Performance Evaluation of Frequency Filters

Annotated Pseudocode for Frequency Filters

Gaussian Low-Pass Filter
% Apply Gaussian LPF in frequency domain
F = fft2(im2double(image));
Fshift = fftshift(F);
H = fspecial('gaussian', size(F), 30);
H = H ./ max(H(:));   % normalize so the passband (DC) gain is 1
Filtered = Fshift .* H;
Result = real(ifft2(ifftshift(Filtered)));
% Output: Blurred image with reduced high-frequency noise
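The objectives also mention Butterworth filtering; the following sketch builds a Butterworth low-pass transfer function explicitly and applies it the same way as the Gaussian LPF above (the cutoff radius D0, the order n, and the variable gray are assumed, not taken from the project code):

% Butterworth low-pass filter built explicitly in the frequency domain
g = im2double(gray);                          % assumed grayscale input, as above
[M, N] = size(g);
[U, V] = meshgrid(-floor(N/2):ceil(N/2)-1, -floor(M/2):ceil(M/2)-1);
D  = sqrt(U.^2 + V.^2);                       % distance from spectrum centre
D0 = 40;  n = 2;                              % assumed cutoff radius and order
H  = 1 ./ (1 + (D ./ D0).^(2*n));
G  = fftshift(fft2(g)) .* H;
Result = real(ifft2(ifftshift(G)));           % smoothed image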
Notch Filter for Periodic Noise
% Identify and suppress periodic noise
H = ones(size(F));
center = floor(size(F)/2) + 1;   % centre of the shifted spectrum
% Manually zero-out notch regions around noise frequencies
H(center(1)+30, center(2)-20) = 0;
H(center(1)-30, center(2)+20) = 0;
Filtered = Fshift .* H;
Result = real(ifft2(ifftshift(Filtered)));
% Output: Image without periodic noise
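The hard-coded notch above zeroes only single frequency bins. A slightly more forgiving variant, sketched below with assumed peak coordinates and an assumed radius r, suppresses a small disc around each peak while reusing H, Fshift, and center from the block above:

% Zero a small disc of radius r around each assumed noise peak
[M, N] = size(F);
[U, V] = meshgrid(1:N, 1:M);                  % column and row coordinates
peaks  = [center(1)+30, center(2)-20;         % [row, col] of each peak
          center(1)-30, center(2)+20];
r = 5;                                        % assumed notch radius in pixels
for k = 1:size(peaks, 1)
    H((V - peaks(k,1)).^2 + (U - peaks(k,2)).^2 <= r^2) = 0;
end
Filtered = Fshift .* H;
Result = real(ifft2(ifftshift(Filtered)));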
Aspect | Gaussian LPF | Gaussian HPF | Notch Filter
Noise Removal | Good for high-frequency noise | Not suitable for random noise | Excellent for periodic noise
Edge Preservation | Poor (edges are smoothed) | Good (edges enhanced) | Depends on notch placement
Computational Cost | Medium — requires FFT/IFFT | Medium — FFT + filtering | High — requires precise frequency notch design
Implementation Complexity | Moderate — Gaussian masks easy to generate | Slightly higher (HPF mask design) | High — requires manual frequency analysis
Effect on Details | Blurs fine details | Enhances fine details, may amplify noise | Preserves details while removing specific artifacts
Usage Scenario | Denoising and smoothing | Edge enhancement | Removing periodic artifacts (e.g., stripes, grids)
Table 4: Trade-offs in Frequency Domain Filters

13.2.2. Lucy-Richardson Deblurring
An iterative deconvolution algorithm used for removing motion blur or out-of-focus blur, based on a maximum likelihood estimation approach.

Output Images

Figure 3: Image restoration using Wiener filtering and Lucy-Richardson deblurring
14.2. Lucy-Richardson Deblurring

Algorithm 8 Lucy-Richardson Algorithm for Deblurring
1: Initialize restored image estimate (usually uniform or the degraded image)
2: for each iteration do
3:   Convolve current estimate with PSF
4:   Compute ratio of degraded image to this convolution
5:   Convolve ratio with flipped PSF
6:   Update estimate by multiplying previous estimate with result of convolution
7: end for
8: Output restored image estimate after iterations

Restoration quality is measured using the mean squared error between the original image I and the restored image R:

MSE = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left( I(i, j) - R(i, j) \right)^2

Method | PSNR (dB) | SSIM | Visual Impact
Wiener Filter | 28.56 | 0.82 | Reduces noise and blur moderately; slight smoothing effect
Lucy-Richardson | 30.12 | 0.85 | Effectively removes motion blur; may amplify noise if iterations are too many
Table 5: Performance Evaluation of Restoration Techniques

15. Annotated Pseudocode for Each Method

Wiener Filtering
% Input: degraded image and estimated PSF
% Estimate noise-to-signal power ratio
% Compute Wiener filter in frequency domain
% Apply Wiener filter to degraded image
% Output: restored image with reduced noise and blur

Lucy-Richardson Deblurring
% Input: degraded image and PSF
% Initialize restored image estimate
% Repeat for a fixed number of iterations:
%   - Convolve estimate with PSF
%   - Calculate ratio of degraded image to convolution result
%   - Convolve ratio with flipped PSF
%   - Update estimate by multiplication
% Output: restored image with enhanced sharpness but possible noise amplification
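A compact MATLAB sketch of both restorations using the built-in deconvwnr and deconvlucy functions is shown below; the blur length and angle, the noise-to-signal ratio, the iteration count, and the file name are assumed illustrative values, not the project's actual parameters:

% Wiener and Lucy-Richardson restoration with built-in deconvolution
g       = im2double(rgb2gray(imread('frame.png')));  % hypothetical frame
PSF     = fspecial('motion', 15, 45);                % assumed motion-blur PSF
blurred = imfilter(g, PSF, 'conv', 'circular');      % simulate the degradation
NSR     = 0.01;                                      % assumed noise-to-signal ratio
restored_wiener = deconvwnr(blurred, PSF, NSR);      % Wiener deconvolution
restored_lucy   = deconvlucy(blurred, PSF, 10);      % 10 Lucy-Richardson iterations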
Aspect | Wiener Filter | Lucy-Richardson
Noise Handling | Good at reducing Gaussian noise; smooths image | Can amplify noise if iterations are excessive
Usage Scenario | Suitable when noise and blur statistics are known | Best for images with motion blur and known PSF
Table 6: Trade-offs in Image Restoration Techniques

16. Image Segmentation

Image segmentation is a fundamental step in computer vision and image analysis. It involves partitioning an image into meaningful regions to isolate objects or features. This process enables tasks such as object recognition, tracking, and classification by distinguishing distinct components in the image.

16.1. Why Image Segmentation is Used

• To isolate objects or regions of interest within an image for further analysis.
• To assist in recognition and classification tasks by separating meaningful structures.

• To enhance medical images for identifying anatomical structures or abnormalities.

• To prepare data for machine learning models by labeling pixels based on classes.

16.2. Types of Image Segmentation

16.2.1. Edge-Based Segmentation
This approach detects object boundaries by identifying abrupt changes in pixel intensity.

• Canny Edge Detector: A multi-stage algorithm that uses gradient calculation, non-maximum suppression, and hysteresis thresholding to extract precise edges.

16.2.2. Region-Based Segmentation
This method groups pixels with similar characteristics.

• Otsu Thresholding: Separates the image into foreground and background by maximizing inter-class variance.

• Watershed Algorithm: Views the grayscale image as a topographic surface and finds the “watershed lines” that separate distinct regions.

Output Images

17. Algorithms and Performance Metrics

17.1. Canny Edge Detection

Algorithm 10 Canny Edge Detection Algorithm
1: Apply Gaussian blur to smooth the image
2: Compute image gradient (intensity change)
3: Perform non-maximum suppression to thin edges
4: Apply double threshold to identify strong and weak edges
5: Track edge connections via hysteresis

17.2. Otsu Thresholding

Algorithm 11 Otsu’s Method for Global Thresholding
1: Compute histogram of grayscale image
2: for each possible threshold t do
3:   Compute intra-class variance for background and foreground
4: end for
5: Select threshold t∗ that minimizes intra-class variance
6: Binarize image using t∗

17.3. Watershed Segmentation

Algorithm 12 Watershed Segmentation for Region Separation
1: Binarize image using Otsu threshold
2: Compute distance transform of binary image
3: Invert distance map to simulate basins
4: Create markers via extended minima transform
5: Impose minima and apply watershed algorithm

Dice = \frac{2|GT \cap P|}{|GT| + |P|}
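A minimal MATLAB sketch of Algorithms 11 and 12 with built-in functions is given below; the input file name and the extended-minima depth are assumed values, and GT denotes a hypothetical ground-truth mask for the Dice score:

% Otsu binarization followed by marker-controlled watershed
gray = rgb2gray(imread('frame.png'));        % hypothetical input frame
bw   = imbinarize(gray, graythresh(gray));   % Otsu threshold (Algorithm 11)
D    = -bwdist(~bw);                         % inverted distance map = basins
mask = imextendedmin(D, 2);                  % markers via extended minima
D2   = imimposemin(D, mask);                 % impose minima at the markers
L    = watershed(D2);                        % watershed labels (Algorithm 12)
L(~bw) = 0;                                  % keep labels inside the foreground
P    = L > 0;                                % predicted object mask
% Dice against a hypothetical ground-truth mask GT of the same size:
% dice = 2*nnz(GT & P) / (nnz(GT) + nnz(P));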
Segmentation Method | Technique Type | IoU | Dice | Remarks
Table 7: Performance Evaluation of Image Segmentation Techniques

Trade-offs in Code Implementation

Aspect | Canny Edge Detection | Otsu Thresholding | Watershed Segmentation
Computational Cost | Low — gradient and thresholding | Very low — simple histogram analysis | Moderate — uses distance maps and minima imposition
Implementation Complexity | Simple (built-in function) | Very simple | Complex (requires multiple steps)
Usage Scenario | Highlighting contours in structured images | Foreground-background segmentation | Detailed object isolation in clustered regions

18. Annotated Pseudocode for Each Method

Canny Edge Detection
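A minimal sketch of Algorithm 10 using MATLAB's built-in edge function (the threshold pair and smoothing sigma are assumed illustrative values, not the project's settings):

% Canny edge detection with explicit thresholds and Gaussian sigma
gray  = rgb2gray(imread('frame.png'));          % hypothetical input frame
edges = edge(gray, 'canny', [0.05 0.15], 1.5);  % [low high] thresholds, sigma
imshow(edges);                                   % binary edge map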
19.2.2. Wavelet Compression
Wavelet compression uses multi-resolution analysis to represent images. It preserves better visual quality at higher compression ratios.

• Transforms image using Haar or other wavelets.

• High-frequency components (detail) can be selectively discarded.

Output Images

20. Algorithms and Performance Metrics

20.3. Compression Ratio

Algorithm 16 Compression Ratio Calculation
1: Compute:

\text{Compression Ratio} = \frac{\text{Original Size}}{\text{Compressed Size}}

2: A higher value indicates better compression

Wavelet (Haar, Level 2) | 10.8:1 | 0.72 | 3.8% | Smoother loss of detail, fewer artifacts
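Algorithm 16 amounts to a simple file-size ratio; a minimal MATLAB sketch with assumed file names and an assumed JPEG quality factor:

% Compression ratio from file sizes (Algorithm 16)
imwrite(gray, 'frame_q75.jpg', 'Quality', 75);  % assumed quality factor
orig = dir('frame.png');                        % hypothetical original file
comp = dir('frame_q75.jpg');
ratio = orig.bytes / comp.bytes;                % higher value = stronger compression
fprintf('Compression ratio = %.1f:1\n', ratio);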
.astype(np.uint8)
Image.fromarray(img_wavelet).save('wavelet_compressed.jpg')
% Output: image compressed using wavelets

Trade-offs in Code Implementation

Aspect | JPEG Compression | Wavelet Compression
Compression Ratio | High compression with adjustable quality | Moderate compression; more scalable with levels
Visual Quality | Can introduce blocking artifacts | Preserves smooth transitions better
Noise Robustness | Sensitive to noise; blocks may distort | Better at suppressing noise in high-frequency regions
Table 10: Discussion of Trade-offs in Compression Techniques

• To understand the impact of preprocessing steps on recognition accuracy.

22.2. Types of Object Recognition Models

22.2.1. Pre-trained Models

• YOLO (You Only Look Once): Real-time object detection model known for speed and decent accuracy.

• Faster R-CNN: Region-based Convolutional Neural Network with high accuracy but slower inference.

22.2.2. Custom-Trained Models
Models trained on domain-specific datasets, optimized for particular object categories or environments.

Output Images

Figure 6: Object Recognition Images of vehicles

Model | Accuracy (%) | Precision | Recall | Remarks
YOLOv5 (Pre-trained) | 85.4 | 0.88 | 0.83 | Fast inference; good for real-time applications
Faster R-CNN (Pre-trained) | 90.2 | 0.92 | 0.89 | Higher accuracy but slower; suitable for offline processing
Custom-Trained Model | 87.0 | 0.89 | 0.85 | Optimized for specific dataset; balanced speed and accuracy
Table 11: Performance Comparison of Object Recognition Models

Aspect | YOLOv5 (Pre-trained) | Faster R-CNN (Pre-trained) | Custom-Trained Model
Accuracy | Good accuracy for many classes | Higher accuracy, especially on small objects | Can be optimized for target dataset
Complexity | Lightweight, easy to deploy | Complex architecture, requires more resources | Requires training data and tuning
Flexibility | Fixed classes from pre-trained weights | Flexible for fine-tuning | Fully customizable for new classes
Application | Real-time detection tasks | High-accuracy offline tasks | Domain-specific recognition
Maryam Asad
Maryam Asad, a 7th-semester student, is a Junior IT Specialist at Eco Out-
sourcing with experience in Microsoft 365, Intune, SharePoint, Teams, and
CRM development. I am currently enhancing my skills through a focused 6-
month training plan covering security, automation, and IT service delivery.
Passionate about AI and cybersecurity, I am also working on research and
academic projects, including post-quantum cryptography and intelligent au-
tomation tools. I aim to bridge technical solutions with real-world busi-
ness needs.
Zara
I’m a 4th-semester BS Computer Science student at the University of Man-
agement and Technology. I’m passionate about coding, digital design, and
innovation. With a strong foundation in C++ and a knack for creativity, I’m
always looking for opportunities to learn, grow, and make a difference.
Anees Akram
Motivated and enthusiastic BS Computer Science student (4th Semester) at the
University of Management and Technology, with strong skills in C++ and dig-
ital design. Known for excellent communication, responsiveness, and eagerness
to learn. Seeking opportunities to grow in dynamic and challenging environ-
ments.
https://www.overleaf.com/read/mfvjqjnqgsxg#cc3708