

V V COLLEGE OF ENGINEERING
(Approved By AICTE, New Delhi and Affiliated To Anna University Chennai)
V V Nagar, Arasoor, Tisaiyanvilai, Sathankulam Taluk, Tuticorin District - 628 656.
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


CCS 349 IMAGE AND VIDEO ANALYTICS LAB


R 2021

Name :

Reg No :

Dept :

Batch :
Academic Year :

Staff Signature

College Vision and Mission Statement

Vision

Emerge as a premier technical institution of global standards, producing enterprising, knowledgeable engineers and entrepreneurs.

Mission

• Impart quality and contemporary technical education for rural students.
• Have state-of-the-art infrastructure and equipment for quality learning.
• Enable knowledge with ethics, values and social responsibilities.
• Inculcate innovation and creativity among students for contribution to society.

Vision and Mission of the Department of Computer Science and Engineering

Vision

Produce competent and intellectual computer science graduates by empowering them to compete globally towards professional excellence.

Mission

• Provide resources, environment and continuing learning processes for better exposure to the latest and contemporary technologies in Computer Science and Engineering.
• Encourage creativity, innovation and the development of self-employment through knowledge and skills, for contribution to society.
• Provide quality education in Computer Science and Engineering by creating a platform to enable coding, problem-solving, design, development, testing and implementation of solutions for the benefit of society.

Program Educational Objectives


Graduates can
• Apply their technical competence in computer science to solve real-world problems, with technical and people leadership.
• Conduct cutting-edge research and develop solutions to problems of social relevance.
• Work in a business environment, exhibiting team skills, work ethics, adaptability and lifelong learning.

PROGRAM OUTCOMES (POs):


PO1: Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals and an
engineering specialization to the solution of complex engineering problems.
PO2: Problem analysis: Identify, formulate, review research literature, and analyze complex engineering problems
reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.
PO3: Design/development of solutions: Design solutions for complex engineering problems and design system
components or processes that meet the specified needs with appropriate consideration for the public health and safety, and
the cultural, societal, and environmental considerations.
PO4: Conduct investigations of complex problems: Use research-based knowledge and research methods including
design of experiments, analysis and interpretation of data, and synthesis of the information to provide valid conclusions.
PO5: Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering and IT
tools including prediction and modeling to complex engineering activities with an understanding of the limitations.
PO6: The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal, health,
safety, legal and cultural issues and the consequent responsibilities relevant to the professional engineering practice.
PO7: Environment and sustainability: Understand the impact of the professional engineering solutions in societal and
environmental contexts, and demonstrate the knowledge of, and need for sustainable development.
PO8: Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the
engineering practice.
PO9: Individual and team work: Function effectively as an individual, and as a member or leader in diverse teams, and
in multidisciplinary settings.
PO10: Communication: Communicate effectively on complex engineering activities with the engineering community
and with society at large, such as, being able to comprehend and write effective reports and design documentation, make
effective presentations, and give and receive clear instructions.
PO11: Project management and finance: Demonstrate knowledge and understanding of the engineering and
management principles and apply these to one’s own work, as a member and leader in a team, to manage projects and in
multidisciplinary environments.
PO12: Life-long learning: Recognize the need for, and have the preparation and ability to engage in independent and
life-long learning in the broadest context of technological change.

Program Specific Outcomes (PSOs)

PSO1: Exhibit design and programming skills to build and automate business solutions using cutting edge
technologies.
PSO2: Strong theoretical foundation leading to excellence and excitement towards research, to provide elegant
solutions to complex problems.
PSO3: Ability to work effectively with various engineering fields as a team to design, build and develop system applications.

INDEX
EX NO.    DATE    TITLE    PAGE NO.    MARKS    SIGN
EX NO: T-PYRAMID OF AN IMAGE
DATE:

AIM:
To write a Python program for the T-pyramid of an image.

ALGORITHM:
1. First load the image.
2. Then construct the Gaussian pyramid with three levels.
3. For the Laplacian pyramid, the topmost level remains the same as in the Gaussian pyramid. Each remaining level is constructed, from top to bottom, by subtracting the expanded version of the level above from the corresponding Gaussian level.

PROGRAM:
import cv2

def build_t_pyramid(image, levels):
    pyramid = [image]
    for _ in range(levels - 1):
        image = cv2.pyrDown(image)
        pyramid.append(image)
    return pyramid

def main():
    image_path = "img.jpg"
    levels = 3
    original_image = cv2.imread(image_path)

    if original_image is None:
        print("Error: could not load the image")
        return

    t_pyramid = build_t_pyramid(original_image, levels)
    for i, level_image in enumerate(t_pyramid):
        cv2.imshow(f"Level {i}", level_image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()

OUTPUT:

RESULT:
Thus the Python program for the T-pyramid was implemented and the output was obtained successfully.

EX NO: QUAD TREE REPRESENTATION


DATE:

AIM:
To write a Python program for the quad tree representation of an image using the homogeneity criterion of equal intensity.

ALGORITHM:
1. Divide the current two-dimensional space into four boxes.
2. If a box contains one or more points, create a child object storing the two-dimensional space of the box.
3. If a box does not contain any points, do not create a child for it.
4. Recurse for each of the children.

PROGRAM:
import cv2
import numpy as np
import matplotlib.pyplot as plt
from operator import add
from functools import reduce

img = cv2.imread("img.jpg")

def split4(image):
    half_split = np.array_split(image, 2)
    res = map(lambda x: np.array_split(x, 2, axis=1), half_split)
    return reduce(add, res)

split_img = split4(img)
fig, axs = plt.subplots(2, 2)
axs[0, 0].imshow(split_img[0])
axs[0, 1].imshow(split_img[1])
axs[1, 0].imshow(split_img[2])
axs[1, 1].imshow(split_img[3])

def concatenate4(north_west, north_east, south_west, south_east):
    top = np.concatenate((north_west, north_east), axis=1)
    bottom = np.concatenate((south_west, south_east), axis=1)
    return np.concatenate((top, bottom), axis=0)

full_img = concatenate4(split_img[0], split_img[1], split_img[2], split_img[3])
plt.figure()
plt.imshow(full_img)
plt.show()
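The split above is a single level; the homogeneity criterion named in the aim requires recursion. A small sketch of that recursive construction (function name, threshold and stopping rule are illustrative choices, not part of the prescribed program):

```python
import numpy as np

def quadtree(block, threshold=0, min_size=2):
    """Recursively split a grayscale block until each leaf is
    homogeneous (max - min intensity <= threshold) or the block
    reaches min_size. Returns either a leaf mean value or a list
    of four children [NW, NE, SW, SE]."""
    h, w = block.shape
    if (int(block.max()) - int(block.min()) <= threshold
            or h <= min_size or w <= min_size):
        return float(block.mean())
    h2, w2 = h // 2, w // 2
    return [quadtree(block[:h2, :w2], threshold, min_size),   # NW
            quadtree(block[:h2, w2:], threshold, min_size),   # NE
            quadtree(block[h2:, :w2], threshold, min_size),   # SW
            quadtree(block[h2:, w2:], threshold, min_size)]   # SE
```

A uniform image collapses to a single leaf, while a mixed image produces four children, each of which is split again until its quadrant is homogeneous.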

OUTPUT:

RESULT:
Thus the Python program for quad tree representation was implemented and the output was obtained successfully.

EX NO: GEOMETRIC TRANSFORMS


DATE:

AIM:
To develop programs for the following geometric transforms:
(a) Rotation.
(b) Change of scale.
(c) Skewing.
(d) Affine transform calculated from three pairs of corresponding points.
(e) Bilinear transform calculated from four pairs of corresponding points.

ALGORITHM:
TRANSFORMATION MATRICES:
For each desired transformation, create a corresponding transformation matrix. For example:
1. Translation: create a 3×3 matrix with 1s on the diagonal and the translation values in the last column.
2. Rotation: compute the rotation matrix using trigonometric functions (sin and cos) and the given rotation angle.
3. Scaling: create a 3×3 matrix with scaling factors along the diagonal and 1 in the last row and column.
4. Shearing: create an affine transformation matrix with shear factors in the off-diagonal elements.
COMBINE TRANSFORMATION MATRICES:
5. Multiply the individual transformation matrices in the order you want to apply them. Matrix multiplication is not commutative, so the order matters. The combined matrix represents the sequence of transformations.
APPLY THE COMBINED TRANSFORMATION MATRIX:
In image processing, you can use libraries like OpenCV or Pillow to apply the combined transformation matrix to the image. For example, in OpenCV:
6. Convert the 3×3 matrix to a 2×3 matrix by removing the last row.
7. Use cv2.warpAffine() for affine transformations or cv2.warpPerspective() for projective transformations.
8. Provide the combined transformation matrix and the input image as arguments to apply the transformations.

PROGRAM:
import cv2
import numpy as np

def rotate_image(image, angle):
    height, width = image.shape[:2]
    rotation_matrix = cv2.getRotationMatrix2D((width / 2, height / 2), angle, 1)
    return cv2.warpAffine(image, rotation_matrix, (width, height))

# Usage
image = cv2.imread("img.jpg")
angle_degrees = 45
rotated = rotate_image(image, angle_degrees)
cv2.imshow("Rotated Image", rotated)
cv2.waitKey(0)
cv2.destroyAllWindows()

def scale_image(image, scale_x, scale_y):
    return cv2.resize(image, None, fx=scale_x, fy=scale_y)

# Usage
scale_factor_x = 1.5
scale_factor_y = 1.5
scaled = scale_image(image, scale_factor_x, scale_factor_y)
cv2.imshow("Scaled Image", scaled)
cv2.waitKey(0)
cv2.destroyAllWindows()

def skew_image(image, skew_x, skew_y):
    height, width = image.shape[:2]
    skew_matrix = np.float32([[1, skew_x, 0], [skew_y, 1, 0]])
    return cv2.warpAffine(image, skew_matrix, (width, height))

# Usage
skew_factor_x = 0.2
skew_factor_y = 0.1
skewed = skew_image(image, skew_factor_x, skew_factor_y)
cv2.imshow("Skewed Image", skewed)
cv2.waitKey(0)
cv2.destroyAllWindows()

def affine_transform(image, pts_src, pts_dst):
    matrix = cv2.getAffineTransform(pts_src, pts_dst)
    return cv2.warpAffine(image, matrix, (image.shape[1], image.shape[0]))

# Usage
src_points = np.float32([[50, 50], [200, 50], [50, 200]])
dst_points = np.float32([[10, 100], [200, 50], [100, 250]])
affine_transformed = affine_transform(image, src_points, dst_points)
cv2.imshow("Affine Transformed Image", affine_transformed)
cv2.waitKey(0)
cv2.destroyAllWindows()

def bilinear_transform(image, pts_src, pts_dst):
    matrix = cv2.getPerspectiveTransform(pts_src, pts_dst)
    return cv2.warpPerspective(image, matrix, (image.shape[1], image.shape[0]))

# Usage
src_points = np.float32([[56, 65], [368, 52], [28, 387], [389, 390]])
dst_points = np.float32([[0, 0], [300, 0], [0, 300], [300, 300]])
bilinear_transformed = bilinear_transform(image, src_points, dst_points)
cv2.imshow("Bilinear Transformed Image", bilinear_transformed)
cv2.waitKey(0)
cv2.destroyAllWindows()

OUTPUT:
Rotation:

Skewing:

Change Of Scale:

Affine transform calculated from three pairs of corresponding points:



Bilinear transform calculated from four pairs of corresponding points:

RESULT:
Thus the Python programs for geometric transforms were implemented and the output was obtained successfully.

EX NO: OBJECT DETECTION AND RECOGNITION


DATE:

AIM:
To develop a program to implement object detection and recognition.

ALGORITHM:
1. The first step is to have Python installed on your computer. Download and install Python 3 (e.g., 3.7.6) from the official Python website.
2. Once you have Python installed on your computer, install the following dependencies using pip:
TensorFlow:
$ pip install tensorflow
OpenCV:
$ pip install opencv-python
Keras:
$ pip install keras
ImageAI:
$ pip install imageai
3. Now download the TinyYOLOv3 model file that contains the classification model that will be used for object detection.
4. Now let's see how to actually use the ImageAI library. We need the necessary folders:
Object detection: root folder.
models: stores the pre-trained model.
input: stores the image file on which we want to perform object detection.
output: stores the image file with detected objects.



Input image:

5. Open your preferred text editor for writing Python code and create a new file detector.py.
6. Run the Python file detector.py.

PROGRAM:
# importing the required library
from imageai.Detection import ObjectDetection

# instantiating the class
detector = ObjectDetection()

# defining the paths
path_model = "yolo-tiny.h5"
path_input = "./Input/images.jpg"
path_output = "./Output/newimage.jpg"

# setting the model type to TinyYOLOv3
detector.setModelTypeAsTinyYOLOv3()
# setting the path of the model
detector.setModelPath(path_model)
# loading the model
detector.loadModel()
# calling the detectObjectsFromImage() function
detection = detector.detectObjectsFromImage(
    input_image=path_input,
    output_image_path=path_output
)

# iterating through the items found in the image
for eachItem in detection:
    print(eachItem["name"], " : ", eachItem["percentage_probability"])

OUTPUT:

1/1 [==============================] - ETA: 0s


1/1 [==============================] - 0s 393ms/step
car : 81.67955875396729
car : 86.47009134292603
car : 71.90941572189331
car : 51.41249895095825
car : 50.27420520782471
car : 54.530930519104004
person : 68.99164915084839
person : 85.42444109916687
car : 66.63046479225159
person : 73.05858135223389
person : 60.30835509300232
person : 74.38961267471313
person : 58.86450409889221
car : 82.88856148719788
car : 77.34288573265076

person : 69.11083459854126
person : 63.95843029022217
person : 62.82603144645691
person : 82.48097896575928
person : 84.3036949634552
person : 57.25393295288086
>>>

RESULT:
Thus the Python program for object detection and recognition was implemented and the output was obtained successfully.

EX NO: MOTION ANALYSIS USING MOVING EDGES


DATE:

AIM:
To develop a program for motion analysis using moving edges, and apply it to image sequences.

ALGORITHM:
1. Read the first frame of the video and convert it to grayscale.
2. For each subsequent frame, convert it to grayscale and apply Canny edge detection to both the previous and the current grayscale frames.
3. Compute the absolute difference between the two edge maps to isolate the moving edges.
4. Display the moving-edge image, update the previous frame, and repeat until the video ends or the user quits.

PROGRAM:
import cv2
import numpy as np

# Function to perform motion analysis using moving edges
def motion_analysis(video_path):
    cap = cv2.VideoCapture(video_path)

    # Read the first frame
    ret, prev_frame = cap.read()
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break

        # Convert the current frame to grayscale
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Perform Canny edge detection on both frames
        edges_prev = cv2.Canny(prev_gray, 50, 150)
        edges_curr = cv2.Canny(gray, 50, 150)

        # Compute frame difference to detect moving edges
        frame_diff = cv2.absdiff(edges_prev, edges_curr)

        # Display the moving edges
        cv2.imshow('Moving Edges', frame_diff)
        if cv2.waitKey(30) & 0xFF == ord('q'):
            break

        # Update the previous grayscale image
        prev_gray = gray.copy()

    cap.release()
    cv2.destroyAllWindows()

# Replace with your video file path
video_path = "Human Analytics video.mp4"
motion_analysis(video_path)

OUTPUT:

RESULT:
Thus the Python program for motion analysis using moving edges was implemented and the output was obtained successfully.

EX NO: FACIAL DETECTION AND RECOGNITION


DATE:

AIM:
To develop a program for facial detection and recognition.

ALGORITHM:
Face Detection:
The very first task we perform is detecting faces in the image or video stream. Now
that we know the exact location/coordinates of face, we extract this face for further
processing ahead.
Feature Extraction:
Now that we have cropped the face out of the image, we extract features from it. Here
we are going to use face embeddings to extract the features out of the face. A neural network
takes an image of the person’s face as input and outputs a vector which represents the most
important features of a face. In machine learning, this vector is called embedding and thus
we call this vector as face embedding.
ARCHITECTURE:

Face Recognition:
Face recognition technology is a method of identifying or confirming an individual’s
identity using their face. It operates through biometric analysis, which involves measuring
and analysing specific biological characteristics.
1. Collect face images using OpenCV and save them in a folder.
2. Train an image classification model using Teachable Machine, a web-based tool by Google.
3. Download the model in Keras format and load it in Python.
4. Detect faces from a webcam and predict their names using the trained model.
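The face-embedding idea described above can be illustrated with a small comparison step. The vectors, names and threshold below are made up for illustration; a real system would obtain embeddings from a trained network:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query_embedding, known_embeddings, threshold=0.6):
    """Return the name of the most similar stored face embedding,
    or None if no stored embedding exceeds the threshold."""
    best_name, best_score = None, threshold
    for name, emb in known_embeddings.items():
        score = cosine_similarity(query_embedding, emb)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

The lab program below takes a different route (a Teachable Machine classifier rather than embedding comparison), but the matching principle is the same: map a face to a vector, then compare vectors.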

PROGRAM:
Face Detection:
import cv2

# Load the pre-trained face detection classifier
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

# Load the image
image_path = 'img1.jpg'  # Replace with the path to your image
image = cv2.imread(image_path)

# Convert the image to grayscale
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces in the image
faces = face_cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))

# Draw rectangles around the detected faces
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

# Display the result
cv2.imshow('Facial Detection', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Face Recognition:
Datacollect.py:
import cv2
import os

video = cv2.VideoCapture(0)
facedetect = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

count = 0
nameID = str(input("Enter Your Name: ")).lower()
path = 'images/' + nameID
isExist = os.path.exists(path)

if isExist:
    print("Name Already Taken")
    nameID = str(input("Enter Your Name Again: "))
else:
    os.makedirs(path)

while True:
    ret, frame = video.read()
    faces = facedetect.detectMultiScale(frame, 1.3, 5)
    for x, y, w, h in faces:
        count = count + 1
        name = './images/' + nameID + '/' + str(count) + '.jpg'
        print("Creating Images........" + name)
        cv2.imwrite(name, frame[y:y + h, x:x + w])
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 3)
    cv2.imshow("WindowFrame", frame)
    cv2.waitKey(1)
    if count > 500:
        break

video.release()
cv2.destroyAllWindows()

test.py:
import numpy as np
import cv2
from keras.models import load_model

facedetect = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)
cap.set(3, 640)
cap.set(4, 480)
font = cv2.FONT_HERSHEY_COMPLEX

model = load_model('keras_model.h5', compile=False)

def get_className(classNo):
    if classNo == 0:
        return "Paranjothi Karthik"
    elif classNo == 1:
        return "virat"

while True:
    success, imgOrignal = cap.read()
    faces = facedetect.detectMultiScale(imgOrignal, 1.3, 5)
    for x, y, w, h in faces:
        crop_img = imgOrignal[y:y + h, x:x + w]
        img = cv2.resize(crop_img, (224, 224))
        img = img.reshape(1, 224, 224, 3)
        prediction = model.predict(img)
        classIndex = int(np.argmax(prediction))  # index of the most probable class
        probabilityValue = np.amax(prediction)
        cv2.rectangle(imgOrignal, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.rectangle(imgOrignal, (x, y - 40), (x + w, y), (0, 255, 0), -2)
        cv2.putText(imgOrignal, str(get_className(classIndex)), (x, y - 10), font,
                    0.75, (255, 255, 255), 1, cv2.LINE_AA)
        cv2.putText(imgOrignal, str(round(probabilityValue * 100, 2)) + "%", (180, 75),
                    font, 0.75, (255, 0, 0), 2, cv2.LINE_AA)
    cv2.imshow("Result", imgOrignal)
    k = cv2.waitKey(1)
    if k == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
OUTPUT:

RESULT:
Thus the Python program for facial detection and recognition was implemented and the output was obtained successfully.

EX NO: EVENT DETECTION IN VIDEO SURVEILLANCE SYSTEM
DATE:

AIM:
To write a program for event detection in a video surveillance system.

ALGORITHM:

1. Preprocessing:
• This stage involves cleaning and preparing the data from sensors such as cameras. This might include noise reduction or format conversion.
2. Background Modeling:
• This step establishes a baseline for "normal" activity in the scene. It can use techniques like:
• Frame differencing: compares consecutive video frames to detect changes (movement).
• Statistical methods: builds a model of the background based on pixel-intensity variations over time.
3. Object Detection and Tracking:
• This stage identifies and tracks objects of interest (people, vehicles) in the scene. Common techniques include:
• Background subtraction: isolates foreground objects from the background model.
• Machine learning: employs algorithms like Support Vector Machines (SVMs) or Convolutional Neural Networks (CNNs) to identify objects based on training data.
4. Event Definition and Classification:
• Here, the system analyzes object behavior and interactions to define events. This might involve:
• Motion analysis: tracks object movement patterns and speed.
• Object interaction: analyzes how objects interact with each other or the environment (e.g., entering restricted zones).
• Classification algorithms (e.g., decision trees, rule-based systems) then categorize these events (loitering, fighting, etc.).
5. Decision Making and Alerting:
• Finally, the system evaluates the classified event's severity and triggers pre-defined actions based on rules. This might involve:
• Generating alerts for security personnel.
• Recording video footage of the event.

PROGRAM:

import cv2

# Initialize video capture
video_capture = cv2.VideoCapture("human surveillance.mp4")  # Replace with your video file

# Initialize background subtractor
bg_subtractor = cv2.createBackgroundSubtractorMOG2()

while video_capture.isOpened():
    ret, frame = video_capture.read()
    if not ret:
        break

    # Apply background subtraction
    fg_mask = bg_subtractor.apply(frame)

    # Apply thresholding to get a binary mask
    _, thresh = cv2.threshold(fg_mask, 50, 255, cv2.THRESH_BINARY)

    # Find contours
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for contour in contours:
        # Filter contours based on area (adjust the threshold as needed)
        if cv2.contourArea(contour) > 100:
            # Draw a bounding box around detected objects or events
            x, y, w, h = cv2.boundingRect(contour)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    # Display the processed frame
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release video capture and close OpenCV windows
video_capture.release()
cv2.destroyAllWindows()

OUTPUT:

RESULT:
Thus the Python program for event detection in a video surveillance system was implemented and the output was obtained successfully.
