
Project Report 2024

SMART GLASS FOR BLIND

PROJECT REPORT
A report submitted in partial fulfillment of the requirements for the award of the degree of

BACHELOR OF TECHNOLOGY
in

ELECTRICAL AND ELECTRONICS ENGINEERING

Submitted by
ARJUN G M
GOPIKA G S
MELIN SIMON
SRAVAN MURALI

MAY 2024

DEPARTMENT OF ELECTRICAL AND ELECTRONICS

COLLEGE OF ENGINEERING ATTINGAL

(Established by IHRD Govt. of Kerala Undertaking)


Phone: 0470-2627400

COLLEGE OF ENGINEERING ATTINGAL


(Established by IHRD, Govt. Of Kerala Undertaking)

CERTIFICATE

Department of Electrical and Electronics Engineering

This is to certify that the report entitled “SMART GLASS FOR BLIND”, submitted by
ARJUN G M (CEA20EE011), GOPIKA G S (CEA20EE015), MELIN SIMON
(CEA20EE017) and SRAVAN MURALI (CEA20EE023), is a bona fide record of the project
done by them under our guidance and supervision towards the partial fulfillment of the
curriculum requirements in Electrical and Electronics Engineering for the Degree of Bachelor
of Technology of the APJ Abdul Kalam Technological University (KTU). This report, in any
form, has not been submitted to any other University or Institute for any purpose.

Head of the Department: Project Guide:

Sri. Sreejith S Smt. Sneha V L

Assistant Professor Assistant Professor

Department of EEE Department of EEE

College of Engineering Attingal College of Engineering Attingal



ACKNOWLEDGEMENT

On this occasion of presenting the project report, we wish to express our deep and profound
gratitude to the many people who have contributed to the completion of the project. We would
like to express our heartfelt thanks to Dr. Vrinda V Nair, Principal, College of Engineering
Attingal, for her unwavering support. We would like to thank Sri. Sreejith S, Head of the
Department of Electrical and Electronics Engineering, for all the help and support rendered,
without which this effort would not have seen the light of day. It is a pleasure to convey our
sincere gratitude to our project guide, Smt. Sneha V L, for her valuable guidance in selecting
this project and for her wise suggestions and support in difficult situations.

We also thank our families and friends for their suggestions, criticism and assistance towards
the improvement and successful completion of this report. Finally, we thank the Almighty for
His blessings throughout our work.

ARJUN G M(CEA20EE011)
GOPIKA G S(CEA20EE015)
MELIN SIMON(CEA20EE017)
SRAVAN MURALI(CEA20EE023)


DECLARATION

We hereby declare that the project report “SMART GLASS FOR BLIND”, submitted in
partial fulfillment of the requirements for the award of the degree of Bachelor of Technology
of the APJ Abdul Kalam Technological University, Kerala, is a bona fide work done by us
under the supervision of Sri. Sreejith S of the Department of Electrical and Electronics
Engineering. This submission represents our ideas in our own words; where ideas or words
of others have been included, we have adequately and accurately cited and referenced the
original sources. We also declare that we have adhered to the ethics of academic honesty and
integrity and have not misrepresented or fabricated any data, idea, fact or source in our
submission. We understand that any violation of the above will be cause for disciplinary
action by the Institute and/or the University, and can also evoke penal action from sources
which have not been properly cited or from whom proper permission has not been obtained.

ATTINGAL
ARJUN G M(CEA20EE011)
GOPIKA G S(CEA20EE015)
MELIN SIMON(CEA20EE017)
SRAVAN MURALI(CEA20EE023)


ABSTRACT

These "Smart Glasses" are designed to help the blind people to read the typed text which is
written in the English language. These kinds of inventions consider a solution to motivate blind
students to complete their education despite all their difficulties. Its main objective is to develop
a new way of reading texts for blind people and facilitate their communication. The first task of
the glasses is to scan any object and convert it into audio text, so the person will listen to the
audio through a headphone that's connected to the glasses. The second task is to translate the
whole text or some words of it by pressing a button that is connected to the glasses as well. The
glasses used many technologies to perform its tasks which is gTTS and Google translation.
Detecting the text in the image was done using the OpenCV. In order to convert the text into
speech, it used Text to Speech technology (gTTS). For translating the text, the glasses used
Google translation API. The glasses are provided by controller, measure the required distance
between the user and the object that has an image to be able to take a clear picture. The picture
will be taken when the user presses the button. All the computing and processing operations
were done using the Raspberry Pi version 5. However, the glasses have some drawbacks such as:
supporting only the English language and the maximum distance of capturing the images is
between 40-150 cm. As a future plan, it is possible to support many languages and enhance the
design to make it smaller and more comfortable to wear. “Smart Glasses” encourage blind
people or people with vision difficulties to learn and succeed in many different fields especially
in educational field.

ARJUN G M (CEA20EE011)
GOPIKA G S (CEA20EE015)
MELIN SIMON (CEA20EE017)
SRAVAN MURALI (CEA20EE023)


LIST OF FIGURES

Sl. No.   Figure    Content                                                             Page No.

1 6.1.1 Glass…………………………………………………………... 6

2 6.1.2 Controller………………….…………………………………… 6

3 6.1.3 Webcam………………………………………………………… 7

4 6.1.4 Headphone…………………………………….………………… 7

5 6.1.5 Powerbank……………………………………………………… 8

6 6.1.6 SD card……………………………….…………………………. 8

7 6.1.7 Pushbutton………………………………..……………………... 8

8 7.1 Flowchart demonstration……………….………………………. 10

9 7.2 Initial design…………………………………………………….. 10

10 8.1 Block diagram…………………….……………………………... 11

11 9.1 Pin diagram of Raspberry pi 5……………………..……………. 12

12 11.1 Final prototype……………………….………………………..... 20


LIST OF TABLES

Sl. No.   Table     Content                                                             Page No.

1         12.1      Cost estimation………………………………………... 21


LIST OF CONTENTS

Sl.No. TITLE Page No

1 INTRODUCTION………………………………………………….1

2 OBJECTIVE………………………………………………………..2

3 SMART GLASS FOR BLIND………….……………………….....3

4 LITERATURE SURVEY……………………………………….….4

5 FUNCTIONAL & NON-FUNCTIONAL SPECIFICATIONS…....5

6 COMPONENTS …………………………………………….……..6

6.1. HARDWARE……………………………………………...6

6.2. SOFTWARE……………………………………………....9

7 DESIGN METHODOLOGY….…………………………………..10

8 EXPERIMENTAL SETUP…………………………………….....11

9 PIN DIAGRAM…………….…….…….……………………........12

10 PROGRAMMING CODE………………………………………...13

11 FINAL PROTOTYPE……………………………………………..20

12 COST ESTIMATE….……………………………………………..21

13 FEATURES………………………………………………….........22

14 FUTURE DEVELOPMENTS……………...…………………......23

15 APPLICATIONS…………………………………………………..24

16 MERITS AND LIMITATIONS……………......……………….....25

16.1. MERITS…………………………………………………25

16.2. LIMITATIONS………………………………………….25

17 CONCLUSION…………………………………………………....26

18 REFERENCE………………………………………………….......27


CHAPTER 1
INTRODUCTION

In our lives, many people suffer from various diseases or disabilities. In India there are an
estimated 4.95 million blind people (0.36% of the total population), 35 million visually
impaired people (2.55%), and 0.24 million blind children. Cataract and refractive error remain
the leading causes of blindness and visual impairment, respectively, in India. These people
need help to make their lives easier and better. The main goal of “Smart Glasses” is to help
blind people, and people who have vision difficulties, by introducing a technology that enables
them to read typed text. The glasses are equipped to scan any written text and convert it into
audio; they can also detect any object in front of the person. The goal of “Smart Glasses” is to
help these people in different aspects of life. For example, the glasses are especially helpful in
the field of education: blind people and people with vision difficulties can read, study and
learn from any printed text image. “Smart Glasses” encourage blind people, and people with
vision difficulties, to learn and succeed in many different fields. The main reason for
implementing “Smart Glasses” for blind people was to show that blind people and people with
vision difficulties have the chance to live a normal life among sighted people and study in any
school or university without needing help all the time. With “Smart Glasses”, the percentage
of educated people will increase.


CHAPTER 2
OBJECTIVES

The glasses are designed to act as the eyes of blind people, and of people who suffer from
vision difficulties, making their lives easier so that they can continue living as any other
person and pursue their goals and dreams. “Smart Glasses” can read any English text image
by converting the text in the image into audio. They also detect any object in front of the user,
who hears the result through headphones.

• Convert printed text to audio.

• Inform the user of the location of the green zone.

• Make the user's life easier so that they can live a normal life.

• Raise education levels, since “Smart Glasses” will help people with vision difficulties study
alongside sighted people in any school or university.

• Warn the user of any obstacle in front of them.


CHAPTER 3
SMART GLASS FOR BLIND

In our lives, many people suffer from various diseases or disabilities. These people need help
to make their lives easier and better. The main goal of “Smart Glasses” is to help blind people,
and people who have vision difficulties, by introducing a technology that enables them to read
typed text. The glasses are equipped to scan any written text and convert it into audio. The
goal of “Smart Glasses” is to help these people in different aspects of life. For example, the
glasses are especially helpful in the field of education: blind people and people with vision
difficulties can read, study and learn from any printed text image. “Smart Glasses” encourage
blind people, and people with vision difficulties, to learn and succeed in many different fields.

The main reason for implementing “Smart Glasses” for blind people was to show that blind
people and people with vision difficulties have the chance to live a normal life among sighted
people and study in any school or university without needing help all the time.

The main aim of the project is to design a voice-based alerting system for blind people. The
glasses use an algorithm to convert spatial and visual information into audio. The camera
captures pictures of the words that the user wants to read and reads them to the user via the
earpiece. Sound is transmitted through cochlear headphones, which improve the safety of the
user.


CHAPTER 4
LITERATURE SURVEY

[1] Wang, T., Wu, D. J., Coates, A., & Ng, A. Y. (2012, November). End-to-end text
recognition with convolutional neural networks. In Pattern Recognition (ICPR), 2012 21st
International Conference on (pp. 3304-3308). IEEE.

For all human beings, vision is a primary requirement for surviving in a fast-changing
environment. Scientific glasses exist to improve the eyesight of visually impaired people, but
blind people need a smart glass to guide them towards independence. This paper proposes a
method to support visually challenged people. Existing methods can identify obstacles,
staircases and water proximity, separately as well as combined. The authors extend existing
smart-aid techniques for obstacle detection with fire detection and background detection.

[2] Koo, H. I., & Kim, D. H. (2013). Scene text detection via connected component clustering
and non-text filtering. IEEE Transactions on Image Processing, 22(6), 2296-2305.

The main objective is to develop smart glasses that use Internet of Things and machine
learning methodologies to aid individuals who are visually impaired or experiencing
vision-related conditions such as glaucoma and diabetic retinopathy. The system's main
components are a Pi camera, a Bluetooth or Wi-Fi module and a microcontroller. Using input
from the camera, the system can translate text into speech so that a blind person can read like
a sighted person without needing the Braille writing system. A facial recognition system uses
a photograph or video to create a mapping of facial features. With audio output, the system
can guide the user, enabling visually impaired individuals to navigate their environment
autonomously, without needing assistance from another person.

[3] Foundation, R. P. (n.d.). Teach, Learn, and Make with Raspberry Pi. Retrieved from
https://www.raspberrypi.org/

Prime examples of smart wearable technologies are ‘Smart Glasses’ and ‘Smart Watches’. The
integration of smart wearables into human well-being is becoming a reality these days. The
proposed smart glass aims to increase efficiency and productivity, and to interlink computing
devices with our everyday lives, by presenting important information directly in front of the
wearer's eyes: for example, navigation information and directions, critical data during medical
operative procedures, or gaming controls. Smart glasses are also becoming useful for visually
and hearing-impaired people.


CHAPTER 5
FUNCTIONAL & NON-FUNCTIONAL SPECIFICATIONS

5.1 FUNCTIONAL SPECIFICATIONS

Text Identification:
Trigger Mechanism: Activated when the capture button is pressed.
Image Capture: Glasses capture an image using built-in cameras.
Text Conversion: Utilizes optical character recognition to convert text within the captured
image into audio text.
Audio Output: Translates the identified text into audio, providing spoken information to the
user.
Audio Feedback: Provides audio feedback, conveying information about the identified
object, aiding navigation for the visually impaired.

5.2 NON-FUNCTIONAL SPECIFICATIONS

Performance: The glasses must be able to perform properly and extract the text from any
image.
Reliability: The glasses must achieve high accuracy in identifying the text in the image.
Flexibility: The glasses can be used by all people and in different places such as college,
school, hospital and even in the streets.
Scalability: The glasses are designed to take a clear picture at a distance between 40 and
150 cm.
Usability: The glasses are light, safe and easily wearable.


CHAPTER 6
COMPONENTS

6.1. HARDWARE:

1. GLASS

Fig.6.1.1: Glass

2. CONTROLLER (Raspberry Pi 5)

Fig.6.1.2: Controller

The Raspberry Pi 5 is the latest release in the Raspberry Pi range. In comparison with its
predecessor, it comes with a faster 2.4 GHz processor. The Raspberry Pi is a credit-card-sized
computer. It needs to be connected to a keyboard, mouse, display and power supply, and to an
SD card with an installed operating system. The Raspberry Pi is a low-cost embedded system
that can perform many significant tasks. It can run as a no-frills PC, a pocketable coding
computer, a hub for home-made hardware and more.


3. WEBCAM

Fig.6.1.3: Webcam

The webcam has a view angle of 60° with a fixed focus. It can capture images with a
maximum resolution of 1280 x 720 pixels. It is compatible with most operating system
platforms, such as Linux, Windows and macOS, and has a USB port and a built-in mono
mic. In this project, the webcam serves as the eyes of the person wearing the “Smart
Glasses”: it captures a picture when the button is pressed, so that the text in the image can
be detected and recognized.

4. HEADPHONES

Fig.6.1.4: Headphone

The headphones let the user listen to the text that has been converted to audio after being
captured by the camera, or to the translation of that text. The headphones are small, light
and connected to the glasses, so the user need not worry about losing them or being
bothered by wearing them.


5. POWER BANK

Fig.6.1.5: Powerbank

Blind students will use the “Smart Glasses” throughout the university day, and the battery
supplied with the Raspberry Pi is not helpful because the student needs to move from one
class to another. The updated “Smart Glasses” therefore use a 5 V, 2.5 A power bank, so the
student can power the device anywhere in the university or school.

6. SD CARD

Fig.6.1.6: SD card

7. PUSH BUTTON

Fig.6.1.7: Push button


6.2. SOFTWARE:

1. RASPBIAN
Raspbian is a free operating system based on Debian, optimized for the Raspberry Pi
hardware. An operating system is the set of basic programs and utilities that make the
Raspberry Pi run. Raspbian is not affiliated with the Raspberry Pi Foundation; it was
created by a small, dedicated team of developers who are fans of the Raspberry Pi
hardware and of the educational goals of the Raspberry Pi Foundation.

2. OPENCV
OpenCV-Python is a library of Python bindings designed to solve computer vision
problems. OpenCV is a vast library that provides functions for image and video
operations. With OpenCV, we can capture video from a camera: it lets you create a
video capture object for the webcam and then perform the desired operations on the
video. Processing a video means performing operations on it frame by frame. A frame
is simply the state of the video at a single point in time; there may be many frames
even in a single second, and each frame can be treated like an image.

3. gTTS
Several APIs are available to convert text to speech in Python. One of them is the
Google Text-to-Speech API, commonly known as the gTTS API. gTTS is a very
easy-to-use tool that converts entered text into audio, which can be saved as an MP3
file. The gTTS API supports several languages including English, Hindi, Tamil,
French, German and many more. The speech can be delivered at either of two
available speeds, fast or slow. However, as of the latest update, it is not possible to
change the voice of the generated audio.


CHAPTER 7
DESIGN METHODOLOGY

Fig 7.1: Flowchart demonstration

Fig 7.2: Initial design


CHAPTER 8
EXPERIMENTAL SETUP

The user wears the glasses, which the team calls “Smart Glasses”, connected to a
Raspberry Pi 5. If the text lies between 40 and 150 cm away, the glasses take a picture of
it using the webcam. The text is then identified with OCR and finally converted to a voice
with gTTS, which the user hears through a headphone.

The webcam serves as the eyes of the person wearing the “Smart Glasses”: it captures a
picture when the button is pressed, so that the text in the image can be detected and
recognized. The headphones let the user listen to the text after it has been converted to
audio, or to its translation. The camera also captures any object in front of the person; the
object is detected and announced by voice via the headphones.
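The capture gate and the capture-OCR-speech chain described above can be sketched as small helpers (the function names are hypothetical; in the real device the `ocr` and `speak` callables are pytesseract and a text-to-speech engine, as in the programming code chapter):

```python
# Capture window from the specification: a clear picture needs 40-150 cm
MIN_RANGE_CM, MAX_RANGE_CM = 40, 150

def in_capture_range(distance_cm):
    """Return True when the page sits inside the 40-150 cm capture window."""
    return MIN_RANGE_CM <= distance_cm <= MAX_RANGE_CM

def read_page(frame, ocr, speak):
    """Run OCR on a captured frame and voice each non-empty line of text."""
    lines = [ln for ln in ocr(frame).split("\n") if ln.strip()]
    for ln in lines:
        speak(ln)   # e.g. gTTS or pyttsx3 through the headphones
    return lines
```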

Fig.8.1: Block diagram


CHAPTER 9
PIN DIAGRAM

One of the powerful features of the Raspberry Pi is its row of GPIO (general-purpose
input/output) pins, and the GPIO pinout is an interactive reference to these pins. The board has
two 5 V pins and two 3V3 pins, as well as several ground pins (0 V); all of these are
unconfigurable. A GPIO pin can be designated as an output and set to 3V3 (high) or 0 V
(low), or designated as an input and read as 3V3 (high) or 0 V (low); internal pull-up or
pull-down resistors can be used. The push button is connected between GPIO 17 and ground.
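The button wiring above can be read with the gpiozero library, as in this sketch (the function name is our own; gpiozero enables the internal pull-up by default, which suits a button wired from GPIO 17 to ground):

```python
BUTTON_PIN = 17  # push button wired between GPIO 17 and ground

def wait_for_capture_press():
    """Block until the capture button on GPIO 17 is pressed."""
    from gpiozero import Button  # imported here: works only on the Pi itself
    button = Button(BUTTON_PIN)  # input pin with internal pull-up enabled
    button.wait_for_press()

if __name__ == "__main__":
    wait_for_capture_press()
    print("Capture button pressed")
```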

Fig 9.1: Pin diagram of Raspberry pi 5


CHAPTER 10
PROGRAMMING CODE

# Import packages
import os
import cv2
import numpy as np
#from picamera.array import PiRGBArray
#from picamera import PiCamera
import tensorflow as tf
import argparse
import sys
import pyttsx3

from gpiozero import Button


from signal import pause
from gtts import gTTS

import cv2
import pytesseract

# Path to Tesseract executable (change this if necessary)


pytesseract.pytesseract.tesseract_cmd = '/usr/bin/tesseract'
button_pin = 17

# Create a Button object representing the button connected to the GPIO pin
button = Button(button_pin)

# Define a function to print the state of the button


def print_button_state():
if button.is_pressed:
print("Button is pressed")
else:
print("Button is not pressed")

print_button_state()
# Set up camera constants
#IM_WIDTH = 1280
#IM_HEIGHT = 720

Dept. of Electrical and Electronics Engineering College of Engineering Attingal


13
Smart Glass For Blind Project Report 2024

IM_WIDTH = 320 # Use smaller resolution for


IM_HEIGHT = 240 # slightly faster framerate

# Select camera type (if user enters --usbcam when calling this script,
# a USB webcam will be used)
camera_type = 'picamera'
parser = argparse.ArgumentParser()
parser.add_argument('--usbcam', help='Use a USB webcam instead of picamera',
action='store_true')
args = parser.parse_args()
if args.usbcam:
camera_type = 'usb'

# This is needed since the working directory is the object_detection folder.


sys.path.append('..')

# Import utilites
from utils import label_map_util
from utils import visualization_utils as vis_util

# Name of the directory containing the object detection module we're using
MODEL_NAME = 'ssdlite_mobilenet_v2_coco_2018_05_09'

# Grab path to current working directory


CWD_PATH = os.getcwd()

# Path to frozen detection graph .pb file, which contains the model that is used
# for object detection.
PATH_TO_CKPT = os.path.join(CWD_PATH,MODEL_NAME,'frozen_inference_graph.pb')

# Path to label map file


PATH_TO_LABELS = os.path.join(CWD_PATH,'data','mscoco_label_map.pbtxt')

# Number of classes the object detector can identify


NUM_CLASSES = 90

## Load the label map.


# Label maps map indices to category names, so that when the convolution
# network predicts `5`, we know that this corresponds to `airplane`.
# Here we use internal utility functions, but anything that returns a
# dictionary mapping integers to appropriate string labels would be fine
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map,
max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

Dept. of Electrical and Electronics Engineering 14 College of Engineering Attingal


Smart Glass For Blind Project Report 2024

# Load the Tensorflow model into memory.


detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')

sess = tf.compat.v1.Session(graph=detection_graph)

# Define input and output tensors (i.e. data) for the object detection classifier

# Input tensor is the image


image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')

# Output tensors are the detection boxes, scores, and classes


# Each box represents a part of the image where a particular object was detected
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')

# Each score represents level of confidence for each of the objects.


# The score is shown on the result image, together with the class label.
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')

# Number of objects detected


num_detections = detection_graph.get_tensor_by_name('num_detections:0')

# Initialize frame rate calculation


frame_rate_calc = 1
freq = cv2.getTickFrequency()
font = cv2.FONT_HERSHEY_SIMPLEX

# Initialize camera and perform object detection.


# The camera has to be set up and used differently depending on if it's a
# Picamera or USB webcam.

# I know this is ugly, but I basically copy+pasted the code for the object
# detection loop twice, and made one work for Picamera and the other work
# for USB.

def ocr_image(image):
return pytesseract.image_to_string(image)

Dept. of Electrical and Electronics Engineering College of Engineering Attingal


15
Smart Glass For Blind Project Report 2024

# Function to convert text to speech


def text_to_speech(text):
engine = pyttsx3.init()
engine.say(text)
engine.runAndWait()

### Picamera ###


if camera_type == 'usb':
# Initialize Picamera and grab reference to the raw capture
camera = PiCamera()
camera.resolution = (IM_WIDTH,IM_HEIGHT)
camera.framerate = 10
rawCapture = PiRGBArray(camera, size=(IM_WIDTH,IM_HEIGHT))
rawCapture.truncate(0)

for frame1 in camera.capture_continuous(rawCapture, format="bgr",use_video_port=True):

t1 = cv2.getTickCount()

# Acquire frame and expand frame dimensions to have shape: [1, None, None, 3]
# i.e. a single-column array, where each item in the column has the pixel RGB value
frame = np.copy(frame1.array)
frame.setflags(write=1)
frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
frame_expanded = np.expand_dims(frame_rgb, axis=0)

# Perform the actual detection by running the model with the image as input

(boxes, scores, classes, num) = sess.run(


[detection_boxes, detection_scores, detection_classes, num_detections],
feed_dict={image_tensor: frame_expanded})

# Draw the results of the detection (aka 'visulaize the results')


vis_util.visualize_boxes_and_labels_on_image_array(
frame,
np.squeeze(boxes),
np.squeeze(classes).astype(np.int32),
np.squeeze(scores),
category_index,
use_normalized_coordinates=True,

Dept. of Electrical and Electronics Engineering 16 College of Engineering Attingal


Smart Glass For Blind Project Report 2024

line_thickness=8,
min_score_thresh=0.40)

cv2.putText(frame,"FPS:
{0:.2f}".format(frame_rate_calc),(30,50),font,1,(255,255,0),2,cv2.LINE_AA)
# All the results have been drawn on the frame, so it's time to display it.

cv2.imshow('Object detector', frame)

t2 = cv2.getTickCount()
time1 = (t2-t1)/freq
frame_rate_calc = 1/time1

# Press 'q' to quit


if cv2.waitKey(1) == ord('q'):
break

rawCapture.truncate(0)

camera.close()

### USB webcam ###


elifcamera_type == 'picamera':
# Initialize USB webcam feed
camera = cv2.VideoCapture(0)
ret = camera.set(3,IM_WIDTH)
ret = camera.set(4,IM_HEIGHT)

while(True):
if button.is_pressed:

ret, frame = camera.read()

# Convert the frame to grayscale


gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Perform OCR on the grayscale image


text = ocr_image(gray)

# Read each line of text and voice it out


for line in text.split('\n'):
if line.strip() != '':

Dept. of Electrical and Electronics Engineering 17 College of Engineering Attingal


Smart Glass For Blind Project Report 2024

# (continued from the reading-mode branch on the previous page)
            print("Reading:", line)
            text_to_speech(line)

        cv2.imshow('Original Frame', frame)

        # Wait for the 'q' key to exit or read another page
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    else:
        t1 = cv2.getTickCount()

        # Acquire a frame and expand its dimensions to shape [1, None, None, 3],
        # i.e. a single-image batch where each item holds the pixel RGB values
        ret, frame = camera.read()
        frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frame_expanded = np.expand_dims(frame_rgb, axis=0)

        # Perform the actual detection by running the model with the image as input
        (boxes, scores, classes, num) = sess.run(
            [detection_boxes, detection_scores, detection_classes, num_detections],
            feed_dict={image_tensor: frame_expanded})

        # Draw the results of the detection (i.e. visualize the results)
        vis_util.visualize_boxes_and_labels_on_image_array(
            frame,
            np.squeeze(boxes),
            np.squeeze(classes).astype(np.int32),
            np.squeeze(scores),
            category_index,
            use_normalized_coordinates=True,
            line_thickness=8,
            min_score_thresh=0.85)

        cv2.putText(frame, "FPS: {0:.2f}".format(frame_rate_calc),
                    (30, 50), font, 1, (255, 255, 0), 2, cv2.LINE_AA)
        name = [category_index.get(value)
                for index, value in enumerate(classes[0])
                if scores[0, index] > 0.5]

        # All the results have been drawn on the frame, so it's time to display it.
        cv2.imshow('Object detector', frame)

        engine = pyttsx3.init()

        names = [item['name'] for item in name if item is not None]
        new_objects = [obj for obj in names if obj not in prev_spoken_objects]
        print(names)

        # Speak only the names of newly detected objects
        for obj in new_objects:
            engine.say(obj)
            engine.runAndWait()

        # Update the list of previously spoken objects
        prev_spoken_objects = names

        t2 = cv2.getTickCount()
        time1 = (t2 - t1) / freq
        frame_rate_calc = 1 / time1

        # Press 'q' to quit
        if cv2.waitKey(1) == ord('q'):
            break

camera.release()
cv2.destroyAllWindows()
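The announce-only-new-objects step in the loop above is easy to get wrong: the glasses should speak labels that have just appeared, not repeat every visible label on every frame. That filtering can be isolated as a small pure-Python helper; this is an illustrative sketch, not the prototype's exact code:

```python
def objects_to_announce(detected, previously_spoken):
    """Return the labels that appeared since the last frame, in order."""
    return [obj for obj in detected if obj not in previously_spoken]

# Two consecutive frames: only the newly visible bottle is announced,
# so the user is not flooded with repeated "person, chair" audio.
previous = ["person", "chair"]
current = ["person", "chair", "bottle"]
print(objects_to_announce(current, previous))  # ['bottle']
```

After each frame, the detection loop stores the current labels as `previously_spoken`, so an object is announced once when it enters the camera's view.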

Dept. of Electrical and Electronics Engineering 19 College of Engineering Attingal



CHAPTER 11
FINAL PROTOTYPE

The idea of the project is to make glasses for blind people that help them in their
education. The first task is to detect the object or person in front of the user so that the
user can move safely. The second task is to scan any text image at the press of a button
and convert it into audio: a picture is taken when the user presses the button, and the
user listens to the resulting speech through a headphone connected to the glasses. For
translating the text, the glasses use the Google Translate API.

Fig 11.1: Final prototype
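The button-to-audio flow described above can be sketched in a few lines. This is a minimal illustration, assuming the pytesseract and pyttsx3 packages and a webcam at index 0; the function names and the line-filtering helper are assumptions, not the prototype's exact code:

```python
def speakable_lines(ocr_text):
    """Keep only non-empty OCR lines so blank lines are not 'read' aloud."""
    return [line.strip() for line in ocr_text.splitlines() if line.strip()]

def read_page_aloud(camera_index=0):
    """Capture one frame on a button press, OCR it, and speak the text."""
    # Imports are kept local so the sketch can be inspected on a machine
    # without the Raspberry Pi camera/OCR stack installed.
    import cv2
    import pytesseract
    import pyttsx3

    camera = cv2.VideoCapture(camera_index)
    ret, frame = camera.read()
    camera.release()
    if not ret:
        return None

    # Tesseract generally works better on a grayscale image.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(gray)

    engine = pyttsx3.init()
    for line in speakable_lines(text):
        engine.say(line)
    engine.runAndWait()
    return text
```

In the prototype, `read_page_aloud` would be wired to the pushbutton's GPIO callback, so each press captures and reads one page.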


CHAPTER 12
COST ESTIMATE

SL.NO   ITEM                          QUANTITY   PRICE (Rs.)
1       Glass                         1                  170
2       Controller (Raspberry Pi 5)   1                 9500
3       Webcam                        1                 3300
4       Pushbutton                    1                  150
5       Headphone                     1                  300
6       Power bank                    1                 4500
7       USB cable                     1                  150
        TOTAL                                          18070

Table 12.1: Cost estimation


CHAPTER 13
FEATURES

“Smart Glasses” are a very helpful device for people with vision difficulties; users can
carry the product everywhere, at all times, to improve their lives. “Smart Glasses” are
applicable in different fields such as education, entertainment, medicine, and personal use.
Many people are unable to continue their studies because of vision difficulties, so
education is one of the most important of these fields. The glasses have built-in Wi-Fi and
Bluetooth connectivity and a camera for taking photographs and videos. The smart
eyewear uses motion and voice recognition to process commands from the wearer, and a
touchpad is also available on the glasses' rim. Glasses designed for the blind are more than
just a vision aid; they are a gateway to enhanced independence and connectivity. As
technology evolves, so do the possibilities for those with visual impairments. Glasses for the
blind have transformed significantly, integrating advanced features such as object
recognition and navigation assistance, which go beyond traditional visual aids. This chapter
explores how these smart devices provide audio descriptions of the visual world, helping
users navigate spaces and recognize faces with ease. With glasses for the blind, every
challenge in daily navigation turns into an opportunity for greater freedom and
self-reliance. The glasses' cameras capture real-time images of the wearer's surroundings,
enabling blind individuals to perceive their environment through audio or tactile feedback.
Advanced algorithms and computer vision technology enable the glasses to recognize and
identify objects, people, and text; by conveying this information audibly or through other
sensory cues, the glasses provide valuable context to the wearer.


CHAPTER 14
FUTURE DEVELOPMENTS

“Smart Glasses” can be improved in the future for blind people and people with vision
difficulties by adding new capabilities: direction and warning messages to prevent likely
accidents, messages that tell the user the battery level, video detection to support a fuller,
healthier life, and a mobile application to control the “Smart Glasses”. There are also various
types of glasses available for individuals with visual impairment, including those for
correcting refractive errors and maximizing remaining vision; such glasses can address
specific visual needs, enhance visual acuity, and incorporate specialized lenses or
technologies to optimize the wearer's remaining vision. One of
the most exciting capabilities of smart glasses is their potential for augmented reality (AR)
experiences. By superimposing digital elements onto the real-world environment, these glasses
allow users to see virtual objects as if they were part of the physical world. This opens up endless
possibilities for gaming, education, training, and other interactive experiences. Additionally, many
smart glasses come equipped with voice control capabilities. This means that users can interact
with their devices hands-free by simply speaking commands or questions out loud. This feature
also enables seamless integration with virtual assistants like Siri or Google Assistant. Lastly,
smart glasses can also have useful features for professionals and workers. For example, some
models provide real-time translation capabilities, making them useful for international business
meetings or travel. Other glasses may offer remote assistance features, allowing experts to guide
technicians or workers remotely through the glasses’ display and voice communication functions.


CHAPTER 15
APPLICATIONS

The system works as a navigation device for blind people. Under certain circumstances,
such as foggy mornings with low visibility, it can be used for navigation by everyone, not
only the visually impaired. It can also be used by patients suffering from various eye
ailments such as cataract, exophthalmia, and post-operative conditions. The system can be
developed into a more sophisticated version of itself by using high-intensity ultrasonic
waves, serving as a navigation system for geological explorations. Smart glass can assist
with tasks such as reading menus and recognizing faces. Its applications extend to
education, employment, and leisure activities, promoting inclusivity and accessibility.
Though historically used for vision improvement and correction, eyewear has also evolved
into eye protection, fashion and aesthetic purposes, and, starting in the late 20th century,
computers and virtual reality. The primary intention of wearing eyewear can vary based on
the need or desire of the wearer.
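The ultrasonic extension mentioned above rests on a simple time-of-flight calculation: a sensor such as the HC-SR04 (reference [21]) reports the width of an echo pulse, and the distance to the obstacle is half the echo time multiplied by the speed of sound. A hedged sketch of the conversion, with the constant and helper name as assumptions:

```python
SPEED_OF_SOUND_CM_S = 34300.0  # approx. speed of sound in air at 20 degrees C

def echo_time_to_cm(echo_seconds):
    """Convert an HC-SR04 echo pulse width to a distance in centimetres.

    The pulse covers the round trip to the obstacle and back, so the
    one-way distance is half of speed * time.
    """
    return (echo_seconds * SPEED_OF_SOUND_CM_S) / 2.0

# A 1 ms echo corresponds to roughly 17 cm -- close enough to warn the user.
print(round(echo_time_to_cm(0.001), 2))  # 17.15
```

On the Raspberry Pi, the echo time would be measured by timing the sensor's ECHO pin going high and low via the GPIO library, and a threshold on the resulting distance would trigger the audio warning.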


CHAPTER 16
MERITS AND LIMITATIONS

16.1 MERITS

▪ Help blind people.

▪ Warn the users if there is something in front of them.

▪ Increase the education level for blind people.

▪ Help users search for more information about any word in an image that the camera
scans.

16.2 LIMITATIONS

▪ Focusing on software requires the use of another device, such as a laptop or mobile
phone, to perform the processing.

▪ Focusing on hardware makes the project budget very expensive.


CHAPTER 17
CONCLUSION

In conclusion, the advent of smart glass for the visually impaired represents a remarkable
stride in utilizing technology for societal benefit. Smart glass for the blind stands as a
testament to the transformative power of technology in addressing real-world challenges.
This cutting-edge innovation employs advanced sensory technologies to provide the visually
impaired with real-time information about their surroundings. By leveraging features such as
object recognition and navigation assistance, smart glass empowers individuals with visual
disabilities, offering them increased autonomy and a heightened sense of spatial awareness.
The distinct and rapid evolution of this technology reflects the relentless pursuit of solutions
that enhance the quality of life for a diverse range of individuals. From detecting obstacles to
recognizing landmarks, smart glass serves as a valuable tool in navigating the world with
greater ease.
In embracing and implementing such advancements responsibly, we not only harness the
potential for personal success but also contribute to the collective well-being of society. The
success of smart glass for the blind extends beyond individual benefits, influencing societal
perceptions of inclusivity and accessibility. By integrating these technological solutions into
daily life, we take strides toward creating a more equitable and supportive environment for
everyone. Ultimately, the incorporation of smart glass for the visually impaired illustrates the
profound impact that purposeful and well-utilized technology can have on individual lives,
societal progress, and the overall advancement of a nation. It exemplifies the ethos that, even
in the face of obstacles, technology offers us the tools not just to cope but to triumph over
challenges and create a more inclusive and prosperous future for all.


CHAPTER 18
REFERENCES

[1] Wang, T., Wu, D. J., Coates, A., & Ng, A. Y. (2012, November). End-to-end text
recognition with convolutional neural networks. In Pattern Recognition (ICPR), 2012
21st International Conference on (pp. 3304-3308). IEEE.

[2] Koo, H. I., & Kim, D. H. (2013). Scene text detection via connected component clustering
and non-text filtering. IEEE Transactions on Image Processing, 22(6), 2296-2305.

[3] Bissacco, A., Cummins, M., Netzer, Y., & Neven, H. (2013). PhotoOCR: Reading text in
uncontrolled conditions. In Proceedings of the IEEE International Conference on Computer
Vision (pp. 785-792).

[4] Yin, X. C., Yin, X., Huang, K., & Hao, H. W. (2014). Robust text detection in natural scene
images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(5), 970-983.

[5] Neumann, L., & Matas, J. (2012, June). Real-time scene text localization and recognition.
In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on
(pp. 3538-3545). IEEE.

[6] Neumann, L., & Matas, J. (2013). Scene text localization and recognition with oriented
stroke detection. In Proceedings of the IEEE International Conference on Computer Vision
(pp. 97-104).

[7] Zhou, X., Yao, C., Wen, H., Wang, Y., Zhou, S., He, W., & Liang, J. (2017). EAST: An
efficient and accurate scene text detector. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition (pp. 5551-5560).

[8] Raspberry Pi Foundation. (n.d.). Teach, Learn, and Make with Raspberry Pi. Retrieved
from https://www.raspberrypi.org/

[9] Mitchell, B. (2019, March 15). 802.11 WiFi Standards Explained. Retrieved from
https://www.lifewire.com/wireless-standards-802-11a-802-11b-g-n-and-802-11ac-816553

[10] Gudino, M. (2018, March 6). Raspberry Pi 3 vs. Raspberry Pi Zero W. Retrieved from
https://www.arrow.com/en/research-and-events/articles/raspberry-pi-3-vs-raspberry-pi-zerow

[11] Savvides, L. (2017, February 15). This 'Star Trek'-like headset helps the legally blind
see again. Retrieved from https://www.cnet.com/news/esight-video-glasses-restores-
sightlegally-blind-star-trek-visor-headset/


[12] AbilityNet. (2019, April 30). News and blogs by author. Retrieved from
https://www.abilitynet.org.uk/news-blogs/three-cool-smart-glasses-help-people-who-areblind-
or-have-sight; Ray, K. (2017, July 24). Why I'm Excited about the Future of Aira.
Retrieved from https://medium.com/aira-io/why-im-excited-about-the-future-of-aira-
79be882fc34e

[13] Google. (2019, April 15). Retrieved from https://en.wikipedia.org/wiki/Google_

[14] Eyesynth. (2019). Inicio - Eyesynth. [online] Available at: https://eyesynth.com/
[Accessed 9 Jan. 2019].

[15] What Is an Ultrasonic Sensor? (n.d.). Retrieved from
https://www.keyence.com/ss/products/sensor/sensorbasics/ultrasonic/info/

[16] Martin, T. (2016, May 20). How to setup Bluetooth on a Raspberry Pi 3. Retrieved
from https://www.cnet.com/how-to/how-to-setup-bluetooth-on-a-raspberry-pi-3/

[17] Rosebrock, A. (2018). Install OpenCV 4 on your Raspberry Pi - PyImageSearch.
[online] PyImageSearch. Available at: https://www.pyimagesearch.com/2018/09/26/install-
opencv-4-on-your-raspberry-pi/ [Accessed 2 Feb. 2019].

[18] OpenCV. (2019, March 12). Retrieved from https://en.wikipedia.org/wiki/OpenCV

[19] Tabbara, K. F., & Ross-Degnan, D. (1986, June 27). Blindness in Saudi Arabia.
Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/3712697

[20] Raspberry Pi RFID Reader - How to Integrate RFID with Raspberry Pi. Retrieved from
https://www.deviceplus.com/connect/integrate-rfid-module-raspberry-pi/

[21] HC-SR04 Ultrasonic Range Sensor on the Raspberry Pi. ModMyPi LTD. Retrieved from
https://www.modmypi.com/blog/hc-sr04-ultrasonic-range-sensor-onthe-raspberry-pi

[22] OpenCV OCR and Text Recognition with Tesseract. Retrieved from
https://www.pyimagesearch.com/2018/09/17/opencv-ocr-and-text-recognition-withtesseract/

[23] Chaudhuri, A., Mandaviya, K., Badelia, P., & Ghosh, S. K. (2017). Optical Character
Recognition Systems for Different Languages with Soft Computing. Springer.

[24] What Is a Radio Frequency Identification Reader (RFID Reader)? - Definition from
Techopedia. Retrieved from https://www.techopedia.com/definition/26992/radio-frequency-
identification-readerrfid-reader
