
STUDY MATERIAL FOR MDU

B.C.A Vth SEM.


COMPUTER GRAPHICS

UNIT    CONTENT

I       APPLICATIONS OF COMPUTER GRAPHICS
II      SCAN CONVERSION
III     TWO DIMENSIONAL TRANSFORMATIONS
IV      CLIPPING
V       MULTIMEDIA

Prepared by Anurag Patel



UNIT - I
APPLICATIONS OF COMPUTER GRAPHICS

Applications of Computer Graphics:


Computer graphics deals with the creation, manipulation and storage of different types of
images and objects.

Some of the applications of computer graphics are:


Computer Art:
Using computer graphics we can create fine and commercial art, which includes
animation packages and paint packages. These packages provide facilities for designing object
shapes and specifying object motion. Cartoon drawing, paintings and logo design can also be done.

Computer Aided Drawing:


Designing of buildings, automobiles and aircraft is done with the help of computer aided
drawing. This helps in providing minute details in the drawing and producing more accurate and
sharp drawings with better specifications.

Presentation Graphics:
Presentation graphics are used for preparing reports and for summarizing financial, statistical,
mathematical, scientific and economic data for research and managerial reports. Moreover, bar
graphs, pie charts and time charts can be created using the tools present in computer graphics.

Entertainment:
Computer graphics finds a major part of its utility in the movie and game
industries. It is used for creating motion pictures, music videos, television shows and cartoon animation
films. In the game industry, where focus and interactivity are the key players, computer graphics
helps in providing such features in an efficient way.

Education:
Computer generated models are extremely useful for teaching a huge number of
concepts and fundamentals in an easy-to-understand manner. Using computer
graphics, many educational models can be created through which more interest can be
generated among the students regarding the subject.

Training:
Specialized systems for training, such as simulators, can be used for training candidates in
a way that can be grasped in a short span of time with better understanding. Creation of
training modules using computer graphics is simple and very useful.

Graphic operations:
 A general purpose graphics package provides the user with a variety of functions for creating
and manipulating pictures.
 The basic building blocks for pictures are referred to as output primitives. They include
characters, strings, and geometric entities such as points, straight lines, curved lines, filled
areas and shapes defined with arrays of color points.
 Input functions are used to control and process the various input devices such as a mouse,
tablet, etc.


 Control operations are used for controlling and housekeeping tasks such as clearing the
display screen, etc.
 All such inbuilt functions which we can use for our purpose are known as
graphics functions.

Software Standard
 The primary goal of standardized graphics software is portability, so that it can be used on any
hardware system and rewriting of software programs for different systems is avoided.
 Some of these standards are discussed below.

Graphical Kernel System (GKS)


 This system was adopted as the first graphics software standard by the International
Standards Organization (ISO) and various national standards organizations including ANSI.
 GKS was originally designed as a two dimensional graphics package, and an extension was
later developed for three dimensions.

PHIGS (Programmer's Hierarchical Interactive Graphic Standard)


 PHIGS is an extension of GKS. Increased capabilities for object modeling, color specification,
surface rendering, and picture manipulation are provided in PHIGS.
 An extension of PHIGS, called "PHIGS+", was developed to provide three dimensional surface
shading capabilities not available in PHIGS.

Output primitives
Points and lines
 Point plotting is done by converting a single coordinate position furnished by an
application program into appropriate operations for the output device in use.
 Line drawing is done by calculating intermediate positions along the line path between
two specified endpoint positions.
 The output device is then directed to fill in those positions between the endpoints with
some color.
 For some devices such as a pen plotter or random scan display, a straight line can be
drawn smoothly from one endpoint to the other.
 Digital devices display a straight line segment by plotting discrete points between the
two endpoints.
 Discrete coordinate positions along the line path are calculated from the equation of
the line.
 For a raster video display, the line intensity is loaded into the frame buffer at the
corresponding pixel positions.
 Reading from the frame buffer, the video controller then plots the screen pixels.
 Screen locations are referenced with integer values, so plotted positions may only
approximate actual line positions between two specified endpoints.
 For example, a line position of (12.36, 23.87) would be converted to pixel position (12, 24).
 This rounding of coordinate values to integers causes lines to be displayed
with a stairstep appearance ("the jaggies"), as represented in fig 2.1.


Fig. 2.1: − Stair step effect produced when a line is generated as a series of pixel positions.
 The stair step shape is noticeable in low resolution systems, and we can improve their
appearance somewhat by displaying them on a high resolution system.
 More effective techniques for smoothing raster lines are based on adjusting pixel
intensities along the line paths.
 For the raster graphics device-level algorithms discussed here, object positions are specified
directly in integer device coordinates.
 Pixel positions are referenced according to scan-line number and column number, as
illustrated in the accompanying figure (pixel positions referenced by scan-line number and
column number).
 To load a specified color into the frame buffer at a particular position, we will assume
we have available a low-level procedure of the form setpixel(x, y).

 Similarly, to retrieve the current frame buffer intensity we assume we have the procedure
getpixel(x, y).
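
As a rough illustration of these two low-level procedures, the following C sketch stores intensities in an in-memory frame-buffer array. The array size, the current_color variable and the array-based storage are assumptions for illustration only; a real system would write to video memory.

#define XMAX 640
#define YMAX 480

static int framebuffer[YMAX][XMAX];      /* one intensity/color value per pixel */
static int current_color = 1;            /* color used by setpixel */

void setpixel(int x, int y)
{
    if (x >= 0 && x < XMAX && y >= 0 && y < YMAX)
        framebuffer[y][x] = current_color;   /* row = scan line, column = x */
}

int getpixel(int x, int y)
{
    if (x >= 0 && x < XMAX && y >= 0 && y < YMAX)
        return framebuffer[y][x];
    return 0;                                /* outside the screen: treat as off */
}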


Graphics Packages:
There are mainly two types of graphics packages:
1. General programming packages
2. Special-purpose application packages

General programming package


 A general programming package provides an extensive set of graphics functions that can
be used in a high level programming language such as C or FORTRAN.
 It includes basic drawing elements such as lines, curves and polygons, along with color
and transformation operations, etc.
 Example: GL (Graphics Library).

Special-purpose application package


Special-purpose application packages are customized for a particular application. They
implement the required facilities and provide an interface, so that the user need not worry about how
they work internally (the programming). The user can simply use the package by interfacing with the application.
Example: CAD, medical and business systems.

Input Devices:


Input devices are the hardware used to transfer input to the
computer. The data can be in the form of text, graphics and sound. Output
devices display data from the memory of the computer. Output can be text, numeric data, lines,
polygons, and other objects.

These Devices include:


1. Keyboard
2. Mouse
3. Trackball
4. Spaceball
5. Joystick
6. Light Pen
7. Digitizer
8. Touch Panels
9. Voice Recognition
10. Image Scanner

Keyboard:
The most commonly used input device is a keyboard. Data is entered by pressing
a set of keys. All keys are labelled. A standard keyboard has 101 keys arranged in the QWERTY
layout. The keyboard has alphabetic as well as numeric keys. Some special keys are also
available.

1. Numeric Keys: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
2. Alphabetic keys: a to z (lower case), A to Z (upper case)
3. Special Control keys: Ctrl, Shift, Alt
4. Special Symbol Keys: ; , " ? @ ~ ? :
5. Cursor Control Keys: ↑ → ← ↓
6. Function Keys: F1 F2 F3 ...F9.
7. Numeric Keypad: It is on the right-hand side of the keyboard and is used for fast entry of
numeric data.

Function of Keyboard:
1. Alphanumeric keyboards are used in CAD (Computer Aided Drafting).
2. Keyboards are available with special features like screen coordinate entry, menu
selection or graphics functions, etc.
3. Special purpose keyboards are available having buttons, dials, and switches. Dials are
used to enter scalar values; dials can also enter real numbers. Buttons and switches are
used to enter predefined function values.

Mouse:
A mouse is a pointing device used to position the pointer on the screen. It is a small
palm-sized box. There are two or three buttons (depression switches) on the top. Movement of the
mouse along the x-axis produces horizontal movement of the cursor, and movement
along the y-axis produces vertical movement of the cursor on the screen. The mouse cannot
be used to enter text; therefore, it is used in conjunction with a keyboard.

Trackball
It is a pointing device, similar to a mouse. It is mainly used in notebook or laptop
computers instead of a mouse. It is a ball which is half inserted in a socket, and by moving fingers on
the ball, the pointer can be moved.

Space ball:
It is similar to a trackball, but it can move in six directions, whereas a trackball can move in
two directions only. The movement is recorded by strain gauges, which respond to the
pressure applied when the ball is pushed and pulled in various directions. The ball has a diameter of around 7.5
cm and is mounted in the base using rollers. One-third of the ball is inside the box and the rest
is outside.

Applications:
1. It is used for three-dimensional positioning of the object.
2. It is used to select various functions in the field of virtual reality.
3. It is applicable in CAD applications.
4. Animation is also done using spaceball.
5. It is used in the area of simulation and modeling.

Joystick:
A joystick is also a pointing device, used to change the cursor position on a monitor
screen. A joystick is a stick having a spherical ball at both its lower and upper ends, as shown in
the figure.

The lower spherical ball moves in a socket. The joystick can be moved in all four
directions. The function of a joystick is similar to that of a mouse. It is mainly used in
Computer Aided Design (CAD) and for playing computer games.

Light Pen:
A light pen (similar to a pen) is a pointing device which is used to select a displayed
menu item or draw pictures on the monitor screen. It consists of a photocell and an optical
system placed in a small tube. When its tip is moved over the monitor screen and the pen button is
pressed, its photocell sensing element detects the screen location and sends the corresponding
signal to the CPU.


Uses:
1. Light pens can be used to input coordinate positions by providing the necessary
arrangements.
2. Depending on the background color or intensity, a light pen can be used as a locator.
3. It is used as a standard pick device with many graphics systems.
4. It can be used as a stroke input device.
5. It can be used as a valuator.

Digitizers:

The digitizer is an operator input device which contains a large, smooth board (similar in
appearance to a mechanical drawing board) and an electronic tracking device, which
can be moved over the surface to follow existing lines. The electronic tracking device contains
a switch for the user to record the desired x and y coordinate positions. The coordinates can be
entered into the computer memory or stored on an off-line storage medium such as magnetic
tape.
Touch Panels:
 A touch panel is a type of display screen that has a touch-sensitive transparent panel
covering the screen. A touch screen registers input when a finger or other object comes
in contact with the screen.
 When the wave signals are interrupted by some contact with the screen, that location is
recorded.
 Touch screens have long been used in military applications.

Voice Systems (Voice Recognition):


 Voice recognition is one of the newest and most complex input techniques used to interact
with the computer. The user inputs data by speaking into a microphone. The simplest
form of voice recognition is a one-word command spoken by one person. Each
command is isolated with pauses between the words.
 Voice recognition is used in some graphics workstations as an input device to accept voice
commands. The voice-system input can be used to initiate graphics operations or to
enter data. These systems operate by matching an input against a predefined dictionary
of words and phrases.

Image Scanner
 It is an input device. The data or text is written on paper, and the paper is fed to the scanner.
The information written on the paper is converted into electronic format and stored
in the computer. The input documents can contain text, handwritten material, pictures, etc.
 By storing the document in a computer, the document becomes safe for a longer period of
time. The document will be permanently stored for the future. We can change the
document when we need to, and the document can be printed when needed.
 Scanning can be done for black and white or colored pictures. On the stored picture, 2D or 3D
rotations, scaling and other operations can be applied.

Output Devices in Computer Graphics:


Printers:
A printer is a peripheral device which is used to produce graphics or text on paper.
Its quality is measured by its resolution, and the resolution of any printer is measured in dots per
inch (dpi).
The printer usually works with the computer and is connected via a cable. At present,
many digital devices support printing features, so we can use Bluetooth, Wi-Fi, and cloud
technology to print.

Some types of printers are:


 Impact Printers
 Non-impact Printers

Impact Printers
In impact printers, physical contact is established between the print head,
ribbon, ink cartridge, and paper.
The printer hits the print head against an ink-filled ribbon, and the letter prints on the paper.
Impact printers work like a typewriter.
These printers are of three types:
 Daisy Wheel Printers
 Drum Printers
 Dot Matrix Printers


Types of Printers:

Daisy Wheel Printers:


With these, we can print only one character at a time. The head of this printer looks like a
daisy flower, with printing arms that appear like the petals of a flower; that is why it is called a
"daisy wheel printer." It can print approximately 90 characters per second.

Daisy Wheel Printer

Daisy wheel printers are used to print professional quality documents. They are also
called "letter quality printers."

Advantages:
1. More reliable
2. Better printing Quality
Disadvantages:
1. Slower than dot matrix printers
2. More Expensive
3. Noisy in operation


Drum Printers:
It has a shape like a drum, so it is called a "drum printer." This type of printer contains
many characters that are embossed on the drum. The surface of the drum is divided into a
number of tracks, one per print position; for a 132-character line the drum will have 132 tracks. The
number of tracks is decided according to the width of the paper. It can print approximately 150-2500
lines per minute.

Drum Printer
Advantages:
1. High Speed
2. Low Cost

Disadvantages:
1. Poor Printing Quality
2. Noisy in Operation

Dot Matrix Printer:


It is also known as the "Impact Matrix Printer." A dot matrix printer can print only one
character at a time. The dot matrix printer uses print heads consisting of 9 to 24 pins. These pins
are used to produce a pattern of dots on the paper to create each character. A dot-matrix
printer can print any shape of character, special characters, graphs, and charts.

Dot Matrix Printer


Advantages:
1. Low Printing Cost
2. Large print size
3. Long Life

Disadvantages:
1. Slow speed
2. Low Resolution

Non-impact Printers
In non-impact printers, there is no physical contact between the print head and the paper. A
non-impact printer prints a complete page at a time. Non-impact printers spray ink on the
paper through nozzles to form the letters and patterns. Printers that print the letters
without a ribbon striking the paper are called non-impact printers. Non-impact printers are also
known as "page printers."

These printers have two types:


1. Inkjet Printer
2. Laser Printer

1. Inkjet Printer:
It is also called a "Deskjet printer." It is a non-impact printer in which the letters and
graphics are printed by spraying drops of ink on the paper from a nozzle head.

A color inkjet printer has four ink nozzles: cyan, magenta, yellow, and black, so it is also
called a CMYK printer. We can produce any color by using these four colors. The prints and
graphics of this printer are very clear. These printers are generally used for home purposes.

Inkjet Printer
Advantages:
1. High-Quality Printout
2. Low noise
3. High Resolution

Disadvantages:
1. Less Durability of the print head
2. Not suitable for high volume printing
3. Cartridge replacement is expensive


2. Laser Printer:
It is also called a "page printer" because a laser printer processes and stores the whole page
before printing it. The laser printer is used to produce high-quality images and text. Mostly it is
used with personal computers. Laser printers are preferred for printing a large amount
of content on paper.

Laser printer
Advantages:
1. High Resolution
2. High printing Speed
3. Low printing Cost

Disadvantages:
1. Costlier than an inkjet printer
2. Larger and heavier than an inkjet printer

Plotters:
A plotter is a special type of output device. It is used to print large graphs and large designs
on large sheets of paper, for example construction maps, engineering drawings, architectural plans,
and business charts. It was introduced by Remington Rand in 1953. It is similar to a printer,
but it is used to print vector graphics.
Types of Plotter:


1. Flatbed Plotter:
In a flatbed plotter, the paper is kept in a stationary position on a table or a tray. A
flatbed plotter has more than one pen and a holder. The pen moves over the paper up-down
and right-left by means of a motor. Every pen has a different color of ink, which is used to draw
multicolor designs. We can quickly draw the following designs by using a flatbed plotter.

For example: cars, ships, airplanes, dress designs, road and highway blueprints, etc.

A Flatbed Plotter
Advantages of Flatbed Plotter
1. Larger size paper can be used
2. Drawing quality is similar to that of an expert

Disadvantages of Flatbed Plotter


1. Slower than printers
2. More Expensive than printers
3. Do not produce high-Quality text printouts

Drum Plotter:
It is also called a "roller plotter." There is a drum in this plotter on which we can load the paper.
When the plotter works, the drum moves back and forth, and the image is drawn. A drum
plotter has more than one pen and pen holder. The pens easily move right to left and left to right. The
movement of the pens and drum is controlled by the graph plotting program. It is used in industry to produce
large drawings (up to A0).

A Drum Plotter


Advantages of Drum Plotter:


1. Draw Larger Size image
2. We can print an image of unlimited length

Disadvantages of Drum Plotter:


1. Very costly

Visual Display Devices:


The primary output device in a graphics system is a video monitor. Although many
technologies exist, the operation of most video monitors is based on the standard Cathode
Ray Tube (CRT) design.

Cathode Ray Tubes (CRT):


A cathode ray tube (CRT) is a specialized vacuum tube in which images are produced
when an electron beam strikes a phosphorescent surface. It modulates, accelerates, and
deflects electron beam(s) onto the screen to create the images. Most desktop computer
displays make use of a CRT for image display.

Construction of a CRT:
1. The primary components are the heated metal cathode and a control grid.
2. Heat is supplied to the cathode by passing current through the filament.
This way the electrons get heated up and start getting ejected out of the cathode
filament.
3. This stream of negatively charged electrons is accelerated towards the phosphor screen
by supplying a high positive voltage.
4. This acceleration is generally produced by means of an accelerating anode.

5. The next component is the focusing system, which is used to force the electron beam to
converge to a small spot on the screen.
6. If there were no focusing system, the electrons would be scattered because of their
own repulsion and hence we would not get a sharp image of the object.
7. This focusing can be done either by means of electrostatic fields or magnetic fields.


Types of Deflection:
1. Electrostatic Deflection:
The electron beam (cathode rays) passes through a highly positively charged
metal cylinder that forms an electrostatic lens. This electrostatic lens focuses the cathode rays
onto the center of the screen in the same way as an optical lens focuses a beam of light. Two
pairs of parallel plates are mounted inside the CRT tube.

2. Magnetic Deflection:
Here, two pairs of coils are used. One pair is mounted on the top and bottom of the CRT
tube, and the other pair on the two opposite sides. The magnetic field produced by both these
pairs is such that a force is generated on the electron beam in a direction which is
perpendicular to both the direction of magnetic field, and to the direction of flow of the beam.
One pair is mounted horizontally and the other vertically.

 Different kinds of phosphors are used in a CRT. The difference is based upon the time for
how long the phosphor continues to emit light after the CRT beam has been removed.
This property is referred to as persistence.
 The number of points displayed on a CRT is referred to as the resolution (e.g. 1024x768).

Raster-Scan
 The electron beam is swept across the screen one row at a time from top to bottom. As
it moves across each row, the beam intensity is turned on and off to create a pattern of
illuminated spots. This scanning process is called refreshing.
 Each complete scanning of a screen is normally called a frame. The refreshing rate, called
the frame rate, is normally 60 to 80 frames per second, also described as 60 Hz to 80 Hz.
 Picture definition is stored in a memory area called the frame buffer.
 This frame buffer stores the intensity values for all the screen points. Each screen point
is called a pixel (picture element or pel).
 On black and white systems, the frame buffer
storing the values of the pixels is called a bitmap. Each entry in the bitmap is 1 bit of data,
which determines whether the intensity of the pixel is on (1) or off (0).

Prepared by Anurag Patel Page 16 of 82


STUDY MATERIAL FOR MDU
B.C.A Vth SEM.
COMPUTER GRAPHICS

On color systems, the frame buffer storing the values of the pixels is called a
pixmap (though nowadays many graphics libraries name it a bitmap too). Each entry in the
pixmap occupies a number of bits to represent the color of the pixel. For a true color display,
the number of bits for each entry is 24 (8 bits per red/green/blue channel, each channel giving 2^8
= 256 levels of intensity value, i.e. 256 voltage settings for each of the red/green/blue electron
guns).
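
As a quick worked example of the storage this implies: a true color frame buffer for a 1024 x 768 display needs 1024 x 768 x 3 bytes = 2,359,296 bytes (about 2.25 MB), while a black and white bitmap of the same resolution needs only 1024 x 768 bits = 98,304 bytes (96 KB).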

Random-Scan (Vector Display) or stroke-writing or calligraphic displays:


The CRT's electron beam is directed only to the parts of the screen where a picture is to be
drawn. The picture definition is stored as a set of line-drawing commands in a refresh display
file or a refresh buffer in memory.

Random-scan systems generally have higher resolution than raster systems and can produce
smooth line drawings; however, they cannot display realistic shaded scenes.

Color CRT Monitors:


A color CRT monitor displays color pictures by using a combination of phosphors that emit
different colors of light. There are two popular approaches for producing color displays with a CRT:

1. Beam Penetration Method


2. Shadow-Mask Method

1. Beam Penetration Method:


The beam-penetration method has been used with random-scan monitors. In this
method, the CRT screen is coated with two layers of phosphor, red and green, and the displayed
color depends on how far the electron beam penetrates the phosphor layers. This method
produces only four colors: red, green, orange and yellow. A beam of slow electrons excites
only the outer red layer, so the screen shows red. A beam of high-speed electrons excites
the inner green layer, so the screen shows green.
Advantages:
1. Inexpensive

Disadvantages:
1. Only four colors are possible
2. Quality of pictures is not as good as with another method.

2. Shadow-Mask Method:
 Shadow Mask Method is commonly used in Raster-Scan System because they produce a
much wider range of colors than the beam-penetration method.
 It is used in the majority of color TV sets and monitors.

Construction:
A shadow mask CRT has 3 phosphor color dots at each pixel position.
 One phosphor dot emits: red light
 Another emits: green light
 Third emits: blue light

This type of CRT has 3 electron guns, one for each color dot and a shadow mask grid just
behind the phosphor coated screen.Shadow mask grid is pierced with small round holes in a
triangular pattern.

Working:
 The deflection system of the CRT operates on all 3 electron beams simultaneously; the 3
electron beams are deflected and focused as a group onto the shadow mask, which
contains a sequence of holes aligned with the phosphor-dot patterns.
 When the three beams pass through a hole in the shadow mask, they activate a
dot triangle, which appears as a small color spot on the screen.
 The phosphor dots in the triangles are arranged so that each electron beam can
activate only its corresponding color dot when it passes through the shadow mask.

Advantage:
1. Realistic image
2. Millions of different colors can be generated
3. Shadow scenes are possible

Disadvantage:
1. Relatively expensive compared with the monochrome CRT.
2. Relatively poor resolution
3. Convergence Problem

Direct View Storage Tubes:


DVST terminals also use the random scan approach to generate the image on the CRT
screen. The term "storage tube" refers to the ability of the screen to retain the image which has
been projected against it, thus avoiding the need to rewrite the image constantly.


Function of guns:
Two guns are used in DVST:
1. Primary guns: It is used to store the picture pattern.
2. Flood gun or Secondary gun: It is used to maintain picture display.

Advantage:
1. No refreshing is needed.
2. High Resolution
3. Cost is very less

Disadvantage:
1. It is not possible to erase a selected part of a picture.
2. It is not suitable for dynamic graphics applications.
3. If a part of the picture is to be modified, the whole picture has to be redrawn, which consumes time.

Flat Panel Display:


The flat-panel display refers to a class of video devices that have reduced volume,
weight and power requirements compared to a CRT.

Examples: small TV monitors, calculators, pocket video games, laptop computers, and
advertisement boards in elevators.

Emissive Display:
Emissive displays are devices that convert electrical energy into light. Examples are
the plasma panel, the thin-film electroluminescent display and LEDs (Light Emitting Diodes).

Non-Emissive Display:
Non-emissive displays use optical effects to convert sunlight or light from some
other source into graphics patterns. An example is the LCD (Liquid Crystal Display).

Plasma Panel Display:


Plasma panels are also called gas-discharge displays. A plasma panel consists of an array of small
lights that are fluorescent in nature.

The essential components of the plasma-panel display are:


1. Cathode: It consists of fine wires. It delivers negative voltage to the gas cells. The voltage is
applied along the negative axis.
2. Anode: It also consists of fine wires. It delivers positive voltage. The voltage is supplied
along the positive axis.
3. Fluorescent cells: These consist of small pockets of gas (neon); when voltage is applied
to this gas, it emits light.
4. Glass plates: These plates act as capacitors. When the voltage is applied, the cell will
glow continuously. The gas will glow when there is a significant voltage difference
between the horizontal and vertical wires. The voltage level is kept between 90 volts and 120
volts. A plasma panel does not require refreshing. Erasing is done by reducing the voltage
to 90 volts.


Each plasma cell has two states, so a cell is said to be bistable. A displayable point in the plasma panel
is made by the crossing of a horizontal and a vertical grid wire. The resolution of the plasma panel
can be up to 512 x 512 pixels.
Advantage:
1. High Resolution
2. Large screen size is also possible.
3. Less Volume
4. Less weight
5. Flicker Free Display
Disadvantage:
1. Poor Resolution
2. Wiring requirement anode and the cathode is complex.
3. Its addressing is also complex.

LED (Light Emitting Diode):


 In an LED display, a matrix of diodes is organized to form the pixel positions in the display, and
the picture definition is stored in a refresh buffer. Data is read from the refresh buffer and
converted to voltage levels that are applied to the diodes to produce the light pattern in
the display.

LCD (Liquid Crystal Display):
 Liquid crystal displays are devices that produce a
picture by passing polarized light from the surroundings or from an internal light source
through a liquid-crystal material that transmits the light.
 An LCD uses liquid-crystal material between two glass plates; the plates are at right
angles to each other, and the liquid is filled between them. One glass plate consists of rows of
conductors arranged in the vertical direction. The other glass plate consists of rows of
conductors arranged in the horizontal direction. A pixel position is determined by the
intersection of a vertical and a horizontal conductor. This position is an active part of the
screen.
 A liquid crystal display is temperature dependent, operating between zero and seventy degrees
Celsius. It is flat and requires very little power to operate.

Advantage:
1. Low power consumption.
2. Small Size
3. Low Cost

Disadvantage:
1. LCDs are temperature-dependent (0-70°C)
2. LCDs do not emit light; as a result, the image has very little contrast.
3. LCDs have no color capability.
4. The resolution is not as good as that of a CRT.


Differences between Raster Scan Display and Random Scan Display:

Electron Beam:
- Raster Scan System: The electron beam is swept across the screen, one row at a time, from top to bottom.
- Random Scan System: The electron beam is directed only to the parts of the screen where a picture is to be drawn.

Resolution:
- Raster Scan System: Its resolution is poor because the raster system, in contrast, produces zig-zag lines that are plotted as discrete point sets.
- Random Scan System: Its resolution is good because this system produces smooth line drawings, as the CRT beam directly follows the line path.

Picture Definition:
- Raster Scan System: Picture definition is stored as a set of intensity values for all screen points, called pixels, in a refresh buffer area.
- Random Scan System: Picture definition is stored as a set of line drawing instructions in a display file.

Realistic Display:
- Raster Scan System: The capability of this system to store intensity values for pixels makes it well suited for the realistic display of scenes containing shadow and color patterns.
- Random Scan System: These systems are designed for line drawing and cannot display realistic shaded scenes.

Drawing an Image:
- Raster Scan System: Screen points/pixels are used to draw an image.
- Random Scan System: Mathematical functions are used to draw an image.


UNIT - II
SCAN CONVERSION
Definition
It is the process of representing graphics objects as a collection of pixels. The graphics objects
are continuous, while the pixels used are discrete. Each pixel can be either in the on or off state.

The circuitry of the video display device of the computer is capable of converting binary
values (0, 1) into pixel-on and pixel-off information: 0 is represented by pixel off and 1 is
represented by pixel on. Using this ability, a graphics computer represents pictures as a set of
discrete dots.

Any graphics model can be reproduced with a dense matrix of dots or points. Most
people think of graphics objects as points, lines, circles and ellipses. For generating such graphical
objects, many algorithms have been developed.

Advantage of developing algorithms for scan conversion:


1. Algorithms can generate graphics objects at a faster rate.
2. Using algorithms memory can be used efficiently.
3. Algorithms can develop a higher level of graphical objects.

Examples of objects which can be scan converted


1. Point
2. Line
3. Sector
4. Arc
5. Ellipse
6. Rectangle
7. Polygon
8. Characters
9. Filled Regions

The process of converting is also called rasterization. The implementation of these algorithms


varies from one computer system to another. Some algorithms are
implemented in software, some are performed using hardware or firmware, and some are
performed using various combinations of hardware, firmware, and software.

DDA Line Algorithm

DDA stands for Digital Differential Analyzer. It is an incremental method of scan


converting a line. In this method a calculation is performed at each step, but by using the results of
previous steps. The Digital Differential Analyzer algorithm is the simplest line generation algorithm,
and it is explained step by step here. Suppose at step i the pixel is (xi, yi).

The equation of the line at step i is

yi = m·xi + b ................................... equation 1
The next value will be
yi+1 = m·xi+1 + b ............................... equation 2

The slope is m = ∆y/∆x, where
yi+1 - yi = ∆y .................................. equation 3
xi+1 - xi = ∆x .................................. equation 4
yi+1 = yi + ∆y
∆y = m·∆x
yi+1 = yi + m·∆x
∆x = ∆y/m
xi+1 = xi + ∆x
xi+1 = xi + ∆y/m
Case 1: When |m| < 1 (assume that x1 < x2)
x = x1, y = y1, set ∆x = 1
yi+1 = yi + m, x = x + 1
Until x = x2
Case 2: When |m| > 1 (assume that y1 < y2)
x = x1, y = y1, set ∆y = 1

xi+1 = xi + 1/m, y = y + 1

Until y = y2
Advantage:
1. It is a faster method than directly using the line equation.
2. This method does not require any expensive multiplication in its main loop.
3. It allows us to detect the change in the value of x and y, so plotting the same point twice
is not possible.
4. This method gives an overflow indication when a point is repositioned.
5. It is an easy method because each step involves just two additions.

Disadvantage:
1. It involves floating point additions, and rounding off is done. Accumulation of round-off
error causes accumulation of error.
2. Rounding-off operations and floating point operations consume a lot of time.
3. It is more suitable for generating a line using software, but it is less suited for
hardware implementation.
DDA Algorithm:
Step1: Start Algorithm
Step2: Declare x1, y1, x2, y2, dx, dy as integer variables and x, y, xinc, yinc as floating point variables.
Step3: Enter value of x1,y1,x2,y2.
Step4: Calculate dx = x2-x1
Step5: Calculate dy = y2-y1


Step6: If ABS (dx) > ABS (dy)

Then step = abs (dx)
Else step = abs (dy)
Step7: xinc = dx / step
yinc = dy / step
assign x = x1
assign y = y1
Step8: Set pixel (x, y)
Step9: x = x + xinc
y = y + yinc
Set pixels (Round (x), Round (y))
Step10: Repeat step 9 until x = x2
Step11: End Algorithm
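
A C sketch of the steps above is given below. setpixel(x, y) is assumed to be the low-level plotting procedure introduced in Unit I; the function name dda_line and the rounding with floorf(x + 0.5) are illustrative choices, not part of any standard.

#include <math.h>
#include <stdlib.h>

void setpixel(int x, int y);                          /* assumed plotting primitive */

void dda_line(int x1, int y1, int x2, int y2)
{
    int dx = x2 - x1, dy = y2 - y1;                   /* Steps 4 and 5 */
    int step = (abs(dx) > abs(dy)) ? abs(dx) : abs(dy);   /* Step 6 */
    if (step == 0) { setpixel(x1, y1); return; }      /* degenerate case: a single point */
    float xinc = (float)dx / step;                    /* Step 7 */
    float yinc = (float)dy / step;
    float x = x1, y = y1;
    for (int i = 0; i <= step; i++) {                 /* Steps 8-10 */
        setpixel((int)floorf(x + 0.5f), (int)floorf(y + 0.5f));  /* Round(x), Round(y) */
        x += xinc;
        y += yinc;
    }
}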
Example: If a line is drawn from (2, 3) to (6, 15) using DDA, how many points will be needed to
generate such a line?
Solution: P1 (2, 3), P2 (6, 15)
x1=2
y1=3
x2= 6
y2=15
dx = 6 - 2 = 4
dy = 15 - 3 = 12

m = dy/dx = 12/4 = 3

Since |m| > 1, step = abs(dy) = 12, so y increases by 1 at each step and the next value of x is taken as x = x + 1/3.
Stepping from (2, 3) until (6, 15) is reached, 13 points (including both endpoints) are needed to generate the line.


Bresenham's Line Algorithm


This algorithm is used for scan converting a line. It was developed by Bresenham. It is
an efficient method because it involves only integer addition and subtraction
operations. These operations can be performed very rapidly, so lines can be generated
quickly. In this method, the next pixel selected is the one which has the least distance from the true line.

The method works as follows:


Assume a pixel P1'(x1', y1'), then select subsequent pixels as we work our way to the right,
one pixel position at a time in the horizontal direction toward P2'(x2', y2').
Once a pixel is chosen at any step, we must decide which pixel to plot next.


The next pixel is


1. Either the one to its right (lower bound for the line), or
2. The one to its right and up (upper bound for the line). The line is best approximated by
those pixels that fall the least distance from the path between P1' and P2'. To choose the
next one, we compare the bottom pixel S and the top pixel T.

3. If S is chosen,
we have xi+1 = xi + 1 and yi+1 = yi.
If T is chosen,
we have xi+1 = xi + 1 and yi+1 = yi + 1.

The actual y coordinate of the line at x = xi + 1 is

y = m(xi + 1) + b

The distance from S to the actual line in the y direction is

s = y - yi
The distance from T to the actual line in the y direction is
t = (yi + 1) - y
Now consider the difference between these two distance values,
s - t.
When (s - t) < 0 ⟹ s < t,
the closest pixel is S.
When (s - t) ≥ 0 ⟹ s ≥ t,
the closest pixel is T.
This difference is
s - t = (y - yi) - [(yi + 1) - y]

= 2y - 2yi - 1
Bresenham's Line Algorithm:
Step1: Start Algorithm
Step2: Declare variable x1,x2,y1,y2,d,i1,i2,dx,dy
Step3: Enter value of x1,y1,x2,y2
Where x1,y1are coordinates of starting point
And x2,y2 are coordinates of Ending point
Step4: Calculate dx = x2-x1
Calculate dy = y2-y1
Calculate i1=2*dy


Calculate i2=2*(dy-dx)
Calculate d=i1-dx
Step5: Consider (x, y) as the starting point and xend as the maximum possible value of x.
If dx < 0
Then x = x2
y = y2
xend=x1
If dx > 0
Then x = x1
y = y1
xend=x2
Step6: Generate point at (x,y)coordinates.
Step7: Check if whole line is generated.
If x > = xend
Stop.
Step8: Calculate the co-ordinates of the next pixel
If d < 0
Then d = d + i1
If d ≥ 0
Then d = d + i2
and increment y = y + 1
Step9: Increment x = x + 1
Step10: Draw a point of latest (x, y) coordinates
Step11: Go to step 7
Step12: End of Algorithm
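
A C sketch of these steps follows. It assumes a slope between 0 and 1 (as in the worked example below) and uses the assumed setpixel(x, y) primitive; the function name bresenham_line is illustrative.

#include <stdlib.h>

void setpixel(int x, int y);                  /* assumed plotting primitive */

void bresenham_line(int x1, int y1, int x2, int y2)
{
    int dx = abs(x2 - x1), dy = abs(y2 - y1);
    int i1 = 2 * dy;                          /* Step 4 */
    int i2 = 2 * (dy - dx);
    int d  = i1 - dx;
    int x, y, xend;
    if (x1 > x2) { x = x2; y = y2; xend = x1; }   /* Step 5: start at the left end */
    else         { x = x1; y = y1; xend = x2; }
    setpixel(x, y);                           /* Step 6 */
    while (x < xend) {                        /* Steps 7-11 */
        x = x + 1;
        if (d < 0) {
            d = d + i1;                       /* keep the same y */
        } else {
            d = d + i2;                       /* move up one scan line */
            y = y + 1;
        }
        setpixel(x, y);
    }
}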

Example: Starting and Ending position of the line are (1, 1) and (8, 5). Find intermediate points.

Solution: x1=1
y1=1
x2=8
y2=5
dx= x2-x1=8-1=7
dy=y2-y1=5-1=4
I1=2* ∆y=2*4=8
I2=2*(∆y-∆x)=2*(4-7)=-6
d = I1-∆x=8-7=1


x y d=d+I1 or I2

1 1 d+I2=1+(-6)=-5

2 2 d+I1=-5+8=3

3 2 d+I2=3+(-6)=-3

4 3 d+I1=-3+8=5

5 3 d+I2=5+(-6)=-1

6 4 d+I1=-1+8=7

7 4 d+I2=7+(-6)=1

8 5

Advantage:
1. It involves only integer arithmetic, so it is simple.
2. It avoids the generation of duplicate points.
3. It can be implemented using hardware because it does not use multiplication and
division.
4. It is faster as compared to DDA (Digital Differential Analyzer) because it does not involve
floating point calculations like the DDA algorithm.

Disadvantage:
 This algorithm is meant for basic line drawing only; anti-aliasing is not a part of
Bresenham's line algorithm. So to draw smooth lines, you would want to look into a
different algorithm.


DDA Algorithm vs. Bresenham's Line Algorithm:

1. The DDA algorithm uses floating point, i.e., real arithmetic; Bresenham's line algorithm uses fixed point, i.e., integer arithmetic.

2. The DDA algorithm uses multiplication and division in its operation; Bresenham's line algorithm uses only subtraction and addition in its operation.

3. The DDA algorithm is slower than Bresenham's line algorithm in line drawing because it uses real arithmetic (floating point operations); Bresenham's algorithm is faster because it involves only integer addition and subtraction in its calculation.

4. The DDA algorithm is not as accurate and efficient as Bresenham's line algorithm; Bresenham's line algorithm is more accurate and efficient than the DDA algorithm.

5. The DDA algorithm can draw circles and curves, but not as accurately as Bresenham's algorithm; Bresenham's algorithm can draw circles and curves with more accuracy than the DDA algorithm.

Polynomial Method:

The first method defines a circle with the second-order polynomial equation, as shown in the figure:

y² = r² − x²
where x = the x coordinate
y = the y coordinate
r = the circle radius

With this method, each x coordinate in the sector from 90° to 45° is found by stepping x from 0

to r/√2, and each y coordinate is found by evaluating y = √(r² − x²) for each step of x.


Algorithm:
Step1: Set the initial variables
r = circle radius
(h, k) = coordinates of the circle center
x = 0
i = step size

xend = r/√2

Step2: Test to determine whether the entire circle has been scan-converted.
If x > xend then stop.

Step3: Compute y = √(r² − x²)

Step4: Plot the eight points found by symmetry concerning the center (h, k) at the current (x, y)
coordinates.
Plot (x + h, y +k) Plot (-x + h, -y + k)
Plot (y + h, x + k) Plot (-y + h, -x + k)
Plot (-y + h, x + k) Plot (y + h, -x + k)
Plot (-x + h, y + k) Plot (x + h, -y + k)

Step5: Increment x = x + i

Step6: Go to Step 2.
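
A C sketch of this direct (polynomial) method is shown below; setpixel(x, y) is the assumed plotting primitive and the function name is illustrative. Note the square root evaluated at every step, which is what makes this method slower than the incremental algorithms.

#include <math.h>

void setpixel(int x, int y);                    /* assumed plotting primitive */

void circle_polynomial(int h, int k, int r)
{
    double xend = r / sqrt(2.0);                /* end of the 90°-45° octant */
    for (double x = 0.0; x <= xend; x += 1.0) { /* step size i = 1 pixel */
        double y = sqrt((double)r * r - x * x); /* y = sqrt(r^2 - x^2) */
        int xi = (int)(x + 0.5), yi = (int)(y + 0.5);
        /* plot the eight points found by symmetry about the centre (h, k) */
        setpixel( xi + h,  yi + k);  setpixel(-xi + h, -yi + k);
        setpixel( yi + h,  xi + k);  setpixel(-yi + h, -xi + k);
        setpixel(-yi + h,  xi + k);  setpixel( yi + h, -xi + k);
        setpixel(-xi + h,  yi + k);  setpixel( xi + h, -yi + k);
    }
}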

Circle Generation Algorithm


Drawing a circle on the screen is a little more complex than drawing a line. There are two
popular algorithms for generating a circle: Bresenham's algorithm and the midpoint circle
algorithm. These algorithms are based on the idea of determining the subsequent points
required to draw the circle. Let us discuss the algorithms in detail.

The equation of a circle is x² + y² = r², where r is the radius.


Bresenham’s Algorithm
We cannot display a continuous arc on a raster display. Instead, we have to choose
the nearest pixel positions to complete the arc.

From the following illustration, you can see that we have put a pixel at the (X, Y) location
and now need to decide where to put the next pixel: at N (X+1, Y) or at S (X+1, Y−1).

This can be decided by the decision parameter d.


 If d <= 0, then N (X+1, Y) is to be chosen as the next pixel.
 If d > 0, then S (X+1, Y−1) is to be chosen as the next pixel.

Algorithm:
Step 1 − Get the coordinates of the center of the circle and the radius, and store them in x,
y and R respectively. Set P = 0 and Q = R.
Step 2 − Set the decision parameter D = 3 − 2R.
Step 3 − Repeat through Step 8 while P ≤ Q.
Step 4 − Call Draw Circle (X, Y, P, Q).
Step 5 − Increment the value of P.
Step 6 − If D < 0 then D = D + 4P + 6.
Step 7 − Else set Q = Q − 1, D = D + 4(P − Q) + 10.
Step 8 − Call Draw Circle (X, Y, P, Q).


Draw Circle Method(X, Y, P, Q).


Call Putpixel (X + P, Y + Q).
Call Putpixel (X - P, Y + Q).
Call Putpixel (X + P, Y - Q).
Call Putpixel (X - P, Y - Q).
Call Putpixel (X + Q, Y + P).
Call Putpixel (X - Q, Y + P).
Call Putpixel (X + Q, Y - P).
Call Putpixel (X - Q, Y - P).
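
A C sketch of Steps 1-8 and the Draw Circle method is given below; putpixel(x, y) is the assumed plotting primitive, the function names are illustrative, and the two Draw Circle calls per iteration are folded into a single call since they plot the same points.

void putpixel(int x, int y);                 /* assumed plotting primitive */

void draw_circle_points(int x, int y, int p, int q)
{
    putpixel(x + p, y + q);  putpixel(x - p, y + q);
    putpixel(x + p, y - q);  putpixel(x - p, y - q);
    putpixel(x + q, y + p);  putpixel(x - q, y + p);
    putpixel(x + q, y - p);  putpixel(x - q, y - p);
}

void bresenham_circle(int x, int y, int r)
{
    int p = 0, q = r;                        /* Step 1 */
    int d = 3 - 2 * r;                       /* Step 2 */
    while (p <= q) {                         /* Step 3 */
        draw_circle_points(x, y, p, q);      /* Steps 4 and 8 */
        p = p + 1;                           /* Step 5 */
        if (d < 0) {
            d = d + 4 * p + 6;               /* Step 6 */
        } else {
            q = q - 1;                       /* Step 7 */
            d = d + 4 * (p - q) + 10;
        }
    }
}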

Flood Fill Algorithm


Sometimes we come across an object where we want to fill the area and its boundary
with different colors. We can paint such objects with a specified interior color instead of
searching for a particular boundary color, as in the boundary fill algorithm.

Instead of relying on the boundary of the object, it relies on the fill color. In other words,
it replaces the interior color of the object with the fill color. When no more pixels of the
original interior color exist, the algorithm is completed.

Once again, this algorithm relies on the Four-connect or Eight-connect method of filling
in the pixels. But instead of looking for the boundary color, it is looking for all adjacent pixels
that are a part of the interior.
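
A minimal C sketch of the four-connected flood fill idea is shown below. It assumes getpixel(x, y) returns a pixel's color and setpixel_color(x, y, color) writes one; both names, like flood_fill4 itself, are illustrative. The recursion can get very deep for large regions, so practical implementations often use an explicit stack instead.

int  getpixel(int x, int y);                     /* assumed: read a pixel's color  */
void setpixel_color(int x, int y, int color);    /* assumed: write a pixel's color */

/* Replace every pixel of old_color connected to (x, y) with fill_color.
   Assumes fill_color != old_color, otherwise the recursion never terminates. */
void flood_fill4(int x, int y, int fill_color, int old_color)
{
    if (getpixel(x, y) == old_color) {
        setpixel_color(x, y, fill_color);
        flood_fill4(x + 1, y, fill_color, old_color);   /* right */
        flood_fill4(x - 1, y, fill_color, old_color);   /* left  */
        flood_fill4(x, y + 1, fill_color, old_color);   /* up    */
        flood_fill4(x, y - 1, fill_color, old_color);   /* down  */
    }
}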

Boundary Fill Algorithm


The boundary fill algorithm works as its name suggests. This algorithm picks a point inside an
object and starts to fill until it hits the boundary of the object. The color of the boundary and
the color that we fill with should be different for this algorithm to work.
In this algorithm, we assume that the color of the boundary is the same for the entire object. The
boundary fill algorithm can be implemented with 4-connected pixels or 8-connected pixels.

4-Connected Polygon
In this technique 4-connected pixels are used, as shown in the figure. We fill the
pixels above, below, to the right, and to the left of the current pixel, and this process
continues until we find a boundary with a different color.


Algorithm
Step 1 − Initialize the values of the seed point (seedx, seedy), fcol and dcol.
Step 2 − Define the boundary values of the polygon.
Step 3 − Check if the current seed point has the default color; if so, repeat steps 4 and
5 until the boundary pixels are reached. If getpixel(x, y) = dcol then repeat steps 4 and 5.
Step 4 − Change the default color to the fill color at the seed point.
SetPixel(seedx, seedy, fcol)
Step 5 − Recursively follow the procedure with the four neighbourhood points.
FloodFill (seedx – 1, seedy, fcol, dcol)
FloodFill (seedx + 1, seedy, fcol, dcol)
FloodFill (seedx, seedy - 1, fcol, dcol)
FloodFill (seedx, seedy + 1, fcol, dcol)

Step 6 − Exit
There is a problem with this technique. Consider the case as shown below where we
tried to fill the entire region. Here, the image is filled only partially. In such cases, 4-connected
pixels technique cannot be used.

8- Connected Polygon
In this technique 8-connected pixels are used, as shown in the figure. We fill the pixels
above, below, and to the right and left of the current pixel, as we were doing in the 4-connected
technique.
In addition to this, we also fill the pixels on the diagonals, so that the entire area around the
current pixel is covered. This process continues until we find a boundary with a different color.


Algorithm
Step 1 − Initialize the values of the seed point (seedx, seedy), fcol and dcol.
Step 2 − Define the boundary values of the polygon.
Step 3 − Check if the current seed point has the default color; if so, repeat steps 4 and 5 until the
boundary pixels are reached. If getpixel(x, y) = dcol then repeat steps 4 and 5.
Step 4 − Change the default color to the fill color at the seed point.
SetPixel(seedx, seedy, fcol)
Step 5 − Recursively follow the procedure with the eight neighbourhood points.

FloodFill (seedx – 1, seedy, fcol, dcol)


FloodFill (seedx + 1, seedy, fcol, dcol)
FloodFill (seedx, seedy - 1, fcol, dcol)
FloodFill (seedx, seedy + 1, fcol, dcol)
FloodFill (seedx – 1, seedy + 1, fcol, dcol)
FloodFill (seedx + 1, seedy + 1, fcol, dcol)
FloodFill (seedx + 1, seedy - 1, fcol, dcol)
FloodFill (seedx – 1, seedy - 1, fcol, dcol)

Step 6 − Exit
The 4-connected pixel technique failed to fill the area as marked in the following figure
which won’t happen with the 8-connected technique.
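
For comparison with the flood fill sketch given earlier, here is a rough C sketch of a recursive 8-connected boundary fill, which stops at the boundary color instead of looking for the interior color. The helper names getpixel/setpixel_color and boundary_fill8 are again assumptions for illustration.

int  getpixel(int x, int y);                     /* assumed: read a pixel's color  */
void setpixel_color(int x, int y, int color);    /* assumed: write a pixel's color */

void boundary_fill8(int x, int y, int fill_color, int boundary_color)
{
    int current = getpixel(x, y);
    if (current != boundary_color && current != fill_color) {
        setpixel_color(x, y, fill_color);
        /* the four edge neighbours */
        boundary_fill8(x + 1, y,     fill_color, boundary_color);
        boundary_fill8(x - 1, y,     fill_color, boundary_color);
        boundary_fill8(x,     y + 1, fill_color, boundary_color);
        boundary_fill8(x,     y - 1, fill_color, boundary_color);
        /* the four diagonal neighbours that make the fill 8-connected */
        boundary_fill8(x + 1, y + 1, fill_color, boundary_color);
        boundary_fill8(x - 1, y + 1, fill_color, boundary_color);
        boundary_fill8(x + 1, y - 1, fill_color, boundary_color);
        boundary_fill8(x - 1, y - 1, fill_color, boundary_color);
    }
}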

Inside-outside Test
This method is also known as the counting number method. While filling an object, we
often need to identify whether a particular point is inside the object or outside it. There are two
rules by which we can identify whether a particular point is inside an object or outside:
 Odd-even rule
 Nonzero winding number rule

Odd-Even Rule
In this technique, we count the edge crossings along a line from the point (x, y) to
infinity. If the number of crossings is odd, then the point (x, y) is an interior point. If the
number of crossings is even, then the point (x, y) is an exterior point. Here is an example to give
you a clear idea:


In the figure referred to above, the number of edge crossings along the ray drawn to the left of
the point (x, y) is 5, which is odd. Hence, the point is considered to be inside the object.
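
The odd-even rule translates directly into code. The following C sketch (the classic even-odd crossing test; names and the double-based vertex representation are illustrative) counts how many polygon edges a horizontal ray from the test point towards +x crosses:

/* Returns 1 if (px, py) is inside the polygon given by the n vertices
   (vx[i], vy[i]), 0 otherwise, using the odd-even rule. */
int inside_odd_even(double px, double py, const double vx[], const double vy[], int n)
{
    int crossings = 0;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        /* does the edge from vertex j to vertex i straddle the line y = py? */
        if ((vy[i] > py) != (vy[j] > py)) {
            /* x coordinate where that edge meets the horizontal line */
            double x_hit = vx[j] + (py - vy[j]) * (vx[i] - vx[j]) / (vy[i] - vy[j]);
            if (x_hit > px)                  /* crossing lies to the right of the point */
                crossings++;
        }
    }
    return crossings % 2;                    /* odd number of crossings => interior */
}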

Winding Number Rule


This method is also used with simple polygons to test whether a given point is interior or
not. It can be easily understood with the help of a pin and a rubber band. Fix the pin on one
of the edges of the polygon, tie up the rubber band to it and then stretch the rubber band
along the edges of the polygon.

When all the edges of the polygon are covered by the rubber band, check the pin
which has been fixed at the point to be tested. If we find at least one wind of the band around the point,
we consider it within the polygon; otherwise we can say that the point is not inside the polygon.

In an alternative method, give directions to all the edges of the polygon. Draw a
scan line from the point to be tested towards the leftmost X direction.
 Give the value 1 to all the edges which are going in the upward direction and give all the
others -1 as direction values.
 Check the direction values of the edges which the scan line crosses and sum them up.
 If the total sum of these direction values is non-zero, then the point to be tested is
an interior point; otherwise it is an exterior point.
 In the figure above, if we sum up the direction values of the edges which the scan line
crosses, the total is 1 – 1 + 1 = 1, which is non-zero. So the point is said to be an interior
point.


UNIT – III
TWO DIMENSIONAL TRANSFORMATIONS

Two Dimensional Geometric Transformations


Changes in orientations, size and shape are accomplished with geometric
transformations that alter the coordinate description of objects.

Basic Transformations
Translation
- T(tx, ty)
- Translation distances
Scaling
- S(sx, sy)
- Scale factors
Rotation
- R(θ)
- Rotation angle

Translation
A translation is applied to an object by repositioning it along a straight line path
from one coordinate location to another, adding the translation distances tx, ty to the original
coordinate position (x, y) to move the point to a new position (x', y'):
x' = x + tx, y' = y + ty

The translation distance pair (tx, ty) is called the translation vector or shift
vector. The translation equations can be expressed as a single matrix equation by using column
vectors to represent the coordinate positions and the translation vector:
P' = P + T, where P = (x, y) and T = (tx, ty) are written as column vectors.

Moving a polygon from one position to another position with the translation vector
(-5.5, 3.75)

Rotations:
A two-dimensional rotation is applied to an object by repositioning it along a
circular path in the xy plane. To generate a rotation, we specify a rotation angle θ and the
position (xr, yr) of the rotation point (pivot point) about which the object is to be rotated.

Positive values of the rotation angle define counter-clockwise rotation about the pivot
point, and negative values rotate objects in the clockwise direction. The transformation
can also be described as a rotation about a rotation axis that is perpendicular to the xy plane and
passes through the pivot point.


Rotation of a point from position (x, y) to position (x’, y’) through angle θ relative to
coordinate origin

The transformation equations for rotation of a point position P when the pivot point is at the
coordinate origin are derived as follows. In the figure, r is the constant distance of the point from the
origin, Ф is the original angular position of the point from the horizontal, and θ is the rotation angle. The
transformed coordinates in terms of the angles θ and Ф are
x' = r cos(θ+Ф) = r cosθ cosФ – r sinθ sinФ
y' = r sin(θ+Ф) = r sinθ cosФ + r cosθ sinФ

The original coordinates of the point in polar coordinates are x = r cosФ, y = r sinФ.


The transformation equations for rotating a point at position (x, y) through an angle θ
about the origin are
x' = x cosθ – y sinθ
y' = x sinθ + y cosθ

Rotation Equation

Rotation matrix:
R = | cosθ  −sinθ |
    | sinθ   cosθ |
so that P' = R·P

Note: Positive values for the rotation angle define counterclockwise rotations about the
rotation point and negative values rotate objects in the clockwise direction.

Scaling
A scaling transformation alters the size of an object. This operation can be carried
out for polygons by multiplying the coordinate values (x, y) of each vertex by scaling factors
Sx and Sy to produce the transformed coordinates (x', y'). The scaling factor Sx scales the object in the x
direction while Sy scales it in the y direction. The transformation equations in matrix form are

x' = x·Sx, y' = y·Sy

| x' |   | Sx  0  |   | x |
| y' | = | 0   Sy | · | y |      or      P' = S·P

where S is the 2 by 2 scaling matrix.


Turning a square (a) into a rectangle (b) with scaling factors sx = 2 and sy = 1.

Any positive numeric values are valid for the scaling factors sx and sy. Values less than 1 reduce the
size of the objects and values greater than 1 produce an enlarged object.

There are two types of scaling. They are:


 Uniform scaling
 Non-uniform scaling

To get uniform scaling it is necessary to assign the same value to sx and sy. Unequal
values for sx and sy result in non-uniform scaling.
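
The three basic transformations translate directly into code. The following C sketch (function names are illustrative) applies each of the equations above to a single point held in (x, y); rotation and scaling here are relative to the origin.

#include <math.h>

void translate_point(double *x, double *y, double tx, double ty)
{
    *x = *x + tx;                       /* x' = x + tx */
    *y = *y + ty;                       /* y' = y + ty */
}

void rotate_point(double *x, double *y, double theta)   /* theta in radians */
{
    double xr = *x * cos(theta) - *y * sin(theta);      /* x' = x cosθ − y sinθ */
    double yr = *x * sin(theta) + *y * cos(theta);      /* y' = x sinθ + y cosθ */
    *x = xr;
    *y = yr;
}

void scale_point(double *x, double *y, double sx, double sy)
{
    *x = *x * sx;                       /* x' = x·Sx */
    *y = *y * sy;                       /* y' = y·Sy */
}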

Matrix Representation and Homogeneous Coordinates


Many graphics applications involve sequences of geometric transformations. An
animation, for example, might require an object to be translated and rotated at each
increment of the motion. In order to combine a sequence of transformations we have to
eliminate the matrix addition. To achieve this we represent each matrix as 3x3 instead of 2x2,
introducing an additional dummy coordinate h. Points are then specified by three numbers instead
of two. This coordinate system is called the homogeneous coordinate system, and it allows
expressing the transformation equations as matrix multiplications. A Cartesian coordinate position
(x, y) is represented as the homogeneous coordinate triple (x, y, h).
- Represent coordinates as (x, y, h)
- Actual coordinates drawn will be (x/h, y/h)

For Translation, the homogeneous transformation matrix T(tx, ty) is
1 0 tx
0 1 ty
0 0 1

For Scaling, the homogeneous transformation matrix S(Sx, Sy) is
Sx 0 0
0 Sy 0
0 0 1


For Rotation, the homogeneous transformation matrix R(θ) is
cosθ −sinθ 0
sinθ cosθ 0
0 0 1
Composite Transformations

A composite transformation is a sequence of transformations, one followed by the other. We can set up a matrix for any sequence of transformations as a composite transformation matrix by calculating the matrix product of the individual transformations.

Translation
If two successive translation vectors (tx1, ty1) and (tx2, ty2) are applied to a coordinate position P, the final transformed location P’ is calculated as

P’ = T(tx2, ty2). {T(tx1, ty1).P}


= {T(tx2, ty2).T(tx1,ty1)}.P

Where P and P’ are represented as homogeneous-coordinate column vectors.

which demonstrates that two successive translations are additive:
T(tx2, ty2).T(tx1, ty1) = T(tx1 + tx2, ty1 + ty2)

Rotations
Two successive rotations applied to point P produce the transformed position

P’ = R(θ2).{R(θ1).P} = {R(θ2).R(θ1)}.P

By multiplying the two rotation matrices, we can verify that two successive rotations are additive:
R(θ2).R(θ1) = R(θ1 + θ2)

So the final rotated coordinates can be calculated with the composite rotation matrix as
P’ = R(θ1 + θ2).P


Scaling
Concatenating transformation matrices for two successive scaling operations produces the following composite scaling matrix:
S(sx2, sy2).S(sx1, sy1) = S(sx1.sx2, sy1.sy2)
which shows that successive scaling operations are multiplicative.

General Pivot-Point Rotation


1. Translate the object so that the pivot-point position is moved to the coordinate origin
2. Rotate the object about the coordinate origin
3. Translate the object so that the pivot point is returned to its original position

The composite transformation matrix for this sequence is obtained by concatenating the translation, rotation and inverse-translation matrices, which can also be expressed as T(xr, yr).R(θ).T(-xr, -yr) = R(xr, yr, θ)
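The concatenation above can be written directly as matrix products. The following Python sketch (assuming NumPy is available; the helper names translation, rotation and pivot_rotation are illustrative, not part of any standard API) builds the three homogeneous matrices and multiplies them in the order T(xr, yr).R(θ).T(-xr, -yr):

import numpy as np

def translation(tx, ty):
    return np.array([[1, 0, tx],
                     [0, 1, ty],
                     [0, 0, 1]], dtype=float)

def rotation(theta_deg):
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0,          0,         1]])

def pivot_rotation(xr, yr, theta_deg):
    # R(xr, yr, theta) = T(xr, yr) . R(theta) . T(-xr, -yr)
    return translation(xr, yr) @ rotation(theta_deg) @ translation(-xr, -yr)

# Rotate the point (2, 1) by 90 degrees about the pivot (1, 1):
p = np.array([2, 1, 1])                 # homogeneous column vector with h = 1
print(pivot_rotation(1, 1, 90) @ p)     # approximately [1, 2, 1]

The same structure gives general fixed-point scaling if the middle rotation matrix is replaced by a scaling matrix.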

General fixed point Scaling


1. Translate the object so that the fixed point coincides with the coordinate origin
2. Scale the object with respect to the coordinate origin
3. Use the inverse translation of step 1 to return the object to its original position


Concatenating the matrices for these three operations produces the required scaling matrix, which can also be expressed as T(xf, yf).S(sx, sy).T(-xf, -yf) = S(xf, yf, sx, sy)

Note: Transformations can be combined by matrix multiplication

Other Transformations
1. Reflection
2. Shear

Reflection:
A reflection is a transformation that produces a mirror image of an object. The mirror image for a two-dimensional reflection is generated relative to an axis of reflection by rotating the object 180° about the reflection axis. We can choose an axis of reflection in the xy plane, an axis perpendicular to the xy plane, or the coordinate origin.

Reflection of an object about the x axis


Reflection about the x axis is accomplished with the transformation matrix
1 0 0
0 −1 0
0 0 1


Reflection of an object about the y axis


Reflection about the y axis is accomplished with the transformation matrix
−1 0 0
0 1 0
0 0 1

Reflection of an object about the coordinate origin


Reflection about the origin is accomplished with the transformation matrix
−1 0 0
0 −1 0
0 0 1


Reflection axis as the diagonal line y = x


To obtain the transformation matrix for reflection about the diagonal y = x, the transformation sequence is

1. Clockwise rotation by 45°
2. Reflection about the x axis
3. Counterclockwise rotation by 45°

Reflection about the diagonal line y = x is accomplished with the transformation matrix
0 1 0
1 0 0
0 0 1
Reflection axis as the diagonal line y = -x


To obtain the transformation matrix for reflection about the diagonal y = -x, the transformation sequence is
1. Clockwise rotation by 45°
2. Reflection about the y axis
3. Counterclockwise rotation by 45°

Reflection about the diagonal line y = -x is accomplished with the transformation matrix
0 −1 0
−1 0 0
0 0 1
Shear
A transformation that slants the shape of an object is called a shear transformation. Two common shearing transformations are used: one shifts x coordinate values and the other shifts y coordinate values. However, in both cases only one coordinate (x or y) changes its values while the other preserves its values.

X - Shear
The x shear preserves the y coordinates, but changes the x values which cause
vertical lines to tilt right or left as shown in figure
The transformation matrix for x-shear is
1 shx 0
0 1 0
0 0 1
which transforms the coordinates as x’ = x + shx . y, y’ = y


Y - Shear
The y shear preserves the x coordinates, but changes the y values, which causes horizontal lines to slope up or down. The transformation matrix for y-shear is
1 0 0
shy 1 0
0 0 1


which transforms the coordinates as
x’ = x
y’ = y + shy . x
XY - Shear
The transformation matrix for xy-shear is
1 shx 0
shy 1 0
0 0 1
which transforms the coordinates as
x’ = x + shx . y
y’ = y + shy . x

Shearing Relative to other reference line


We can apply x shear and y shear transformations relative to other reference lines.
In x shear transformations we can use y reference line and in y shear we can use x
reference line.

X - Shear with y reference line


We can generate x-direction shears relative to other reference lines with the transformation matrix
1 shx −shx . yref
0 1 0
0 0 1
which transforms the coordinates as
x’ = x + shx (y − yref)
y’ = y

Example Shx = ½ and Y ref = -1

Y - Shear with x reference line


We can generate y-direction shears relative to other reference lines with the transformation matrix
1 0 0
shy 1 −shy . xref
0 0 1
which transforms the coordinates as


x’ = x
y’ = y + shy (x − xref)

Example: shy = ½ and xref = -1

3D Transformation:
In Computer graphics, Transformation is a process of modifying and re-positioning the
existing graphics.

 3D Transformations take place in a three dimensional plane.


 3D Transformations are important and a bit more complex than 2D Transformations.
 Transformations are helpful in changing the position, size, orientation, shape etc of the
object.

Transformation Techniques:
In computer graphics, various transformation techniques are

1. Translation
2. Rotation
3. Scaling
4. Reflection
5. Shear


We will now discuss 3D Translation in Computer Graphics.


3D Translation in Computer Graphics:
In Computer graphics, 3D Translation is a process of moving an object from one
position to another in a three dimensional plane.

Consider a point object O has to be moved from one position to another in a 3D plane.
Let
 Initial coordinates of the object O = (Xold, Yold, Zold)
 New coordinates of the object O after translation = (Xnew, Ynew, Znew)
 Translation vector or Shift vector = (Tx, Ty, Tz)

Given a Translation vector (Tx, Ty, Tz)-


 Tx defines the distance the Xold coordinate has to be moved.
 Ty defines the distance the Yold coordinate has to be moved.
 Tz defines the distance the Zold coordinate has to be moved.

This translation is achieved by adding the translation coordinates to the old coordinates of the
object as-
 Xnew = Xold + Tx (This denotes translation towards X axis)
 Ynew = Yold + Ty (This denotes translation towards Y axis)
 Znew = Zold + Tz (This denotes translation towards Z axis)

In matrix form, using homogeneous coordinates, the above translation equations may be represented with the 4 x 4 translation matrix
1 0 0 Tx
0 1 0 Ty
0 0 1 Tz
0 0 0 1


3D Rotation in Computer Graphics:


In Computer graphics, 3D Rotation is a process of rotating an object with respect to an
angle in a three dimensional plane.
Consider a point object O has to be rotated from one angle to another in a 3D plane.

Let-
 Initial coordinates of the object O = (Xold, Yold, Zold)
 Initial angle of the object O with respect to origin = Φ
 Rotation angle = θ
 New co-ordinates of the object O after rotation = (Xnew, Ynew, Znew)

In 3 dimensions, there are 3 possible types of rotation:


 X-axis Rotation
 Y-axis Rotation
 Z-axis Rotation

For X-Axis Rotation:


This rotation is achieved by using the following rotation equations-
 Xnew = Xold
 Ynew = Yold x cosθ – Zold x sinθ
 Znew = Yold x sinθ + Zold x cosθ

In Matrix form, the above rotation equations may be represented as,

For Y-Axis Rotation:


This rotation is achieved by using the following rotation equations-
 Xnew = Zold x sinθ + Xold x cosθ


 Ynew = Yold
 Znew = Zold x cosθ – Xold x sinθ

In Matrix form, the above rotation equations may be represented as-

For Z-Axis Rotation:


This rotation is achieved by using the following rotation equations-
 Xnew = Xold x cosθ – Yold x sinθ
 Ynew = Xold x sinθ + Yold x cosθ
 Znew = Zold

In Matrix form, the above rotation equations may be represented as-
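The three rotation matrices referred to above can be written down directly in code. The NumPy sketch below (function names are illustrative) builds the 3 x 3 rotation matrices for the X, Y and Z axes, matching the equations given in this section:

import numpy as np

def rot_x(theta_deg):
    t = np.radians(theta_deg)
    return np.array([[1, 0,          0        ],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def rot_y(theta_deg):
    t = np.radians(theta_deg)
    return np.array([[ np.cos(t), 0, np.sin(t)],
                     [ 0,         1, 0        ],
                     [-np.sin(t), 0, np.cos(t)]])

def rot_z(theta_deg):
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), -np.sin(t), 0],
                     [np.sin(t),  np.cos(t), 0],
                     [0,          0,         1]])

# Rotating the point (1, 0, 0) by 90 degrees about the Z axis gives approximately (0, 1, 0).
print(rot_z(90) @ np.array([1.0, 0.0, 0.0]))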

3D Scaling in Computer Graphics:


In computer graphics, scaling is a process of modifying or altering the size of objects.
 Scaling may be used to increase or reduce the size of object.
 Scaling subjects the coordinate points of the original object to change.
 Scaling factor determines whether the object size is to be increased or reduced.
 If scaling factor > 1, then the object size is increased.
 If scaling factor < 1, then the object size is reduced.

Consider a point object O has to be scaled in a 3D plane. Let:


 Initial coordinates of the object O = (Xold, Yold,Zold)
 Scaling factor for X-axis = Sx
 Scaling factor for Y-axis = Sy
 Scaling factor for Z-axis = Sz


 New coordinates of the object O after scaling = (Xnew, Ynew, Znew)

This scaling is achieved by using the following scaling equations-


 Xnew = Xold x Sx
 Ynew = Yold x Sy
 Znew = Zold x Sz

In Matrix form, the above scaling equations may be represented as-

3D Reflection in Computer Graphics-


 Reflection is a kind of rotation where the angle of rotation is 180 degree.
 The reflected object is always formed on the other side of mirror.
 The size of reflected object is same as the size of original object.

Consider a point object O has to be reflected in a 3D plane.


Let,
 Initial coordinates of the object O = (Xold, Yold, Zold)
 New coordinates of the reflected object O after reflection = (Xnew, Ynew,Znew)

In 3 dimensions, there are 3 possible types of reflection:

 Reflection relative to XY plane


 Reflection relative to YZ plane
 Reflection relative to XZ plane

Reflection Relative to XY Plane:


This reflection is achieved by using the following reflection equations:
 Xnew = Xold
 Ynew = Yold
 Znew = -Zold


In Matrix form, the above reflection equations may be represented as-

Reflection Relative to YZ Plane:


This reflection is achieved by using the following reflection equations-
 Xnew = -Xold
 Ynew = Yold
 Znew = Zold

In Matrix form, the above reflection equations may be represented as-

Reflection Relative to XZ Plane:


This reflection is achieved by using the following reflection equations-
 Xnew = Xold
 Ynew = -Yold
 Znew = Zold

In Matrix form, the above reflection equations may be represented as:


3D Shearing in Computer Graphics:


In Computer graphics,
3D Shearing is an ideal technique to change the shape of an existing object in a three
dimensional plane

In a three dimensional plane, the object size can be changed along X direction, Y
direction as well as Z direction.So, there are three versions of shearing:

1. Shearing in X direction
2. Shearing in Y direction
3. Shearing in Z direction

Consider a point object O has to be sheared in a 3D plane. Let:


 Initial coordinates of the object O = (Xold, Yold, Zold)
 Shearing parameter towards X direction = Shx
 Shearing parameter towards Y direction = Shy
 Shearing parameter towards Z direction = Shz
 New coordinates of the object O after shearing = (Xnew, Ynew, Znew)

Shearing in X Axis-
Shearing in X axis is achieved by using the following shearing equations-
 Xnew = Xold
 Ynew = Yold + Shy x Xold
 Znew = Zold + Shz x Xold

In Matrix form, the above shearing equations may be represented as:


Shearing in Y Axis-
Shearing in Y axis is achieved by using the following shearing equations:
 Xnew = Xold + Shx x Yold
 Ynew = Yold
 Znew = Zold + Shz x Yold

In Matrix form, the above shearing equations may be represented as-

Shearing in Z Axis-
Shearing in Z axis is achieved by using the following shearing equations-
 Xnew = Xold + Shx x Zold
 Ynew = Yold + Shy x Zold
 Znew = Zold

In Matrix form, the above shearing equations may be represented as-


UNIT – IV
CLIPPING
Viewing and clipping
The process of selecting and viewing the picture with different views is called windowing, and the process which divides each element of the picture into its visible and invisible portions, allowing the invisible portion to be discarded, is called clipping.

Windows and viewports


A world coordinate area selected for display is called a window. An area on a display
device to which a window is mapped is called a view port. The window defines what is to be
viewed the view port defines where it is to be displayed.

The mapping of a part of a world coordinate scene to device coordinate is referred to as


viewing transformation. The two-dimensional viewing transformation is referred to as the window-to-viewport transformation or the windowing transformation.

Viewing Transformation
Once object descriptions have been transmitted to the viewing reference frame, we choose the window extents in viewing coordinates and select the viewport limits in normalized coordinates.

Object descriptions are then transferred to normalized device coordinates:


We do this thing using a transformation that maintains the same relative placement of
an object in normalized space as they had in viewing coordinates.

If a coordinate position is at the center of the viewing window:


It will be displayed at the center of the viewport. The figure shows the window to viewport mapping: a point at position (xw, yw) in the window is mapped into position (xv, yv) in the associated viewport.


In order to maintain the same relative placement of the point in the viewport as in the window, we require:

(xv - xvmin)/(xvmax - xvmin) = (xw - xwmin)/(xwmax - xwmin)
(yv - yvmin)/(yvmax - yvmin) = (yw - ywmin)/(ywmax - ywmin) .......... equation 1
Solving these expressions for the viewport position (xv, yv), we have
xv=xvmin+(xw-xwmin)sx
yv=yvmin+(yw-ywmin)sy .......... equation 2
where the scaling factors are
sx=(xvmax-xvmin)/(xwmax-xwmin)
sy=(yvmax-yvmin)/(ywmax-ywmin)

Equation (1) and Equation (2) can also be derived with a set of transformations that converts the window or world coordinate area into the viewport or screen coordinate area. This conversion is performed with the following sequence of transformations:
1. Perform a scaling transformation using a fixed point position (xwmin,ywmin) that scales
the window area to the size of the viewport.
2. Translate the scaled window area to the position of the viewport. Relative proportions
of objects are maintained if the scaling factors are the same (sx=sy).

From normalized coordinates, object descriptions are mapped to the various display devices. Any number of output devices can be open in a particular application, and window-to-viewport transformations can be performed for each open output device. This mapping, called the workstation transformation, is accomplished by selecting a window area in normalized space and a viewport area in the coordinates of the display device. As shown in the figure, the workstation transformation partitions a view so that different parts of normalized space can be displayed on various output devices.


Matrix Representation of the above three steps of Transformation:

Step1: Translate the window to the origin.


Tx=-Xwmin Ty=-Ywmin

Step2:Scaling of the window to match its size to the viewport


Sx=(Xvmax-Xvmin)/(Xwmax-Xwmin)
Sy=(Yvmax-Yvmin)/(Ywmax-Ywmin)

Step3:Again translate viewport to its correct position on screen.


Tx=Xvmin
Ty=Yvmin
Above three steps can be represented in matrix form:
VT=T * S * T1
T = Translate window to the origin
S=Scaling of the window to viewport size
T1=Translating viewport on screen.

Viewing Transformation= T * S * T1
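A minimal Python sketch of this window-to-viewport mapping is given below (the function name and the tuple layout of the window and viewport are assumptions made only for this example); it applies the scaling factors and translations derived above to a single point:

def window_to_viewport(xw, yw, window, viewport):
    # window   = (xwmin, ywmin, xwmax, ywmax)
    # viewport = (xvmin, yvmin, xvmax, yvmax)
    xwmin, ywmin, xwmax, ywmax = window
    xvmin, yvmin, xvmax, yvmax = viewport
    sx = (xvmax - xvmin) / (xwmax - xwmin)   # scaling factor in x
    sy = (yvmax - yvmin) / (ywmax - ywmin)   # scaling factor in y
    xv = xvmin + (xw - xwmin) * sx
    yv = yvmin + (yw - ywmin) * sy
    return xv, yv

# A point at the centre of the window maps to the centre of the viewport:
print(window_to_viewport(5, 5, (0, 0, 10, 10), (100, 100, 300, 200)))   # (200.0, 150.0)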


Advantage of Viewing Transformation:


We can display picture at device or display system according to our need and choice.
Note:
 World coordinate system is selected suits according to the application program. 
 Screen coordinate system is chosen according to the need of design.
 Viewing transformation is selected as a bridge between the world and screen
coordinate

Line Clipping:
It is performed by using the line clipping algorithm.

The line clipping algorithms are:


1. Cohen Sutherland Line Clipping Algorithm
2. Midpoint Subdivision Line Clipping Algorithm
3. Liang-Barsky Line Clipping Algorithm

Cohen Sutherland Line Clipping Algorithm:


In the algorithm, first of all, it is detected whether line lies inside the screen or it is
outside the screen.

All lines come under any one of the following categories:


1. Visible
2. Not Visible
3. Clipping Case

1. Visible:
If a line lies within the window, i.e., both endpoints of the line lies within the window. A
line is visible and will be displayed as it is.

2. Not Visible:
If a line lies completely outside the window, it will be invisible and rejected; such lines are not displayed. If any one of the following conditions is satisfied, then the line is considered invisible. Let A (x1, y1) and B (x2, y2) be the endpoints of the line, and let xmin, xmax and ymin, ymax be the coordinates of the window:

x1 > xmax and x2 > xmax
y1 > ymax and y2 > ymax
x1 < xmin and x2 < xmin
y1 < ymin and y2 < ymin

3. Clipping Case:


If the line is neither a completely visible case nor an invisible case, it is considered to be a clipping case. First of all, the category of a line is found based on the nine regions given below. All nine regions are assigned codes; each code is of 4 bits. If both endpoints of the line have the region code 0000, then the line is considered to be visible.

The centre area has the code 0000, i.e., region 5 is the rectangular clipping window.

Following figure show lines of various types

 Line AB is the visible case


 Line OP is an invisible case
 Line PQ is an invisible line
 Line IJ are clipping candidates


 Line MN are clipping candidate


 Line CD are clipping candidate

Advantage of Cohen Sutherland Line Clipping:


1. It calculates end-points very quickly and rejects and accepts lines quickly.
2. It can clip pictures much large than screen size.

Algorithm of Cohen Sutherland Line Clipping:


Step1:Calculate positions of both endpoints of the line

Step2:Perform OR operation on both of these end-points

Step3:If the OR operation gives 0000


Then the line is considered to be visible
Else
Perform AND operation on both endpoint
If And ≠ 0000
Then the line is invisible
Else
And=0000
Line is considered the clipped case.

Step4: If the line is a clipped case, find its intersection with the boundaries of the window. The slope is
m=(y2-y1)/(x2-x1)

a) If bit 1 is "1" line intersects with left boundary of rectangle window


y3=y1+m(x-X1)
where X = Xwmin
where Xwminis the minimum value of X co-ordinate of window
b) If bit 2 is "1" line intersect with right boundary
y3=y1+m(X-X1)
where X = Xwmax
where X more is maximum value of X co-ordinate of the window
c) If bit 3 is "1" line intersects with bottom boundary
X3=X1+(y-y1)/m
where y = ywmin
ywmin is the minimum value of Y co-ordinate of the window
d) If bit 4 is "1" line intersects with the top boundary
X3=X1+(y-y1)/m
where y = ywmax
ywmax is the maximum value of Y co-ordinate of the window
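A compact Python sketch of the Cohen-Sutherland approach described in the steps above is given below. The bit layout of the region code and the function names are assumptions made for this example; the clip window is passed in as xmin, ymin, xmax, ymax.

# Region-code bits (one possible layout): 1 = left, 2 = right, 4 = bottom, 8 = top
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def region_code(x, y, xmin, ymin, xmax, ymax):
    code = 0
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

def cohen_sutherland(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    # Returns the clipped segment (x1, y1, x2, y2), or None if the line is invisible.
    c1 = region_code(x1, y1, xmin, ymin, xmax, ymax)
    c2 = region_code(x2, y2, xmin, ymin, xmax, ymax)
    while True:
        if c1 | c2 == 0:          # OR gives 0000: completely visible
            return x1, y1, x2, y2
        if c1 & c2 != 0:          # AND is not 0000: completely invisible
            return None
        # Clipping case: pick an endpoint outside the window and move it to the boundary
        c = c1 if c1 != 0 else c2
        if c & TOP:
            x = x1 + (x2 - x1) * (ymax - y1) / (y2 - y1); y = ymax
        elif c & BOTTOM:
            x = x1 + (x2 - x1) * (ymin - y1) / (y2 - y1); y = ymin
        elif c & RIGHT:
            y = y1 + (y2 - y1) * (xmax - x1) / (x2 - x1); x = xmax
        else:  # LEFT
            y = y1 + (y2 - y1) * (xmin - x1) / (x2 - x1); x = xmin
        if c == c1:
            x1, y1, c1 = x, y, region_code(x, y, xmin, ymin, xmax, ymax)
        else:
            x2, y2, c2 = x, y, region_code(x, y, xmin, ymin, xmax, ymax)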

Mid Point Subdivision Line Clipping Algorithm:


It is used for clipping a line. The line is divided into two parts by finding its midpoint, which gives two shorter segments. Division is repeated by again finding midpoints, and this process is continued until each segment falls into the completely visible or completely invisible category. Let (xi, yi) be the midpoints obtained at each step.

In the figure, x5 lies on the point of intersection with the boundary of the window.

Advantage of midpoint subdivision Line Clipping:


It is suitable for machines in which multiplication and division operation is not possible.
Because it can be performed by introducing clipping divides in hardware.

Algorithm of midpoint subdivision Line Clipping:


Step1: Calculate the position of both endpoints of the line
Step2: Perform OR operation on both of these endpoints
Step3: If the OR operation gives 0000
then
Line is guaranteed to be visible
else
Perform AND operation on both endpoints.
If AND ≠ 0000
then the line is invisible
else
AND = 0000 and the line is the clipped case.

Step4: For the line to be clipped. Find midpoint


Xm=(x1+x2)/2
Ym=(y1+y2)/2
Xmis midpoint of X coordinate.
Ymis midpoint of Y coordinate.

Step5: Check each midpoint, whether it nearest to the boundary of a window or not.


Step6: If the line has not yet been found to be totally visible or totally rejected, repeat steps 1 to 5.

Step7: Stop algorithm.
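A recursive Python sketch of midpoint subdivision is shown below. It reuses the region_code helper from the Cohen-Sutherland sketch above, stops subdividing once a segment shrinks below a small tolerance (the tolerance value is an assumption for illustration), and collects the visible pieces of the line, which could then be merged:

def midpoint_clip(x1, y1, x2, y2, win, visible, tol=0.01):
    xmin, ymin, xmax, ymax = win
    c1 = region_code(x1, y1, xmin, ymin, xmax, ymax)
    c2 = region_code(x2, y2, xmin, ymin, xmax, ymax)
    if c1 | c2 == 0:                     # completely visible
        visible.append((x1, y1, x2, y2))
        return
    if c1 & c2 != 0:                     # completely invisible
        return
    if abs(x2 - x1) < tol and abs(y2 - y1) < tol:
        return                           # segment has shrunk to a point
    xm, ym = (x1 + x2) / 2, (y1 + y2) / 2    # midpoint of the segment
    midpoint_clip(x1, y1, xm, ym, win, visible, tol)
    midpoint_clip(xm, ym, x2, y2, win, visible, tol)

# The worked example that follows: line A(-4, 2) to B(-1, 7) against the window (-3, 1) to (2, 6)
segments = []
midpoint_clip(-4, 2, -1, 7, (-3, 1, 2, 6), segments)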

Example: The window size is (-3, 1) to (2, 6). A line AB is given having coordinates A (-4, 2) and B (-1, 7). Is this line visible? Find the visible portion of the line using midpoint subdivision.

Solution:

Step1: Fix point A (-4, 2)

Step2: Find B″ = the midpoint of B′ and B

So (-1, 5) is better than (2, 4)

Next find the midpoint of B″ (-1, 5) and B (-1, 7)


So the portion of the line from B‴ to B will be clipped from the upper side.

Now consider the left-hand side portion.

A and B‴ are now the endpoints.

Find the midpoint of A and B‴:

A (-4, 2), B‴ (-1, 6)


Sutherland-Hodgeman Polygon Clipping Algorithm:-


A polygon can be clipped by processing its boundary as a whole against each window
edge. This is achieved by processing all polygon vertices against each clip rectangle boundary in
turn. Beginning with the original set of polygon vertices, we could first clip the polygon against
the left rectangle boundary to produce a new sequence of vertices. The new set of vertices
could then be successively passed to a right boundary clipper, a top boundary clipper and a
bottom boundary clipper, as shown in figure (l). At each step a new set of polygon vertices is
generated and passed to the next window boundary clipper. This is the fundamental idea used
in the Sutherland - Hodgeman algorithm.

The output of the algorithm is a list of polygon vertices all of which are on the visible
side of a clipping plane. For this, each edge of the polygon is individually compared with the
clipping plane. This is achieved by processing two vertices of each edge of the polygon around
the clipping boundary or plane. This results in four possible relationships between the
edge and the clipping boundary or Plane. (See Fig. m).


1. If the first vertex of the edge is outside the window boundary and the second vertex of
the edge is inside then the intersection point of the polygon edge with the window
boundary and the second vertex are added to the output vertex list (See Fig. m (a)).
2. If both vertices of the edge are inside the window boundary, only the second vertex is
added to the output vertex list. (See Fig. m (b)).
3. If the first vertex of the edge is inside the window boundary and the second vertex of
the edge is outside, only the edge intersection with the window boundary is added to
the output vertex list. (See Fig. m (c)).
4. If both vertices of the edge are outside the window boundary, nothing is added to the
output list. (See Fig. m (d)).

Once all vertices are processed for one clip window boundary, the output list of vertices
is clipped against the next window boundary. Going through above four cases we can realize
that there are two key processes in this algorithm.

1. Determining the visibility of a point or vertex (lnside - Outside test) and


2. Determining the intersection of the polygon edge and the clipping plane.


One way of determining the visibility of a point or vertex is described here. Consider
that two points A and B define the window boundary and point under consideration is V, then
these three points define a plane. Two vectors which lie in that plane are AB and AV. If this
plane is considered in the xy plane, then the vector cross product AV x AB has only a z component, given by
z = (xV - xA)(yB - yA) - (yV - yA)(xB - xA)

The sign of the z component decides the position of Point V with respect to window
boundary.
If z is:
Positive - Point is on the right side of the window boundary.
Zero - Point is on the window boundary.
Negative - Point is on the left side of the window boundary.


Sutherland-Hodgeman Polygon Clipping Algorithm:-

1. Read coordinates of all vertices of the Polygon.


2. Read coordinates of the clipping window
3. Consider the left edge of the window
4. Compare the vertices of each edge of the polygon, individually with the clipping plane.
5. Save the resulting intersections and vertices in the new list of vertices according to four
possible relationships between the edge and the clipping boundary.
6. Repeat steps 4 and 5 for the remaining edges of the clipping window. Each time, the resultant list of vertices is successively passed to process the next edge of the clipping window.
7. Stop.
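A compact Python sketch of the Sutherland-Hodgeman procedure is given below. The helpers inside() and intersect() are illustrative; the clip window is described as a list of edges, each given by two points, with the window interior lying to the left of every edge (i.e. the window boundary is listed counterclockwise):

def inside(p, a, b):
    # Point p lies on the interior (left) side of the edge a -> b.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

def intersect(p1, p2, a, b):
    # Intersection of the segment p1-p2 with the infinite line through a and b.
    x1, y1 = p1; x2, y2 = p2
    x3, y3 = a;  x4, y4 = b
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def clip_polygon(vertices, clip_edges):
    output = list(vertices)
    for a, b in clip_edges:
        input_list, output = output, []
        for i in range(len(input_list)):
            current, previous = input_list[i], input_list[i - 1]
            if inside(current, a, b):
                if not inside(previous, a, b):
                    output.append(intersect(previous, current, a, b))   # case 1
                output.append(current)                                  # cases 1 and 2
            elif inside(previous, a, b):
                output.append(intersect(previous, current, a, b))       # case 3
            # case 4: both vertices outside -> nothing is added
    return output

# Clip a triangle against a unit-square window listed counterclockwise:
square = [((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (0, 1)), ((0, 1), (0, 0))]
print(clip_polygon([(-0.5, 0.5), (0.5, -0.5), (0.5, 0.5)], square))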

Example:
For a polygon and clipping window shown in figure below give the list of vertices after each
boundary clipping.

Solution:
Original polygon vertices are V1, V2, V3, V4, and V5. After clipping each boundary the
new vertices are as shown in figure above.

Hidden Surface Elimination


1. One of the most challenging problems in computer graphics is the removal of hidden
parts from images of solid objects.
2. In real life, the opaque material of these objects obstructs the light rays from hidden
parts and prevents us from seeing them.
3. In the computer generation, no such automatic elimination takes place when objects
are projected onto the screen coordinate system.
4. Instead, all parts of every object, including many parts that should be invisible are
displayed.


5. To remove these parts to create a more realistic image, we must apply a hidden line or
hidden surface algorithm to set of objects.
6. The algorithm operates on different kinds of scene models, generate various forms of
output or cater to images of different complexities.
7. All use some form of geometric sorting to distinguish visible parts of objects from those
that are hidden.
8. Just as alphabetical sorting is used to differentiate words near the beginning of the
alphabet from those near the ends.
9. Geometric sorting locates objects that lie near the observer and are therefore visible.
10. Hidden line and Hidden surface algorithms capitalize on various forms of coherence to
reduce the computing required to generate an image.
11. Different types of coherence are related to different forms of order or regularity in the
image.
12. Scan line coherence arises because the display of a scan line in a raster image is usually
very similar to the display of the preceding scan line.
13. Frame coherence in a sequence of images designed to show motion recognizes that
successive frames are very similar.
14. Object coherence results from relationships between different objects or between
separate parts of the same objects.
15. A hidden surface algorithm is generally designed to exploit one or more of these
coherence properties to increase efficiency.
16. Hidden surface algorithm bears a strong resemblance to two-dimensional scan
conversions.

Types of hidden surface detection algorithms:


1. Object space methods
2. Image space methods

Object space methods:


In this method, various parts of objects are compared. After comparison visible, invisible
or hardly visible surface is determined. These methods generally decide visible surface. In the
wireframe model, these are used to determine a visible line. So these algorithms are line based
instead of surface based. Method proceeds by determination of parts of an object whose view
is obstructed by other object and draws these parts in the same color.

Image space methods:


Here positions of various pixels are determined. It is used to locate the visible surface
instead of a visible line. Each point is detected for its visibility. If a point is visible, then the pixel
is on, otherwise off. So the object close to the viewer that is pierced by a projector through a
pixel is determined. That pixel is drawn in the appropriate color.

These methods are also called a Visible Surface Determination. The implementation of
these methods on a computer requires a lot of processing time and processing power of the
computer.

The image space method requires more computations. Each object is defined clearly.
Visibility of each object surface is also determined.


Differentiate between Object space and Image space method:

Object Space vs. Image Space

1. Object space: It is object based; it concentrates on the geometrical relations among the objects in the scene.
   Image space: It is a pixel-based method; it is concerned with the final image, i.e. what is visible within each raster pixel.

2. Object space: Here surface visibility is determined.
   Image space: Here line visibility or point visibility is determined.

3. Object space: It is performed at the precision with which each object is defined; no resolution is considered.
   Image space: It is performed using the resolution of the display device.

4. Object space: Calculations are not based on the resolution of the display, so a change of object can be easily adjusted.
   Image space: Calculations are resolution based, so a change is difficult to adjust.

5. Object space: These were developed for vector graphics systems.
   Image space: These are developed for raster devices.

6. Object space: Object-based algorithms operate on continuous object data.
   Image space: These operate on discrete image (pixel) data.

7. Object space: Vector displays used for the object method have a large address space.
   Image space: Raster systems used for image space methods have a limited address space.

8. Object space: Object precision is used for applications where speed is required.
   Image space: These are suitable for applications where accuracy is required.

9. Object space: It requires a lot of calculations if the image is to be enlarged.
   Image space: The image can be enlarged without losing accuracy.

10. Object space: If the number of objects in the scene increases, computation time also increases.
    Image space: In this method, complexity increases with the complexity of the visible parts.

Similarity of object and image space methods

In both methods, sorting is used for depth comparison: individual lines, surfaces and objects are ordered according to their distances from the view plane.


Considerations for selecting or designing hidden surface algorithms:


Following three considerations are taken:
1. Sorting
2. Coherence
3. Machine

1. Sorting:
All surfaces are sorted in two classes, i.e., visible and invisible. Pixels are colored
accordingly.

Several sorting algorithms are available i.e.


1. Bubble sort
2. Shell sort
3. Quick sort
4. Tree sort
5. Radix sort

Different sorting algorithms are applied to different hidden surface algorithms. Sorting
of objects is done using x and y, z co-ordinates. Mostly z coordinate is used for sorting. The
efficiency of sorting algorithm affects the hidden surface removal algorithm. For sorting
complex scenes or hundreds of polygons complex sorts are used, i.e., quick sort, tree sort, radix
sort.
For simple objects selection, insertion, bubble sort is used.

2. Coherence
It is used to take advantage of the constant value of the surface of the scene. It is based
on how much regularity exists in the scene. When we move from one polygon of an object to another polygon of the same object, the color and shading remain unchanged.

Types of Coherence
a) Edge coherence
b) Object coherence
c) Face coherence
d) Area coherence
e) Depth coherence


f) Scan line coherence


g) Frame coherence
h) Implied edge coherence

a) Edge coherence:
The visibility of edge changes when it crosses another edge or it also penetrates a visible
edge.

b) Object coherence:
Each object is considered separate from others. In object, coherence comparison is
done using an object instead of edge or vertex. If A object is farther from object B, then there is
no need to compare edges and faces.

c) Face coherence:
In this faces or polygons which are generally small compared with the size of the image.

d) Area coherence:
It is used to group of pixels cover by same visible face.

e) Depth coherence:
Location of various polygons has separated a basis of depth. Depth of surface at one
point is calculated, the depth of points on rest of the surface can often be determined by a
simple difference equation.

f) Scan line coherence:


The object is scanned using one scan line and then using the next scan line; the intercepts computed for one scan line can be reused for the next, since successive scan lines are usually very similar.

g) Frame coherence:
It is used for animated objects. It is used when there is little change in image from one
frame to another.

h) Implied edge coherence:


If a face penetrates in another, line of intersection can be determined from two points
of intersection.

Algorithms used for hidden line surface detection


i. Back Face Removal Algorithm
ii. Z-Buffer Algorithm

i. Back Face Removal Algorithm


It is used to plot only surfaces which will face the camera. The objects on the back side
are not visible. This method will remove 50% of polygons from the scene if the parallel
projection is used. If the perspective projection is used then more than 50% of the invisible area
will be removed. The nearer the object is to the center of projection, the more polygons from the back will be removed.
It applies to individual objects. It does not consider the interaction between various
objects. Many polygons are obscured by front faces, although they are closer to the viewer, so
for removing such faces back face removal algorithm is used.


When the projection is taken, any projector ray from the center of projection through
viewing screen to object pieces object at two points, one is visible front surfaces, and another is
not visible back surface.

This algorithm acts as a preprocessing step for other algorithms. The back face algorithm can be represented geometrically. Each polygon has several vertices, all numbered in clockwise order. The normal N1 is generated as a cross product of any two successive edge vectors; N1 represents the vector perpendicular to the face, pointing outward from the polyhedron surface:
N1 = (v2 - v1) × (v3 - v2)
If N1 . P ≥ 0 the face is visible
If N1 . P < 0 the face is invisible
where P is the projector (a vector in the direction of projection).
Advantage
1. It is a simple and straight forward method.
2. It reduces the size of the database, because there is no need to store all surfaces in the database; only the visible surfaces are stored.

Repeat for all polygons in the scene.


1. Do numbering of all polygons in clockwise direction i.e.
v1 v2 v3 ....... vz
2. Calculate normal vector i.e. N1
N1 = (v2 - v1) × (v3 - v2)
3. Consider projector P, it is projection from any vertex
Calculate dot product
Dot=N.P
4. Test and plot whether the surface is visible or not.
If Dot ≥ 0 then
surface is visible
else
Not visible
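
The visibility test above can be written as a short NumPy sketch (the function name is illustrative, and the sign convention follows the notes above, with vertices listed in clockwise order):

import numpy as np

def is_back_face(v1, v2, v3, view_dir):
    # N1 = (v2 - v1) x (v3 - v2); the face is treated as visible when N1 . P >= 0.
    v1, v2, v3 = map(np.asarray, (v1, v2, v3))
    normal = np.cross(v2 - v1, v3 - v2)
    return float(np.dot(normal, np.asarray(view_dir))) < 0   # True means the face is a back face

# Example: a face lying in the z = 0 plane, viewed along the -z direction.
print(is_back_face((0, 0, 0), (0, 1, 0), (1, 1, 0), view_dir=(0, 0, -1)))
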
ii. Z-Buffer Algorithm
It is also called a Depth Buffer Algorithm. Depth buffer algorithm is simplest image
space algorithm. For each pixel on the display screen, we keep a record of the depth of an
object within the pixel that lies closest to the observer. In addition to depth, we also record the
intensity that should be displayed to show the object. Depth buffer is an extension of the frame
buffer. Depth buffer algorithm requires 2 arrays, intensity and depth each of which is indexed
by pixel coordinates (x, y).

Algorithm
For all pixels on the screen, set depth [x, y] to 1.0 and intensity [x, y] to a background
value.
For each polygon in the scene, find all pixels (x, y) that lie within the boundaries of a
polygon when projected onto the screen. For each of these pixels:
1. Calculate the depth z of the polygon at (x, y)
2. If z < depth [x, y], this polygon is closer to the observer than others already
recorded for this pixel. In this case, set depth [x, y] to z and intensity [x, y] to a
value corresponding to polygon's shading. If instead z > depth [x, y], the polygon
already recorded at (x, y) lies closer to the observer than does this new polygon,
and no action is taken.


3. After all, polygons have been processed; the intensity array will contain the
solution.
4. The depth buffer algorithm illustrates several features common to all hidden
surface algorithms.

5. First, it requires a representation of all opaque surface in scene polygon in this


case.
6. These polygons may be faces of polyhedral recorded in the model of scene or
may simply represent thin opaque 'sheets' in the scene.
7. The second important feature of the algorithm is its use of a screen coordinate
system. Before step 1, all polygons in the scene are transformed into a screen
coordinate system using matrix multiplication.
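
The core of the depth-buffer idea can be sketched in a few lines of Python. In this sketch (an illustration only) each polygon is assumed to have already been scan converted and projected to screen coordinates, so it is supplied as a list of (x, y, z) pixel samples together with a single shade value:

import numpy as np

def z_buffer_render(polygons, width, height, background=0.0):
    depth = np.full((height, width), 1.0)            # 1.0 represents the farthest depth
    intensity = np.full((height, width), background)
    for pixels, shade in polygons:
        for x, y, z in pixels:
            if z < depth[y, x]:                      # closer than what is already stored
                depth[y, x] = z
                intensity[y, x] = shade
    return intensity
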

Limitations of Depth Buffer


1. The depth buffer Algorithm is not always practical because of the enormous size of
depth and intensity arrays.
2. Generating an image with a raster of 500 x 500 pixels requires 2,50,000 storage
locations for each array.
3. Even though the frame buffer may provide memory for intensity array, the depth array
remains large.
4. To reduce the amount of storage required, the image can be divided into many smaller
images, and the depth buffer algorithm is applied to each in turn.
5. For example, the original 500 x 500 raster can be divided into 100 rasters each 50 x 50
pixels.
6. Processing each small raster requires array of only 2500 elements, but execution time
grows because each polygon is processed many times.
7. Subdivision of the screen does not always increase execution time instead it can help
reduce the work required to generate the image. This reduction arises because of
coherence between small regions of the screen.


UNIT – V
MULTIMEDIA
Multimedia is simply multiple forms of media integrated together. An example
of multimedia is a web page with an animation. Besides multiple types of media being
integrated with one another, multimedia can also stand for interactive types of media such
as video games, CD ROMs that teach a foreign language, or an information Kiosk at a subway
terminal. Other terms that are sometimes used for multimedia include hypermedia and rich
media. [1]

There are number of data types that can be characterized as multimedia data types.
These are typically the elements or the building blocks of or generalized multimedia
environments, platforms, or integrating tools. The basic types can be described as follows:
Text, Graphics , Audio, Animation, Video, Graphic Objects

Multimedia has become a huge force in human culture, industry and education. Practically any type of information we receive can be categorized as multimedia; from television, to magazines, to web pages, to movies, multimedia is a tremendous force in both informing the public and entertaining us. Advertising is perhaps one of the biggest industries that use multimedia to send their message to the masses. Multimedia in education has been
extremely effective in teaching individuals a wide range of subjects. The human brain learns
using many senses such as sight and hearing. While a lecture can be extremely informative, a
lecture that integrates pictures or video images can help an individual learn and retain
information much more effectively.

As technology progresses, so will multimedia. Today, there are plenty of new media
technologies being used to create the complete multimedia experience. For instance, virtual
reality integrates the sense of touch with video and audio media to immerse an individual into
a virtual world. Other media technologies being developed include the sense of smell that can
be transmitted via the Internet from one individual to another. Today's video games include bio
feedback.

Text :
The form in which text can be stored can vary greatly. In addition to ASCII based files, text is typically stored in word processor files, spreadsheets, databases and more general multimedia objects. With the availability and proliferation of GUIs and text fonts, the job of storing text is becoming complex, allowing special effects (color, shades, etc.).

Graphics
There is great variance in the quality and size of storage (Image file formats) for still
images (Bitmap - gif, jpg, bmp) (Vector - svg, pdf, swf, ps). Digitalized images are sequence of
pixels that represents a region in the user's graphical display.

Audio
An increasingly popular data type (audio file format) being integrated into most applications is audio. It is quite space intensive: one minute of sound can take up to 2-3 MB of space. Several techniques are used to compress it into a suitable format.

Animation


It involves the appearance of motion caused by displaying still images one after another.
Often, animation is used for entertainment purposes. In addition to its use for entertainment,
animation is considered a form of art. It is often displayed and celebrated in film festivals
throughout the world. Also used for educational purposes.

Video
One of the most space consuming multimedia data types is digitalized video. Digitalized videos are stored as a sequence of frames. Depending upon its resolution and size, a single frame can consume up to 1 MB. Also, to have realistic video playback, the transmission, compression and decompression of digitalized video require a continuous transfer rate.

Graphics (Objects)
These consist of special data structures used to define 2D & 3D shapes through which
we can define multimedia objects. These include various formats used by image, video editing
applications.

What does Data stream mean?


A data stream is defined in IT as a set of digital signals used for different kinds of content
transmission. Data streams work in many different ways across many modern technologies,
with industry standards to support broad global networks and individual access.

Many data streams are controlled using a packet-based system. The common 3G and 4G
wireless platforms, as well as Internet transmissions, are composed of these sets of data
packets that are handled in specific ways. For example, packets typically include headers that
identify the origin or intended recipient, along with other information that can make data
stream handling more effective.

Multimedia applications
Electronic messaging:
Sending audio and video as attachments via email. Downloading audio and video.
Sending simple text data through mails. It also provides store and forward message facility.

Image Enhancement:
Highlighting details of image by increasing contrast. Making picture darker and
increasing grey scale level of pixels. Rotating image in real time. Adjusting RGB to get image
with proper colors.

Document Imaging:
Storing, retrieving and manipulating large volumes of data, i.e. documents. Complex documents can be sent in electronic form rather than on paper. Document image systems use the workflow method.

Multimedia in Education field:


Multimedia is used to instruct as a master (guide) because nowadays multimedia CDs are used instead of text books. Knowledge can be easily obtained by using a multimedia CD on a computer, because a multimedia CD includes text, pictures, sound and film, which helps students to understand more easily and clearly than text books. For the use of multimedia as an educational aid, the PC contains a high quality display. All this has promoted the development of a wide range of computer based training.


Multimedia in Entertainment:
Nowadays live Internet pay-to-play gaming with multiple players has become popular. Actually the first application of multimedia systems was in the field of entertainment
and that too in the video game industry. The integrated audio and video effects make various
types of games more entertaining. Generally most of the video games need joystick play.
Multimedia is mostly used in games. Text, audio, images and animations are mostly used in
computer games. The use of multimedia in games made possible to make innovative and
interactive games. It is also used in movies for entertainment, especially to develop special
effects in movies and animations. Multimedia application that allows users to actively
participate is called Interactive multimedia.

Multimedia in Advertising:
Multimedia technology is commonly used in advertisement. To promote the business
and products multimedia is used for preparing advertisement.

Multimedia in Business:
The business application of multimedia includes, product demos, instant messaging.
Multimedia is used in business for training employees using projectors, presenting sales,
educating customers, etc. It helps in the promotion of business and new products. One of the excellent applications is voice and live conferencing. Multimedia can make an audience come alive.

Science and Technology:


Multimedia has wide application in the field of science and technology. A multimedia system is capable of transferring audio and video clips in addition to regular text. It is even capable of sending messages and formatted multimedia documents. At the same time, multimedia helps in live interaction through audio messages, which is only possible with multimedia. It reduces time and cost and can be arranged at any moment, even in emergencies, and it is sufficient for communication and meetings. Multimedia also enables useful services based on images. Similarly, it is useful for surgeons, as they can use images created from imaging scans of the human body to practice complicated procedures such as brain tumor removal and reconstructive surgery. The plans can then be made in a better way to reduce costs and complications.

Multimedia in software:
Software engineers may use multimedia in computers for anything from entertainment to designing digital games; it can also be used as part of a learning process. Such multimedia software is created by professionals and software engineers.

Multimedia on the Web:


The web offers various online facilities like live TV, pre-recorded videos, photos and animations. Plug-ins and media players are software programs that allow us to experience multimedia on the web. Plug-ins are software programs that work with the web browser to display multimedia. When the web browser encounters a multimedia file, it hands off the data to the plug-in to play or display the file. Media players are also software programs that can play audio and video files both on and off the web.

Multimedia Authoring


Definition
Multimedia authoring is a process of assembling different types of media contents like
text, audio, image, animations and video as a single stream of information with the help of
various software tools available in the market. Multimedia authoring tools give an integrated
environment for joining together the different elements of a multimedia production. It gives
the framework for organizing and editing the components of a multimedia project. It enables
the developer to create interactive presentation by combining text, audio, video, graphics and
animation.

Features of Authoring Tools


Editing Features:
Most authoring environments and packages exhibit capabilities to create, edit and transform the different kinds of media that they support. For example, Macromedia Flash comes
bundled with its own sound editor. This eliminates the need for buying dedicated software to
edit sound data. So authoring systems include editing tools to create, edit and convert
multimedia components such as animation and video clips.

Organizing Features:
The process of organization, design and production of multimedia involve navigation
diagrams or storyboarding and flowcharting. Some of the authoring tools provide a system of
visual flowcharting or overview facility to showcase your project's structure at a macro level.
Navigation diagrams help to organize a project. Many web-authoring programs like
Dreamweaver include tools that create helpful diagrams and links among the pages of a
website.

Visual programming with icons or objects:


It is the simplest and easiest authoring process. For example, if you want to play a sound, you just click on its icon.

Programming with a scripting language:


Authoring software offers the ability to write scripts to build features that are not supported by the software itself. With scripts you can perform computational tasks: sense and respond to user input, create characters, animate, launch other applications and control external multimedia devices.

Document Development tools:


Some authoring tools offer direct importing of pre-formatted text, indexing facilities, complex text search mechanisms and hypertext linking tools.

Interactivity Features:
Interactivity empowers the end users to control the content and flow of information of
the project. Authoring tools may provide one or more levels of interactivity.


Simple branching:
Offers the ability to go to another section of the multimedia production

Conditional branching:
Supports a go-to based on the result of an IF-THEN decision or events

Playback Features:
When you are developing a multimedia project, you will continually be assembling elements and testing to see how the assembly looks and performs. Therefore the authoring system should have a playback facility.

Supporting CD-ROM or Laser Disc Sources:


This software allows over all control of CD-drives and Laser disc to integrate audio, video
and computer files. CD-ROM drives, video and laserdisc sources are directly controlled by
authoring programs.

Supporting Video for Windows:


Videos are the right media for your project which are stored on the hard disk. Authoring
software has the ability to support more multimedia elements like video for windows.

Hypertext:
Hypertext capabilities can be used to link graphics, some animation and other text. The
help system of window is an example of hypertext. Such systems are very useful when a large
amount of textual information is to be represented or referenced.

Cross-Platform Capability:
Some authoring programs are available on several platforms and provide tools for
transforming and converting files and programs from one to the other.

Run-time Player for Distribution:


Run time software is often included in authoring software to explain the distribution of
your final product by packaging playback software with content. Some advanced authoring
programs provide special packaging and run-time distribution for use with devices such as CD-
ROM.

Internet Playability:
Because the Web has become a significant delivery medium for multimedia, authoring systems typically provide a means to convert their output so that it can be delivered within the context of HTML or DHTML.

MIDI
Musical Instrument Digital Interface (MIDI) is a technical protocol that governs the
interaction of digital instruments with computers and with each other. Instead of a direct
musical sound representation, MIDI provides the information on how a musical sound is made
with the help of MIDI commands. The protocol not only provides compactness but also
provides ease in manipulation and modification of notes, along with a flexible choice of
instruments.
MIDI contains information about the pitch, velocity, notation and control signals for different musical parameters such as vibration, volume, etc. It also contains information for an


instrument to start and stop a specific note. This information is used by the wavetable of the
receiving musical device to produce the sound waves. As a result, MIDI is more concise than
similar technologies and is asynchronous. The byte is the basic unit of communication for the
protocol, which uses 8-bit serial transmission, with one start and one stop bit. Each MIDI
command has its own unique sequence of bytes.

One of the most common applications of MIDI is in sequencers, which allow a computer
to store, modify, record and play MIDI data. Sequencers use the MIDI format for files because
of their smaller size compared to those produced by other popular data formats. MIDI files,
however, can only be used with MIDI-compatible software or hardware.

Image compression
Image compression is the application of data compression on digital images. In effect,
the objective is to reduce redundancy of the image data in order to be able to store
or transmit data in an efficient form.

Image compression can be lossy or lossless. Lossless compression is sometimes


preferred for artificial images such as technical drawings, icons or comics. This is because lossy
compression methods, especially when used at low bit rates, introduce compression artifacts.
Lossless compression methods may also be preferred for high value content, such as medical
imagery or image scans made for archival purposes. Lossy methods are especially suitable for
natural images such as photos in applications where minor (sometimes imperceptible) loss of
fidelity is acceptable to achieve a substantial reduction in bit rate.

Methods for lossless image compression are:


 Run-length encoding
 Entropy coding
 Adaptive dictionary algorithms such as LZW
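As a tiny illustration of the first of these methods, the Python sketch below run-length encodes a row of pixel values; representing the output as (value, run length) pairs is just one possible convention:

def run_length_encode(pixels):
    # Encode a sequence of pixel values as (value, run_length) pairs.
    encoded = []
    for p in pixels:
        if encoded and encoded[-1][0] == p:
            encoded[-1][1] += 1
        else:
            encoded.append([p, 1])
    return [tuple(run) for run in encoded]

# A row with long runs of identical values compresses well:
print(run_length_encode([255, 255, 255, 0, 0, 255]))   # [(255, 3), (0, 2), (255, 1)]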

Methods for lossy compression:


Reducing the color space to the most common colors in the image. The selected colors
are specified in the color palette in the header of the compressed image. Each pixel just
references the index of a color in the color palette. This method can be combined
with dithering to blur the color borders.

Chroma subsampling:
This takes advantage of the fact that the eye perceives brightness more sharply than
color, by dropping half or more of the chrominance information in the image.

Transform coding:
This is the most commonly used method. A Fourier-related transform such as DCT or
the wavelet transform are applied, followed by quantization and entropy coding.

Fractal compression

Properties of image compression schemes:

The best image quality at a given bit-rate (or compression rate) is the main goal of image compression. However, there are other important properties of image compression schemes:
Scalability:


Scalability generally refers to a quality reduction achieved by manipulation of the bit


stream or file (without decompression and re-compression). Other names for scalability
are progressive coding or embedded bitstreams. Despite its contrary nature, scalability can also
be found in lossless codecs, usually in form of coarse-to-fine pixel scans. Scalability is especially
useful for previewing images while downloading them (e.g. in a web browser) or for providing
variable quality access to e.g. databases. There are several types of scalability:

Quality progressive or layer progressive:


The bitstream successively refines the reconstructed image.

Resolution progressive:
First encode a lower image resolution; then encode the difference to higher resolutions.

Component progressive:
First encode grey; then color.

Region of interest coding:


Certain parts of the image are encoded with higher quality than others. This can be
combined with scalability (encode these parts first, others later).

Meta information:
Compressed data can contain information about the image which can be used to
categorize, search or browse images. Such information can include color and texture statistics,
small preview images and author/copyright information.

Video Compression
Video compression is the process of encoding a video file in such a way that it consumes
less space than the original file and is easier to transmit over a network or the Internet. It is a
compression technique that reduces the size of a video file by eliminating redundant and
non-essential data from the original video.

Video compression is performed through a video codec that works with one or more
compression algorithms. Usually compression is done by removing repetitive images, sounds
and/or scenes from a video: a video may, for example, show the same background or play the
same sound several times, and this repeated information does not need to be stored in full each
time. Removing such redundancy reduces the video file size.
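
As a deliberately simplified sketch of this idea (real codecs use motion compensation and are far more sophisticated), an encoder could store one full frame and afterwards only the differences to the following frames:

    import numpy as np

    # Two hypothetical consecutive frames sharing the same static background.
    frame1 = np.zeros((4, 4), dtype=np.int16)
    frame2 = frame1.copy()
    frame2[1, 2] = 200                       # only a single pixel changed

    # Store the first frame in full and only the difference for the second one.
    difference = frame2 - frame1
    print(np.count_nonzero(difference), "changed values out of", difference.size)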
Once a video is compressed, its original format is changed into a different format
(depending on the codec used). The video player must support that video format or be
integrated with the compressing codec to play the video file.

Graphics file formats


JPG format
The format known as JPG or JPEG is actually a standard published in 1992 (ISO/IEC 10918-
1), which describes different methods for image compression. Since the standard itself does not
contain any provisions on how the image should be saved, an additional file format is necessary,
with the JPEG File Interchange Format (JFIF) established as the widely supported cross-application standard.
Alternatives that are rarely used are the Still Picture Interchange File Format (SPIFF) and the
JPEG Network Graphics (JNG) graphic file format.


Compressing to JPG changes the usual structure of a pixel graphic: the image is first
converted from the RGB color space to the YCbCr color model, and the pixels are grouped into
blocks of 8 x 8 that are transformed and quantized. This acts like a low-pass filter, in which high
frequencies are filtered out in order to reduce the file size. Depending on the chosen
compression level, this process is associated with a certain loss of quality, since not all image
information is retained.
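
For example, the compression level can be chosen when saving a JPG file with the Pillow library (file names and the quality value are placeholders):

    from PIL import Image

    img = Image.open("photo.png").convert("RGB")     # JPG does not support an alpha channel
    # quality ranges roughly from 1 (strong compression, visible artifacts) to 95 (large file).
    img.save("photo.jpg", format="JPEG", quality=75)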

Recommended application scenario:


Storage and publication of photos

PNG format
PNG (Portable Network Graphics), a universally recognized graphic file format
developed by the World Wide Web Consortium (W3C), appeared for the first time in 1996. As a
patent-free and modern alternative to GIF (Graphic Interchange Format), it is characterized by
the possibility of lossless compression as well as a maximum color depth of up to 24 bits per
pixel (16.7 million colors) – or as many as 32 bits with alpha channel. In contrast to GIF,
however, animations can’t be generated with PNG.

The PNG format supports both transparency and semi-transparency (thanks to the
integrated alpha channel), which makes it suitable for all types of images, as well as interlacing,
allowing for an accelerated build-up of the image file during the loading process. The color and
brightness correction mechanisms ensure that PNG image files look the same on different
systems. In order to compress a graphic in PNG format, you can use tools such as pngcrush.
Due to the lossless compression process, the files are still comparatively large, which is why
the format is less suitable for displaying photographs than JPG, for example. PNG also offers the
possibility of choosing a lower color depth (from 1 to 32 bits per pixel).
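
A small Pillow sketch of lossless PNG compression (file names are placeholders); the result can afterwards be run through a tool such as pngcrush for further savings:

    from PIL import Image

    img = Image.open("logo_raw.png")
    # PNG compression is lossless; optimize=True searches for better filter/deflate
    # settings and compress_level=9 trades speed for the smallest file.
    img.save("logo.png", format="PNG", optimize=True, compress_level=9)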

Recommended application scenario: storing and publishing small images and graphics
(logos, icons, bar charts, etc.), graphics with transparency, loss-free photos

GIF format
The online portal, CompuServe, introduced the Graphics Interchange Format, GIF for
short, in 1987 as a color alternative to X BitMap (XBM)’s black and white format. In contrast to
other solutions such as PCX or MacPaint, the GIF files needed significantly less space thanks to
the efficient LZW compression (data compression with the Lempel-Ziv-Welch algorithm), which
made the format very popular when the internet first took shape. As a format for photos and
graphics, JPG and PNG are now clearly ahead but since version GIF89a (1989), the format has
been able to combine several individual images in a single file, which is why it is still used to
create small animations.

All color information is stored in GIF in a table, the color palette. The table can contain
up to 256 colors (8 bit), which is why the image format is not suitable for displaying
photographs. The information can also be defined as transparent – however, unlike the more
modern PNG, partial transparency is not possible, meaning that a pixel can be either visible or
invisible.
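
Such a multi-frame GIF animation can be assembled, for instance, with the Pillow library (frame contents and timings here are placeholders):

    from PIL import Image

    # Three hypothetical grayscale frames; they are reduced to a palette when saved as GIF.
    frames = [Image.new("L", (64, 64), color=i * 80) for i in range(3)]
    frames[0].save(
        "animation.gif",
        save_all=True,               # write all frames, not only the first one
        append_images=frames[1:],
        duration=200,                # display time per frame in milliseconds
        loop=0,                      # 0 = repeat forever
    )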


Recommended application scenario:

Creating animations; clip art, logos, essentially things where a low color depth isn’t
problematic.

TIFF format
TIFF (Tagged Image File Format) is a graphic file format that is especially used for
transmitting print data and high-resolution images. It was developed as early as 1986 by
Microsoft in cooperation with Aldus (which now belongs to Adobe) and is specially optimized for
embedding color separation and color profiles (ICC profiles) of scanned images. Furthermore,
TIFF supports the CMYK color model and allows a color depth of up to 16 bits for each color
channel (the total color depth is 48 bits). Since 1992, the format has been able to be
compressed loss-free using LZW compressions, which is also used in GIF format.

Thanks to these features, TIFF has become the standard for images where quality plays
a more important role than file size. This is how publishers and print media work with the
image format. Archiving monochrome graphics, e.g. technical drawings, is one of its most
common applications. GeoTIFF was established as an extension with additional tags for saving
and presenting raster-based geo-information (maps, aerial images, etc.).
Recommended application scenario: transferring high-quality images with high resolution for
printing

BMP format
BMP (Windows Bitmap) was developed for Microsoft and IBM operating systems and
was first released in 1990 with Windows 3.0 as a memory format for pixel graphics with a color
depth of up to 24 bits per pixel. The uncompressed image format assigns exactly one color
value to each pixel, which is why BMP files are very large by default. For this reason, the format
is not suitable for use on the web.

Recommended application scenario:


Saving photos/graphics for offline use
