Complete Computer Graphics Notes - BCA - (V) Sem
II SCAN CONVERSION
IV CLIPPING
V MULTIMEDIA
UNIT - I
APPLICATIONS OF COMPUTER GRAPHICS
Presentation Graphics:
Presentation graphics is used for preparing reports and summarizing financial, statistical, mathematical, scientific, and economic data in research and managerial reports. Bar graphs, pie charts, and time charts can be created using the tools present in computer graphics.
Entertainment:
Computer graphics finds a major part of its utility in the movie and game industries. It is used for creating motion pictures, music videos, television shows, and cartoon animation films. In the game industry, where focus and interactivity are the key players, computer graphics helps provide such features in an efficient way.
Education:
Computer-generated models are extremely useful for teaching a large number of concepts and fundamentals in an easy-to-understand manner. Using computer graphics, many educational models can be created that generate more interest among students in the subject.
Training:
Specialized training systems such as simulators can be used to train candidates in a way that can be grasped in a short span of time with better understanding. Creating training modules using computer graphics is simple and very useful.
Graphic operations:
A general-purpose graphics package provides the user with a variety of functions for creating and manipulating pictures.
The basic building blocks for pictures are referred to as output primitives. They include characters, strings, and geometric entities such as points, straight lines, curved lines, filled areas, and shapes defined with arrays of color points.
Input functions are used to control and process the various input devices such as a mouse, tablet, etc.
Control operations are used for controlling and housekeeping tasks such as clearing the display screen, etc.
All such built-in functions that we can use for our purposes are known as graphics functions.
Software Standard
The primary goal of standardizing graphics software is portability, so that it can be used on any hardware system and rewriting of software for different systems is avoided.
Some of these standards are discussed below.
Output primitives
Points and lines
Point plotting is done by converting a single coordinate position furnished by an application program into appropriate operations for the output device in use.
Line drawing is done by calculating intermediate positions along the line path between two specified endpoint positions.
The output device is then directed to fill in those positions between the endpoints with some color.
For some devices, such as a pen plotter or random-scan display, a straight line can be drawn smoothly from one endpoint to the other.
Digital devices display a straight line segment by plotting discrete points between the two endpoints.
Discrete coordinate positions along the line path are calculated from the equation of the line.
For a raster video display, the line intensity is loaded into the frame buffer at the corresponding pixel positions.
Reading from the frame buffer, the video controller then plots the screen pixels.
Screen locations are referenced with integer values, so plotted positions may only approximate actual line positions between two specified endpoints.
For example, a line position of (12.36, 23.87) would be converted to pixel position (12, 24). This rounding of coordinate values to integers causes lines to be displayed with a stair-step appearance ("the jaggies"), as represented in fig 2.1.
Fig. 2.1: − Stair step effect produced when line is generated as a series of pixel positions.
The stair-step shape is noticeable on low-resolution systems, and we can improve its appearance somewhat by displaying lines on high-resolution systems.
More effective techniques for smoothing raster lines are based on adjusting pixel intensities along the line paths.
For the raster graphics device-level algorithms discussed here, object positions are specified directly in integer device coordinates.
Pixel positions are referenced according to scan-line number and column number, as illustrated in the following figure.
Fig.: Pixel positions referenced by scan-line number and column number.
To load a specified color into the frame buffer at a particular position, we will assume we have available a low-level procedure of the form setpixel(x, y).
Similarly, to retrieve the current frame-buffer intensity, we assume we have the procedure getpixel(x, y).
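The two assumed low-level procedures can be sketched over a simple 2D array; this is a minimal illustration (the array-based frame buffer and the WIDTH/HEIGHT names are assumptions, not from the notes):

```python
# A minimal sketch of a frame buffer with setpixel/getpixel, assuming a
# monochrome raster of WIDTH x HEIGHT pixels stored as a 2D list of 0/1.
WIDTH, HEIGHT = 8, 8
frame_buffer = [[0] * WIDTH for _ in range(HEIGHT)]

def setpixel(x, y, value=1):
    # Load the given intensity into the frame buffer at column x, scan line y.
    frame_buffer[y][x] = value

def getpixel(x, y):
    # Retrieve the current intensity stored at column x, scan line y.
    return frame_buffer[y][x]

setpixel(3, 2)   # turn on the pixel at column 3, scan line 2
```

The line and circle algorithms later in these notes plot their output by calling a setpixel of exactly this form.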
Graphics Packages:
There are mainly two types of graphicspackages:
1. General programmingpackage
2. Special−purpose applicationpackage
Input Devices:
Input devices are the hardware used to transfer input to the computer. The data can be in the form of text, graphics, or sound. Output devices display data from the memory of the computer; output can be text, numeric data, lines, polygons, and other objects.
Keyboard:
The most commonly used input device is the keyboard. Data is entered by pressing a set of labelled keys. A standard keyboard has 101 keys arranged in the QWERTY layout. The keyboard has alphabetic as well as numeric keys, and some special keys are also available.
1. Numeric Keys: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
2. Alphabetic keys: a to z (lower case), A to Z (upper case)
3. Special Control keys: Ctrl, Shift, Alt
4. Special Symbol Keys: ; , " ? @ ~ :
5. Cursor Control Keys: ↑ → ← ↓
6. Function Keys: F1 F2 F3 ...F9.
7. Numeric Keyboard: It is on the right-hand side of the keyboard and usedfor fast entry of
numeric data.
Function of Keyboard:
1. Alphanumeric Keyboards are used in CAD. (Computer Aided Drafting)
2. Keyboards are available with special features like screen coordinate entry, menu selection, graphics functions, etc.
3. Special purpose keyboards are available having buttons, dials, and switches. Dials are
used to enter scalar values. Dials also enter real numbers. Buttons and switches are
used to enter predefined function values.
Mouse:
A Mouse is a pointing device and used to position the pointer on the screen. It is a small
palm size box. There are two or three depression switches on the top. The movement of the
mouse along the x-axis helps in the horizontal movement of the cursor and the movement
along the y-axis helps in the vertical movement of the cursor on the screen. The mouse cannot
be used to enter text. Therefore, they are used in conjunction with a keyboard.
Trackball
It is a pointing device, similar to a mouse, mainly used in notebook or laptop computers instead of a mouse. It is a ball which is half inserted in a socket, and by moving fingers over the ball, the pointer can be moved.
Space ball:
It is similar to a trackball, but it can move in six directions, whereas a trackball can move in only two. The movement is recorded by strain gauges, which respond to applied pressure; the ball can be pushed and pulled in various directions. The ball has a diameter of around 7.5 cm and is mounted in the base using rollers. One-third of the ball is inside the box; the rest is outside.
Applications:
1. It is used for three-dimensional positioning of the object.
2. It is used to select various functions in the field of virtual reality.
3. It is applicable in CAD applications.
4. Animation is also done using spaceball.
5. It is used in the area of simulation and modeling.
Joystick:
A joystick is also a pointing device which is used to change the cursor position on a monitor screen. It is a stick having a spherical ball at both its lower and upper ends, as shown in the figure.
The lower spherical ball moves in a socket. The joystick can be moved in all four directions. The function of a joystick is similar to that of a mouse. It is mainly used in Computer-Aided Design (CAD) and playing computer games.
Light Pen:
A light pen (similar to a pen) is a pointing device which is used to select a displayed menu item or draw pictures on the monitor screen. It consists of a photocell and an optical system placed in a small tube. When its tip is moved over the monitor screen and the pen button is pressed, its photocell sensing element detects the screen location and sends the corresponding signal to the CPU.
Uses:
1. Light pens can be used to input coordinate positions by providing the necessary arrangements.
2. With a suitable background color or intensity, a light pen can be used as a locator.
3. It is used as a standard pick device with many graphics systems.
4. It can be used as a stroke input device.
5. It can be used as a valuator.
Digitizers:
The digitizer is an operator input device which contains a large, smooth board (similar in appearance to a mechanical drawing board) and an electronic tracking device, which can be moved over the surface to follow existing lines. The electronic tracking device contains a switch for the user to record the desired x and y coordinate positions. The coordinates can be entered into computer memory or stored on an off-line storage medium such as magnetic tape.
Touch Panels:
A touch panel is a type of display screen that has a touch-sensitive transparent panel covering the screen. A touch screen registers input when a finger or other object comes in contact with the screen.
When the wave signals are interrupted by contact with the screen, that location is recorded.
Touch screens have long been used in military applications.
Voice Systems:
Voice recognition systems can be used to issue commands or enter data. These systems operate by matching an input against a predefined dictionary of words and phrases.
Image Scanner
It is an input device. Data or text written on paper is fed to the scanner, which converts the written information into an electronic format stored in the computer. The input documents can contain text, handwritten material, pictures, etc.
By storing the document in a computer, it becomes safe for a longer period of time; the document is stored permanently for the future. We can change the document when we need to, and it can be printed whenever required.
Scanning can be of black-and-white or colored pictures. On a stored picture, 2D or 3D rotations, scaling, and other operations can be applied.
Impact Printers
In impact printers, physical contact is established between the print head, ribbon, ink cartridge, and paper.
The print head strikes an ink-filled ribbon so that the letter prints on the paper.
Impact printers work like a typewriter.
These printers have three types:
Daisy Wheel Printers
Drum Printers
Dot Matrix Printer
Daisy Wheel Printers:
Advantages:
1. More reliable
2. Better printing Quality
Disadvantages:
1. Slower than Dot Matrix
2. More Expensive
3. Noisy in operation
Drum Printers:
It has a shape like a drum, so it is called a "drum printer." This type of printer contains many characters that are printed on the drum. The surface of the drum is divided into a number of tracks, one for each print position across the paper; a drum for 132-column paper has 132 tracks. It can print approximately 150-2500 lines per minute.
Drum Printer
Advantages:
1. High Speed
2. Low Cost
Disadvantages:
1. Poor Printing Quality
2. Noisy in Operation
Dot Matrix Printers:
Advantages:
1. Low Printing Cost
2. Large print size
3. Long Life
Disadvantages:
1. Slow speed
2. Low Resolution
Non-impact Printers
In non-impact printers, there is no physical contact between the print head and the paper. A non-impact printer prints a complete page at a time; such printers spray ink on the paper through nozzles to form the letters and patterns. Printers that print the letters without a ribbon striking the paper are called non-impact printers. They are also known as "page printers."
1. Inkjet Printer:
It is also called “Deskjet Printer.” It is a Non-impact printer in which the letters and
graphics are printed by spraying a drop of ink on the paper with nozzle head.
A color inkjet printer has four ink nozzles: cyan, magenta, yellow, and black, so it is also called a CMYK printer. We can produce any color by using these four colors. The prints and graphics of this printer are very clear. These printers are generally used for home purposes.
Inkjet Printer
Advantages:
1. High-Quality Printout
2. Low noise
3. High Resolution
Disadvantages:
1. Less Durability of the print head
2. Not suitable for high volume printing
3. Cartridges replacement is expensive
2. Laser Printer:
It is also called a "page printer" because a laser printer processes and stores the whole page before printing it. The laser printer is used to produce high-quality images and text. Mostly it is used with personal computers. Laser printers are mostly preferred to print a large amount of content on paper.
Laser printer
Advantages:
1. High Resolution
2. High printing Speed
3. Low printing Cost
Disadvantages:
1. Costlier than an inkjet printer
2. Larger and heavier than an inkjet printer
Plotters:
A plotter is a special type of output device. It is used to print large graphs and large designs on large paper, for example construction maps, engineering drawings, architectural plans, and business charts. It was invented by Remington Rand in 1953. It is similar to a printer, but it is used to print vector graphics.
Types of Plotter:
1. Flatbed Plotter:
In a flatbed plotter, the paper is kept in a stationary position on a table or a tray. A flatbed plotter has more than one pen and a holder. The pens move over the paper up-down and left-right, driven by motors. Every pen has a different color of ink, which is used to draw multicolor designs. We can quickly draw the following designs by using a flatbed plotter.
For Example: Cars, Ships, Airplanes, Dress design, road and highway blueprints,etc.
A Flatbed Plotter
Advantages of Flatbed Plotter
1. Larger size paper can be used
2. Drawing Quality is similar to an expert
Drum Plotter:
It is also called “Roller plotter.” There is a drum in this plotter.We can apply the paper on
the drum. When the plotter works, these drums moves back and forth, and the image is drawn.Drum
plotter has more than one pen and penholders. The pens easily moves right to left and left to right.The
movement of pens and drums are controlled by graph plotting program.It is used in industry to produce
large drawings (up to A0).
A Drum Plotter
Construction of a CRT:
1. The primary components are the heated metal cathode and a control grid.
2. The heat is supplied to the cathode (by passing current through the filament).
This way the electrons get heated up and start getting ejected out of the cathode
filament.
3. This stream of negatively charged electrons is accelerated towards the phosphor screen
by supplying a high positive voltage.
4. This acceleration is generally produced by means of an accelerating anode.
5. The next component is the focusing system, which is used to force the electron beam to converge to a small spot on the screen.
6. If there were no focusing system, the electrons would be scattered because of their own repulsion, and hence we would not get a sharp image of the object.
7. This focusing can be either by means of electrostatic fields or magnetic fields.
Types of Deflection:
1. Electrostatic Deflection:
The electron beam (cathode rays) passes through a highly positively charged
metal cylinder that forms an electrostatic lens. This electrostatic lens focuses the cathode rays
to the center of the screen in the same way like an optical lens focuses the beam of light. Two
pairs of parallel plates are mounted inside the CRT tube.
2. Magnetic Deflection:
Here, two pairs of coils are used. One pair is mounted on the top and bottom of the CRT
tube, and the other pair on the two opposite sides. The magnetic field produced by both these
pairs is such that a force is generated on the electron beam in a direction which is
perpendicular to both the direction of magnetic field, and to the direction of flow of the beam.
One pair is mounted horizontally and the other vertically.
Different kinds of phosphor are used in a CRT. The difference is based upon how long the phosphor continues to emit light after the CRT beam has been removed.
This property is referred to as persistence.
The number of points displayed on a CRT is referred to as the resolution (e.g., 1024 × 768).
Raster-Scan
The electron beam is swept across the screen one row at a time from top to bottom. As
it moves across each row, the beam intensity is turned on and off to create a pattern of
illuminated spots. This scanning process is called refreshing.
Each complete scan of the screen is normally called a frame. The refresh rate, called the frame rate, is normally 60 to 80 frames per second, also described as 60 Hz to 80 Hz.
Picture definition is stored in a memory area called the frame buffer.
This frame buffer stores the intensity values for all the screen points. Each screen point
is called a pixel (picture element or pel).
On black-and-white systems, the frame buffer storing the values of the pixels is called a bitmap. Each entry in the bitmap is a 1-bit value which determines whether the intensity of the pixel is on (1) or off (0).
On color systems, the frame buffer storing the values of the pixels is called a pixmap (though nowadays many graphics libraries name it a bitmap too). Each entry in the pixmap occupies a number of bits to represent the color of the pixel. For a true-color display, the number of bits for each entry is 24 (8 bits per red/green/blue channel; each channel has 2^8 = 256 levels of intensity, i.e., 256 voltage settings for each of the red/green/blue electron guns).
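The storage cost implied by these numbers can be worked out directly; the 1024 × 768 resolution below is just the example figure quoted earlier, not a requirement:

```python
# Worked example: frame-buffer size for a 1024 x 768 true-color display,
# at 24 bits = 3 bytes per pixel, as described above.
width, height, bytes_per_pixel = 1024, 768, 3
size_bytes = width * height * bytes_per_pixel        # total buffer size
size_mb = size_bytes / (1024 * 1024)                 # in megabytes
levels_per_channel = 2 ** 8                          # 8 bits per channel
print(size_bytes, size_mb, levels_per_channel)
```

So a single true-color frame at this resolution occupies 2,359,296 bytes, i.e., 2.25 MB.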
Random-scan systems generally have higher resolution than raster systems and can produce smooth line drawings; however, they cannot display realistic shaded scenes.
1. Beam-Penetration Method:
In the beam-penetration method, the CRT screen is coated with two layers of phosphor, red and green; the displayed color depends on how far the electron beam penetrates the phosphor layers. This method produces four colors only: red, green, orange, and yellow. A beam of slow electrons excites only the outer red layer; hence the screen shows red. A beam of high-speed electrons excites the inner green layer, and the screen shows green.
Advantages:
1. Inexpensive
Disadvantages:
1. Only four colors are possible
2. Quality of pictures is not as good as with another method.
2. Shadow-Mask Method:
Shadow Mask Method is commonly used in Raster-Scan System because they produce a
much wider range of colors than the beam-penetration method.
It is used in the majority of color TV sets and monitors.
Construction:
A shadow mask CRT has 3 phosphor color dots at each pixel position.
One phosphor dot emits: red light
Another emits: green light
Third emits: blue light
This type of CRT has three electron guns, one for each color dot, and a shadow-mask grid just behind the phosphor-coated screen. The shadow-mask grid is pierced with small round holes in a triangular pattern.
Working:
The deflection system of the CRT operates on all 3 electron beams simultaneously; the 3
electron beams are deflected and focused as a group onto the shadow mask, which
contains a sequence of holes aligned with the phosphor- dot patterns.
When the three beams pass through a hole in the shadow mask, they activate a dot triangle, which appears as a small color spot on the screen.
The phosphor dots in the triangles are organized so that each electron beam can
activate only its corresponding color dot when it passes through the shadow mask.
Advantage:
1. Realistic image
2. Million different colors to be generated
3. Shadow scenes are possible
Disadvantage:
1. Relatively expensive compared with the monochrome CRT.
2. Relatively poor resolution
3. Convergence Problem
Direct-View Storage Tube (DVST):
Function of guns:
Two guns are used in DVST:
1. Primary guns: It is used to store the picture pattern.
2. Flood gun or Secondary gun: It is used to maintain picture display.
Advantage:
1. No refreshing is needed.
2. High Resolution
3. Cost is very less
Disadvantage:
1. It is not possible to erase the selected part of a picture.
2. It is not suitable for dynamic graphics applications.
3. If a part of the picture is to be modified, the whole picture must be redrawn, which consumes time.
Flat-Panel Displays:
Flat-panel displays are video devices with reduced volume and weight compared to a CRT. Examples: small T.V. monitors, calculators, pocket video games, laptop computers, advertisement boards in elevators.
Emissive Display:
The emissive displays are devices that convert electrical energy into light.Examples are
Plasma Panel, thin film electroluminescent display and LED (Light Emitting Diodes).
Non-Emissive Display:
The non-emissive displays use optical effects to convert sunlight or light from some other source into graphics patterns. An example is the LCD (Liquid Crystal Display).
Plasma Panel:
Each cell of a plasma panel has two states, so a cell is said to be stable. A displayable point in the plasma panel is made by the crossing of a horizontal and a vertical grid conductor. The resolution of a plasma panel can be up to 512 × 512 pixels.
Advantage:
1. High Resolution
2. Large screen size is also possible.
3. Less Volume
4. Less weight
5. Flicker Free Display
Disadvantage:
1. Poor Resolution
2. The wiring required between the anode and the cathode is complex.
3. Its addressing is also complex.
Liquid Crystal Display (LCD):
Advantage:
1. Low power consumption.
2. Small Size
3. Low Cost
Disadvantage:
1. LCDs are temperature-dependent (0-70°C)
2. LCDs do not emit light; as a result, the image has very little contrast.
3. LCDs have no color capability.
4. The resolution is not as good as that of a CRT.
Differences between Raster Scan and Random Scan Systems:
Electron Beam:
- Raster scan: the electron beam is swept across the screen, one row at a time, from top to bottom.
- Random scan: the electron beam is directed only to the parts of the screen where a picture is to be drawn.
Resolution:
- Raster scan: resolution is poor because the raster system produces zig-zag lines that are plotted as discrete point sets.
- Random scan: resolution is good because this system produces smooth line drawings, as the CRT beam directly follows the line path.
Picture Definition:
- Raster scan: picture definition is stored as a set of intensity values for all screen points, called pixels, in a refresh buffer area.
- Random scan: picture definition is stored as a set of line-drawing instructions in a display file.
UNIT - II
SCAN CONVERSION
Definition
It is the process of representing graphics objects as a collection of pixels. Graphics objects are continuous; the pixels used are discrete. Each pixel can be in either an on or off state.
The circuitry of the video display device of the computer is capable of converting binary values (0, 1) into pixel-on and pixel-off information: 0 is represented by pixel off, and 1 by pixel on. Using this ability, the graphics computer represents pictures as sets of discrete dots.
Any model of graphics can be reproduced with a dense matrix of dots or points. Most human beings think of graphics objects as points, lines, circles, and ellipses. For generating these graphical objects, many algorithms have been developed.
m = ∆y/∆x = (y2 − y1)/(x2 − x1)
yi+1 − yi = ∆y ..................... equation 3
xi+1 − xi = ∆x ..................... equation 4
From equation 3: yi+1 = yi + ∆y
∆y = m∆x
yi+1 = yi + m∆x
From equation 4: ∆x = ∆y/m
xi+1 = xi + ∆x
xi+1 = xi + ∆y/m
Case 1: When |m| < 1 (assume that x1 < x2)
x = x1, y = y1, set ∆x = 1
yi+1 = yi + m, x = x + 1
Until x = x2
Case 2: When |m| > 1 (assume that y1 < y2)
x = x1, y = y1, set ∆y = 1
xi+1 = xi + 1/m, y = y + 1
Until y = y2
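The two cases above can be combined into one sketch by stepping along the major axis; this is a minimal Python illustration (the function name and the int(x + 0.5) rounding, which avoids Python's round-half-to-even behaviour, are my choices, not from the notes; valid for non-negative coordinates):

```python
# A minimal DDA sketch following the two cases above: unit steps along the
# major axis, the other coordinate advancing by the slope (or its inverse).
def dda_line(x1, y1, x2, y2):
    dx, dy = x2 - x1, y2 - y1
    steps = max(abs(dx), abs(dy))          # number of unit steps
    if steps == 0:
        return [(x1, y1)]                  # degenerate: both endpoints equal
    x_inc, y_inc = dx / steps, dy / steps  # one of these is +/-1
    x, y = float(x1), float(y1)
    points = []
    for _ in range(steps + 1):
        # round to the nearest pixel (non-negative coordinates assumed)
        points.append((int(x + 0.5), int(y + 0.5)))
        x += x_inc
        y += y_inc
    return points
```

For example, dda_line(0, 0, 4, 2) steps x by 1 and y by 0.5, producing the pixels (0, 0), (1, 1), (2, 1), (3, 2), (4, 2).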
Advantage:
1. It is faster than the direct use of the line equation.
2. It does not use multiplication.
3. It allows us to detect changes in the values of x and y, so plotting the same point twice is not possible.
4. It gives an overflow indication when a point is repositioned.
5. It is an easy method, because each step involves just two additions.
Disadvantage:
1. It involves floating-point additions with rounding off; accumulation of round-off error causes the plotted positions to drift from the true line.
2. Rounding-off and floating-point operations consume a lot of time.
3. It is more suitable for generating lines in software, but less suited for hardware implementation.
DDA Algorithm:
Step1: Start Algorithm
Step2: Declare x1,y1,x2,y2,dx,dy,x,y as integer variables.
Step3: Enter value of x1,y1,x2,y2.
Step4: Calculate dx = x2-x1
Step5: Calculate dy = y2-y1
Step6: Calculate m = dy/dx
After plotting a pixel at (xi, yi), the next pixel is either S at (xi+1, yi) or T at (xi+1, yi+1):
If S is chosen:
We have xi+1 = xi + 1 and yi+1 = yi
If T is chosen:
We have xi+1 = xi + 1 and yi+1 = yi + 1
With s = y − yi and t = (yi + 1) − y the distances of the true line from S and T, the difference is s − t = 2y − 2yi − 1.
Bresenham's Line Algorithm:
Step1: Start Algorithm
Step2: Declare variable x1,x2,y1,y2,d,i1,i2,dx,dy
Step3: Enter value of x1,y1,x2,y2
Where x1,y1are coordinates of starting point
And x2,y2 are coordinates of Ending point
Step4: Calculate dx = x2-x1
Calculate dy = y2-y1
Calculate i1=2*dy
Calculate i2=2*(dy-dx)
Calculate d=i1-dx
Step5: Consider (x, y) as the starting point and xend as the maximum possible value of x.
If dx < 0
Then x = x2
y = y2
xend=x1
If dx > 0
Then x = x1
y = y1
xend=x2
Step6: Generate a point at (x, y) coordinates.
Step7: Check if the whole line is generated.
If x >= xend
Stop.
Step8: Calculate co-ordinates of the next pixel
If d < 0
Then d = d + i1
If d ≥ 0
Then d = d + i2
Increment y = y + 1
Step9: Increment x = x + 1
Step10: Draw a point of latest (x, y) coordinates
Step11: Go to step 7
Step12: End of Algorithm
Example: Starting and Ending position of the line are (1, 1) and (8, 5). Find intermediate points.
Solution: x1=1
y1=1
x2=8
y2=5
dx= x2-x1=8-1=7
dy=y2-y1=5-1=4
I1=2* ∆y=2*4=8
I2=2*(∆y-∆x)=2*(4-7)=-6
d = I1-∆x=8-7=1
x y d=d+I1 or I2
1 1 d+I2=1+(-6)=-5
2 2 d+I1=-5+8=3
3 2 d+I2=3+(-6)=-3
4 3 d+I1=-3+8=5
5 3 d+I2=5+(-6)=-1
6 4 d+I1=-1+8=7
7 4 d+I2=7+(-6)=1
8 5
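The steps above can be sketched in Python for the slope range 0 < m < 1 covered by this example; the function name is my own, and the code reproduces the worked table from (1, 1) to (8, 5):

```python
# A sketch of Bresenham's line algorithm for lines with slope between 0 and 1,
# following the steps above: integer-only updates of the decision parameter d.
def bresenham_line(x1, y1, x2, y2):
    dx, dy = x2 - x1, y2 - y1
    i1, i2 = 2 * dy, 2 * (dy - dx)
    d = i1 - dx                        # initial decision parameter
    x, y = x1, y1
    points = [(x, y)]
    while x < x2:
        if d < 0:
            d += i1                    # stay on the same scan line
        else:
            d += i2                    # step up to the next scan line
            y += 1
        x += 1
        points.append((x, y))
    return points
```

Running bresenham_line(1, 1, 8, 5) yields exactly the (x, y) columns of the table: (1, 1), (2, 2), (3, 2), (4, 3), (5, 3), (6, 4), (7, 4), (8, 5).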
Advantage:
1. It involves only integer arithmetic, so it is simple.
2. It avoids the generation of duplicate points.
3. It can be implemented using hardware because it does not use multiplicationand
division.
4. It is faster as compared to DDA (Digital Differential Analyzer) because it does not involve
floating point calculations like DDA Algorithm.
Disadvantage:
This algorithm is meant for basic line drawing only; initializing is not a part of Bresenham's line algorithm. So to draw smooth lines, you may want to look into a different algorithm.
1. DDA Algorithm uses floating point, i.e., real arithmetic; Bresenham's Line Algorithm uses fixed point, i.e., integer arithmetic.
2. DDA Algorithm is slower than Bresenham's Line Algorithm in line drawing because it uses real (floating-point) arithmetic; Bresenham's Algorithm is faster because it involves only integer addition and subtraction in its calculations.
3. DDA Algorithm is not as accurate and efficient as Bresenham's Line Algorithm; Bresenham's Line Algorithm is more accurate and efficient than DDA.
4. DDA Algorithm can draw circles and curves, but not as accurately as Bresenham's Line Algorithm, which can draw circles and curves with greater accuracy.
Polynomial Method:
The first method defines a circle with the second-order polynomial equation as shown in fig:
y² = r² − x²
Where x = the x coordinate
y = the y coordinate
r = the circle radius
With this method, each x coordinate in the sector from 90° to 45° is found by stepping x from 0 to r/√2, and each y coordinate is found by computing y = √(r² − x²).
Algorithm:
Step1: Set the initial variables:
r = circle radius
(h, k) = coordinates of the circle center
x = 0
i = step size
xend = r/√2
Step2: Test to determine whether the entire circle has been scan-converted.
If x > xend then stop.
Step3: Compute y = √(r² − x²)
Step4: Plot the eight points found by symmetry concerning the center (h, k) at the current (x, y)
coordinates.
Plot (x + h, y +k) Plot (-x + h, -y + k)
Plot (y + h, x + k) Plot (-y + h, -x + k)
Plot (-y + h, x + k) Plot (y + h, -x + k)
Plot (-x + h, y + k) Plot (x + h, -y + k)
Step5: Increment x = x + i
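The five steps above translate directly into a short sketch; the function name polynomial_circle and the set-based collection of pixels are my own choices for illustration:

```python
import math

# A sketch of the polynomial method above: step x from 0 to r/sqrt(2),
# compute y = sqrt(r^2 - x^2), and record the eight symmetric points
# about the centre (h, k) at each step.
def polynomial_circle(h, k, r, step=1):
    points = set()
    x, x_end = 0.0, r / math.sqrt(2)
    while x <= x_end:
        y = math.sqrt(r * r - x * x)
        xi, yi = int(x + 0.5), int(y + 0.5)   # round to nearest pixel
        # the eight symmetry points from Step 4
        for px, py in ((xi, yi), (-xi, -yi), (yi, xi), (-yi, -xi),
                       (-yi, xi), (yi, -xi), (-xi, yi), (xi, -yi)):
            points.add((px + h, py + k))
        x += step
    return points
```

For r = 5 centred at the origin, the computed octant contains (0, 5) and (3, 4), and symmetry fills in points such as (5, 0) and (-3, -4).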
Bresenham’s Algorithm
We cannot display a continuous arc on a raster display. Instead, we have to choose the nearest pixel position to complete the arc.
From the following illustration, you can see that we have put a pixel at the (X, Y) location and now need to decide where to put the next pixel: at N (X+1, Y) or at S (X+1, Y−1).
Algorithm:
Step 1 − Get the coordinates of the center of the circle and the radius, and store them in x, y, and R respectively. Set P = 0 and Q = R.
Step 2 − Set the decision parameter D = 3 − 2R.
Step 3 − Repeat through Step 8 while P ≤ Q.
Step 4 − Call DrawCircle(X, Y, P, Q).
Step 5 − Increment the value of P.
Step 6 − If D < 0, then D = D + 4P + 6.
Step 7 − Else set Q = Q − 1, D = D + 4(P − Q) + 10.
Step 8 − Call DrawCircle(X, Y, P, Q).
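The eight steps above can be sketched as follows; the function collects the eight symmetric pixels itself instead of calling a separate DrawCircle, which is a simplification for illustration:

```python
# A sketch of the circle algorithm above: P = 0, Q = R, D = 3 - 2R, with the
# eight symmetric points about the centre (xc, yc) recorded each iteration.
def bresenham_circle(xc, yc, r):
    points = set()
    p, q, d = 0, r, 3 - 2 * r
    while p <= q:
        # eight-way symmetry (the DrawCircle call in Steps 4 and 8)
        for a, b in ((p, q), (q, p), (-p, q), (-q, p),
                     (p, -q), (q, -p), (-p, -q), (-q, -p)):
            points.add((xc + a, yc + b))
        p += 1                      # Step 5
        if d < 0:
            d += 4 * p + 6          # Step 6
        else:
            q -= 1
            d += 4 * (p - q) + 10   # Step 7
    return points
```

For a radius-3 circle at the origin, the first octant yields (0, 3), (1, 3), (2, 2), and symmetry supplies points such as (3, 0) and (-3, 0).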
Instead of relying on the boundary of the object, it relies on the fill color. In other words,
it replaces the interior color of the object with the fill color. When no more pixels of the
original interior color exist, the algorithm is completed.
Once again, this algorithm relies on the Four-connect or Eight-connect method of filling
in the pixels. But instead of looking for the boundary color, it is looking for all adjacent pixels
that are a part of the interior.
4-Connected Polygon
In this technique 4-connected pixels are used as shown in the figure. We are putting the
pixels above, below, to the right, and to the left side of the current pixels and this process will
continue until we find a boundary with different color.
Algorithm
Step 1 − Initialize the values of the seed point (seedx, seedy), fcol and dcol.
Step 2 − Define the boundary values of the polygon.
Step 3 − Check if the current seed point has the default color; if so, repeat steps 4 and
5 until the boundary pixels are reached: if getpixel(x, y) = dcol, then repeat steps 4 and 5.
Step 4 − Change the default color with the fill color at the seed point.
SetPixel(seedx, seedy, fcol)
Step 5 − Recursively follow the procedure with the four neighbourhood points.
FloodFill (seedx − 1, seedy, fcol, dcol)
FloodFill (seedx + 1, seedy, fcol, dcol)
FloodFill (seedx, seedy − 1, fcol, dcol)
FloodFill (seedx, seedy + 1, fcol, dcol)
Step 6 − Exit
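The steps above can be sketched as a recursive function over a 2D grid of colour values; the grid representation and bounds check are my additions for a self-contained example:

```python
# A sketch of 4-connected flood fill: replace the default colour dcol with the
# fill colour fcol starting from the seed, recursing on the four neighbours.
def flood_fill4(grid, x, y, fcol, dcol):
    # stop at the grid edge or at any pixel that is not the default colour
    if x < 0 or y < 0 or y >= len(grid) or x >= len(grid[0]):
        return
    if grid[y][x] != dcol:
        return
    grid[y][x] = fcol                          # Step 4: recolour this pixel
    flood_fill4(grid, x - 1, y, fcol, dcol)    # left
    flood_fill4(grid, x + 1, y, fcol, dcol)    # right
    flood_fill4(grid, x, y - 1, fcol, dcol)    # up
    flood_fill4(grid, x, y + 1, fcol, dcol)    # down
```

The recursion stops of its own accord once no adjacent pixel of the original interior colour remains, which is exactly the termination condition described above. (Deep recursion can exceed Python's default stack limit on large regions; an explicit stack would avoid that.)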
There is a problem with this technique. Consider the case as shown below where we
tried to fill the entire region. Here, the image is filled only partially. In such cases, 4-connected
pixels technique cannot be used.
8- Connected Polygon
In this technique 8-connected pixels are used, as shown in the figure. We put pixels above, below, and to the right and left of the current pixel, as we were doing in the 4-connected technique.
In addition to this, we are also putting pixels in diagonals so that entire area of the
current pixel is covered. This process will continue until we find a boundary with different color.
Algorithm
Step 1 − Initialize the values of the seed point (seedx, seedy), fcol and dcol.
Step 2 − Define the boundary values of the polygon.
Step 3 − Check if the current seed point has the default color; if so, repeat steps 4 and 5 until the
boundary pixels are reached: if getpixel(x, y) = dcol, then repeat steps 4 and 5.
Step 4 − Change the default color with the fill color at the seed point.
SetPixel(seedx, seedy, fcol)
Step 5 − Recursively follow the procedure with the eight neighbourhood points.
Step 6 − Exit
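The 8-connected variant differs from the 4-connected sketch only in the neighbour set: the four diagonals are added, so diagonally touching regions are also reached. A minimal illustration (grid representation is again my own):

```python
# A sketch of 8-connected flood fill: like the 4-connected version, but the
# recursion visits all eight neighbours, including the four diagonals.
def flood_fill8(grid, x, y, fcol, dcol):
    if x < 0 or y < 0 or y >= len(grid) or x >= len(grid[0]):
        return
    if grid[y][x] != dcol:
        return
    grid[y][x] = fcol
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx or dy:                      # skip the pixel itself
                flood_fill8(grid, x + dx, y + dy, fcol, dcol)
```

On a grid where two default-colour pixels touch only at a corner, this version fills both, while the 4-connected version would fill only one.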
The 4-connected pixel technique failed to fill the area as marked in the following figure
which won’t happen with the 8-connected technique.
Inside-outside Test
This method is also known as the counting-number method. While filling an object, we often need to identify whether a particular point is inside the object or outside it. There are two methods by which we can identify whether a particular point is inside or outside an object:
Odd-Even Rule
Nonzero winding number rule
Odd-Even Rule
In this technique, we count the edge crossings along a line from the point (x, y) to infinity. If the number of intersections is odd, then the point (x, y) is an interior point; if the number of intersections is even, then (x, y) is an exterior point. Here is an example to give you a clear idea:
From the above figure, we can see that from the point (x, y), the number of intersection points on the left side is 5 and on the right side is 3. Counted from either end, the number of intersection points is odd, so the point is considered within the object.
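The odd-even rule can be sketched by casting a ray toward +x and counting edge crossings; the function name and vertex-list polygon representation are my own choices:

```python
# A sketch of the odd-even rule: cast a horizontal ray from (x, y) toward
# +infinity and count how many polygon edges it crosses; odd means inside.
def inside_odd_even(point, polygon):
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # the edge crosses the ray's y level if its endpoints straddle y
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge meets the horizontal line through y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:              # crossing lies to the right
                inside = not inside      # toggle on each crossing
    return inside
```

For the square with corners (0, 0), (4, 0), (4, 4), (0, 4), the point (2, 2) crosses one edge to its right (odd, inside), while (5, 2) crosses none (even, outside).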
Nonzero Winding Number Rule
Imagine stretching a rubber band along the edges of the polygon, with a pin fixed at the point to be tested. When all the edges of the polygon are covered by the rubber band, check the pin fixed at the test point: if we find at least one wind at the point, we consider it within the polygon; otherwise, the point is not inside the polygon.
In another, alternative method, give directions to all the edges of the polygon. Draw a
scan line from the point to be tested toward the leftmost extent of the X direction.
Give the value 1 to all the edges which are going in the upward direction, and −1 as the
direction value to all the others.
Check the direction values of the edges which the scan line passes through and sum them up.
If the total sum of the direction values is non-zero, then the point to be tested is
an interior point; otherwise it is an exterior point.
In the above figure, summing the direction values of the edges which the scan line passes
through gives 1 − 1 + 1 = 1, which is non-zero. So the point is said to be an interior
point.
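The odd-even rule can be sketched by counting crossings of a single horizontal ray cast to the right of the test point. The polygon and the function name below are illustrative assumptions.

```python
def is_inside(point, polygon):
    """Odd-even rule: cast a ray to the right from `point` and count how
    many polygon edges it crosses; an odd count means interior."""
    x, y = point
    crossings = 0
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # does this edge straddle the horizontal line through the point?
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses that horizontal line
            xc = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if xc > x:            # crossing lies to the right of the point
                crossings += 1
    return crossings % 2 == 1

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
```

For the square above, `is_inside((2, 2), square)` finds one crossing (odd, interior) while a point to the right of the square finds none (even, exterior).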
UNIT – III
TWO DIMENSIONAL TRANSFORMATIONS
Basic Transformation
Translation – T(tx, ty) – translation distances
Scaling – S(sx, sy) – scale factors
Rotation – R(θ) – rotation angle
Translation
A translation is applied to an object by representing it along a straight line path
from one coordinate location to another adding translation distances, tx, ty to original
coordinate position (x,y) to move the point to a new position (x’,y’) tox’ = x + tx, y’ = y + ty
Moving a polygon from one position to another position with the translation vector
(-5.5, 3.75)
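The translation above can be sketched in Python. The triangle's vertices are an assumed example; the translation vector (−5.5, 3.75) is the one from the text.

```python
def translate(points, tx, ty):
    """Apply x' = x + tx, y' = y + ty to every vertex of a polygon."""
    return [(x + tx, y + ty) for (x, y) in points]

polygon = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]   # assumed triangle
moved = translate(polygon, -5.5, 3.75)            # translation vector from the text
```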
Rotations:
A two-dimensional rotation is applied to an object by repositioning it along a
circular path on xy plane. To generate a rotation, specify a rotation angle θ and the
position (xr, yr) of the rotation point (pivot point) about which the object is to be rotated.
Positive values for the rotation angle define counter clock wise rotation about pivot
point. Negative value of angle rotates objects in clock wise direction. The transformation
can also be described as a rotation about a rotation axis perpendicular to xy plane and
passes through pivot point.
Rotation of a point from position (x, y) to position (x’, y’) through angle θ relative to
coordinate origin
The transformation equations for rotation of a point position P when the pivot point is at
coordinate origin. In figure r is constant distance of the point positions Ф is the original
angular of the point from horizontal and θ is the rotation angle.The transformed
coordinates in terms of angle θ and Ф
x' = r cos(θ + Ф) = r cosθ cosФ − r sinθ sinФ = x cosθ − y sinθ
y' = r sin(θ + Ф) = r sinθ cosФ + r cosθ sinФ = x sinθ + y cosθ
Rotation Equation: P' = R · P, with rotation matrix
R = | cosθ  −sinθ |
    | sinθ   cosθ |
Note: Positive values for the rotation angle define counterclockwise rotations about the
rotation point and negative values rotate objects in the clockwise.
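A sketch of rotation about a pivot point. The function name and the general pivot form are illustrative assumptions; with the pivot at the origin the code reduces to the equations above.

```python
import math

def rotate(point, theta, pivot=(0.0, 0.0)):
    """Rotate (x, y) by angle theta (radians, CCW positive) about (xr, yr).

    x' = xr + (x - xr) cos(theta) - (y - yr) sin(theta)
    y' = yr + (x - xr) sin(theta) + (y - yr) cos(theta)
    """
    x, y = point
    xr, yr = pivot
    c, s = math.cos(theta), math.sin(theta)
    return (xr + (x - xr) * c - (y - yr) * s,
            yr + (x - xr) * s + (y - yr) * c)
```

For example, rotating (1, 0) by +90° about the origin gives (0, 1), confirming that positive angles rotate counterclockwise.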
Scaling
A scaling transformation alters the size of an object. This operation can be carried
out for polygons by multiplying the coordinate values (x, y) of each vertex by scaling factors
Sx and Sy to produce the transformed coordinates (x', y'). The scaling factor Sx scales the object in the x
direction, while Sy scales it in the y direction. The transformation equations are
x' = x·Sx,  y' = y·Sy
or, in matrix form,
| x' |   | Sx  0 | | x |
| y' | = |  0 Sy | | y |
i.e. P' = S · P
Turning a square (a) into a rectangle (b) with scaling factors sx = 2 and sy = 1. Any
positive numeric values are valid for the scaling factors sx and sy. Values less than 1 reduce the
size of the object and values greater than 1 produce an enlarged object.
To get uniform scaling it is necessary to assign the same value to sx and sy. Unequal
values for sx and sy result in non-uniform scaling.
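A sketch of the scaling equations, reproducing the square-to-rectangle example with sx = 2 and sy = 1; the unit square is an assumed input.

```python
def scale(points, sx, sy):
    """Apply x' = x * sx, y' = y * sy (relative to the origin) to each vertex."""
    return [(x * sx, y * sy) for (x, y) in points]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
rect = scale(square, 2, 1)   # sx = 2, sy = 1 turns the square into a rectangle
```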
In homogeneous coordinates, the matrix forms are:
For Translation
| 1  0  tx |
| 0  1  ty |
| 0  0   1 |
For Scaling
| Sx  0  0 |
|  0 Sy  0 |
|  0  0  1 |
For Rotation
| cosθ  −sinθ  0 |
| sinθ   cosθ  0 |
|  0      0    1 |
Composite Transformations
A composite transformation is a sequence of transformations, one followed by the other.
We can set up a matrix for any sequence of transformations as a composite transformation
matrix by calculating the matrix product of the individual transformations.
Translation
If two successive translation vectors (tx1, ty1) and (tx2, ty2) are applied to a coordinate
position P, the final transformed location P' is calculated as
P' = T(tx2, ty2)·{T(tx1, ty1)·P} = {T(tx2, ty2)·T(tx1, ty1)}·P
where T(tx2, ty2)·T(tx1, ty1) = T(tx1 + tx2, ty1 + ty2), i.e. successive translations are additive.
Rotations
Two successive rotations applied to point P produce the transformed position
P’ = R(θ2).{R(θ1).P} = {R(θ2).R(θ1)}.P
By multiplying the two rotation matrices, we can verify that two successive rotations are
additive:
R(θ2).R(θ1) = R(θ1 + θ2)
So that the final rotated coordinates can be calculated with the composite rotation matrix
asP’ = R(θ1 + θ2).P
Scaling
Concatenating the transformation matrices for two successive scaling operations produces the
following composite scaling matrix:
S(sx2, sy2)·S(sx1, sy1) = S(sx1·sx2, sy1·sy2)
General Pivot-Point Rotation
1. Translate the object so that the pivot point is moved to the coordinate origin.
2. Rotate the object about the origin.
3. Translate the object so that the pivot point is returned to its original position.
The composite transformation matrix for this sequence is obtained with the concatenation
T(xr, yr)·R(θ)·T(−xr, −yr) = R(xr, yr, θ)
General Fixed-Point Scaling
1. Translate the object so that the fixed point coincides with the coordinate origin.
2. Scale the object with respect to the origin.
3. Use the inverse translation of step 1 to return the object to its original position.
Concatenating the matrices for these three operations produces the required scaling matrix:
T(xf, yf)·S(sx, sy)·T(−xf, −yf) = S(xf, yf, sx, sy)
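As an illustrative sketch (the function names and the chosen fixed point are assumptions), the translate–scale–translate-back sequence can be verified with 3×3 homogeneous matrices:

```python
def matmul(a, b):
    """3x3 matrix product for homogeneous 2D transforms."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def T(tx, ty):
    """Homogeneous translation matrix."""
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def S(sx, sy):
    """Homogeneous scaling matrix (about the origin)."""
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def apply(m, p):
    """Apply a homogeneous 3x3 matrix to the point p = (x, y)."""
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# scale by 2 about the fixed point (1, 1): T(1,1) . S(2,2) . T(-1,-1)
M = matmul(matmul(T(1, 1), S(2, 2)), T(-1, -1))
```

Applying `M` leaves the fixed point (1, 1) unchanged while every other point moves away from it, which is exactly what fixed-point scaling should do.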
Other Transformations
1. Reflection
2. Shear
Reflection:
A reflection is a transformation that produces a mirror image of an object. The
mirror image for a two-dimensional reflection is generated relative to an axis of reflection.
We can choose an axis of reflection in the xy plane, an axis perpendicular to the xy plane, or
the coordinate origin.
Reflection about the diagonal line y = x is accomplished with the transformation matrix
| 0  1  0 |
| 1  0  0 |
| 0  0  1 |
Reflection about the diagonal line y = −x is accomplished with the transformation matrix
|  0  −1  0 |
| −1   0  0 |
|  0   0  1 |
Shear
A transformation that slants the shape of an object is called a shear transformation.
Two common shearing transformations are used: one shifts x coordinate values and the other
shifts y coordinate values. However, in both cases only one coordinate (x or y) changes its
value and the other preserves its value.
X - Shear
The x-shear preserves the y coordinates but changes the x values, which causes
vertical lines to tilt right or left, as shown in the figure.
The transformation matrix for x-shear is
| 1  shx  0 |
| 0   1   0 |
| 0   0   1 |
Page 45 of 82
STUDY MATERIAL FOR
B.C.A Vth SEM.
COMPUTER GRAPHICS
Example: x' = x, y' = shy·(x − xref) + y, with shy = ½ and xref = −1
(a y-shear relative to the reference line x = xref).
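The example can be sketched in Python; the unit square used as input is an assumption.

```python
def y_shear(points, shy, xref=0.0):
    """x' = x,  y' = shy * (x - xref) + y : shear relative to the line x = xref."""
    return [(x, shy * (x - xref) + y) for (x, y) in points]

unit_square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sheared = y_shear(unit_square, 0.5, xref=-1.0)   # shy = 1/2, xref = -1
```

Each vertex keeps its x value while its y value is raised in proportion to its distance from the reference line x = −1.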
3D Transformation:
In Computer graphics, Transformation is a process of modifying and re-positioning the
existing graphics.
Transformation Techniques:
In computer graphics, various transformation techniques are
1. Translation
2. Rotation
3. Scaling
4. Reflection
5. Shear
Consider a point object O has to be moved from one position to another in a 3D plane.
Let
Initial coordinates of the object O = (Xold, Yold, Zold)
New coordinates of the object O after translation = (Xnew, Ynew, Znew)
Translation vector or Shift vector = (Tx, Ty, Tz)
This translation is achieved by adding the translation coordinates to the old coordinates of the
object as-
Xnew = Xold + Tx (This denotes translation towards X axis)
Ynew = Yold + Ty (This denotes translation towards Y axis)
Znew = Zold + Tz (This denotes translation towards Z axis)
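A minimal sketch of the 3D translation equations (the function name is an assumption):

```python
def translate3d(point, tx, ty, tz):
    """Xnew = Xold + Tx, Ynew = Yold + Ty, Znew = Zold + Tz."""
    x, y, z = point
    return (x + tx, y + ty, z + tz)
```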
Let-
Initial coordinates of the object O = (Xold, Yold, Zold)
Initial angle of the object O with respect to origin = Φ
Rotation angle = θ
New co-ordinates of the object O after rotation = (Xnew, Ynew, Znew)
Rotation about the Y axis:
Xnew = Zold x sinθ + Xold x cosθ
Ynew = Yold
Znew = Zold x cosθ – Xold x sinθ
In three dimensions, an object can be sheared along the X direction, the Y direction,
or the Z direction. So, there are three versions of shearing:
1. Shearing in X direction
2. Shearing in Y direction
3. Shearing in Z direction
Shearing in X Axis-
Shearing in X axis is achieved by using the following shearing equations-
Xnew = Xold
Ynew = Yold + Shy x Xold
Znew = Zold + Shz x Xold
Shearing in Y Axis-
Shearing in Y axis is achieved by using the following shearing equations:
Xnew = Xold + Shx x Yold
Ynew = Yold
Znew = Zold + Shz x Yold
Shearing in Z Axis-
Shearing in Z axis is achieved by using the following shearing equations-
Xnew = Xold + Shx x Zold
Ynew = Yold + Shy x Zold
Znew = Zold
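The three shearing versions above can be sketched as follows (function names assumed):

```python
def shear_x(p, shy, shz):
    """Shear along the X axis: Y and Z pick up multiples of X."""
    x, y, z = p
    return (x, y + shy * x, z + shz * x)

def shear_y(p, shx, shz):
    """Shear along the Y axis: X and Z pick up multiples of Y."""
    x, y, z = p
    return (x + shx * y, y, z + shz * y)

def shear_z(p, shx, shy):
    """Shear along the Z axis: X and Y pick up multiples of Z."""
    x, y, z = p
    return (x + shx * z, y + shy * z, z)
```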
UNIT – IV
CLIPPING
Viewing and clipping
The process of selecting and viewing a picture with different views is called windowing,
and the process which divides each element of the picture into its visible and invisible portions,
allowing the invisible portion to be discarded, is called clipping.
Viewing Transformation
Once object descriptions have been transferred to the viewing reference frame, we
choose the window extents in viewing coordinates and select the viewport limits in
normalized coordinates.
In order to maintain the same relative placement of the point in the viewport as in the window,
we require:
(xv − xvmin) / (xvmax − xvmin) = (xw − xwmin) / (xwmax − xwmin)
(yv − yvmin) / (yvmax − yvmin) = (yw − ywmin) / (ywmax − ywmin) .......... equation 1
Solving these expressions for the viewport position (xv, yv), we have
xv = xvmin + (xw − xwmin)·sx
yv = yvmin + (yw − ywmin)·sy .......... equation 2
where the scaling factors are
sx = (xvmax − xvmin) / (xwmax − xwmin)
sy = (yvmax − yvmin) / (ywmax − ywmin)
Equation (1) and Equation (2) can also be derived with a set of transformations that converts the
window or world coordinate area into the viewport or screen coordinate area. This
conversion is performed with the following sequence of transformations:
1. Perform a scaling transformation using a fixed point position (xwmin,ywmin) that scales
the window area to the size of the viewport.
2. Translate the scaled window area to the position of the viewport. Relative proportions
of objects are maintained if the scaling factors are the same (sx=sy).
From normalized coordinates, object descriptions are mapped to the various display
devices. Any number of output devices can be open in a particular application, and a
window-to-viewport transformation can be performed for each open output device. This mapping, called the
workstation transformation, is accomplished by selecting a window area in normalized space
and a viewport area in the coordinates of the display device. As in the figure, the workstation
transformation partitions a view so that different parts of normalized space can be displayed
on various output devices.
Viewing Transformation= T * S * T1
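Equations (1) and (2) can be sketched directly in Python; the tuple layout of the window and viewport arguments is an assumption made for illustration.

```python
def window_to_viewport(xw, yw, window, viewport):
    """Map (xw, yw) from the window (xwmin, ywmin, xwmax, ywmax)
    to the viewport (xvmin, yvmin, xvmax, yvmax)."""
    xwmin, ywmin, xwmax, ywmax = window
    xvmin, yvmin, xvmax, yvmax = viewport
    # scaling factors (equal sx and sy preserve relative proportions)
    sx = (xvmax - xvmin) / (xwmax - xwmin)
    sy = (yvmax - yvmin) / (ywmax - ywmin)
    # equation (2): offset from the window corner, scaled into the viewport
    xv = xvmin + (xw - xwmin) * sx
    yv = yvmin + (yw - ywmin) * sy
    return (xv, yv)
```

For example, the centre of a 10×10 window maps to the centre of the unit viewport.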
Line Clipping:
It is performed by using the line clipping algorithm.
1. Visible:
If a line lies within the window, i.e., both endpoints of the line lie within the window, the
line is visible and will be displayed as it is.
2. Not Visible:
If a line lies completely outside the window, it is invisible and rejected; such lines will not be
displayed. If any one of the following inequalities is satisfied, the line is considered invisible.
Let A(x1, y1) and B(x2, y2) be the endpoints of the line:
x1, x2 > xwmax;   x1, x2 < xwmin;   y1, y2 > ywmax;   y1, y2 < ywmin
3. Clipping Case:
If the line is neither a visible case nor an invisible case, it is considered to be a clipped case.
First of all, the category of a line is found based on the nine regions given below. All nine regions
are assigned codes; each code is of 4 bits. If both endpoints of the line have the region code 0000, then
the line is considered to be visible.
The centre area has the code 0000, i.e., region 5 is considered the rectangular window.
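A sketch of the 4-bit region-code computation; the bit assignment below is one common convention, not something fixed by the text.

```python
# one common bit assignment: bit 0 = left, 1 = right, 2 = bottom, 3 = top
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    """4-bit region code of the point (x, y) for the given clip window."""
    code = 0
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code   # 0 (0000) means the point is inside the window
```

A line is trivially visible when both endpoint codes are 0000, and trivially invisible when the bitwise AND of the two codes is non-zero (both endpoints lie on the same outside side).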
Step 4: If a line is a clipped case, find the intersection with the boundaries of the window using the slope
m = (y2 − y1) / (x2 − x1)
It is used for clipping lines. The line is divided into two parts: the midpoint of the line is
obtained by dividing it into two short segments. Division is repeated by again finding midpoints. This
process is continued until lines of the visible and invisible categories are obtained. Let (xi, yi) be the midpoint.
Step 5: Check each midpoint to determine whether it is nearest to the boundary of the window or not.
Step 6: If the line is not found to be totally visible or totally rejected, repeat steps 1 to 5.
Example: The window size is (-3, 1) to (2, 6). A line AB is given with co-ordinates A (-4, 2) and B
(-1, 7). Is this line visible? Find the visible portion of the line using midpoint subdivision.
Solution:
A (-4, 2), B″ (-1, 6)
The output of the algorithm is a list of polygon vertices, all of which are on the visible
side of a clipping plane. Each edge of the polygon is individually compared with the
clipping plane. This is achieved by processing the two vertices of each edge of the polygon
against the clipping boundary or plane, which results in four possible relationships between the
edge and the clipping boundary or plane (see Fig. m).
1. If the first vertex of the edge is outside the window boundary and the second vertex of
the edge is inside then the intersection point of the polygon edge with the window
boundary and the second vertex are added to the output vertex list (See Fig. m (a)).
2. If both vertices of the edge are inside the window boundary, only the second vertex is
added to the output vertex list. (See Fig. m (b)).
3. If the first vertex of the edge is inside the window boundary and the second vertex of
the edge is outside, only the edge intersection with the window boundary is added to
the output vertex list. (See Fig. m (c)).
4. If both vertices of the edge are outside the window boundary, nothing is added to the
output list. (See Fig. m (d)).
Once all vertices are processed for one clip window boundary, the output list of vertices
is clipped against the next window boundary. Going through above four cases we can realize
that there are two key processes in this algorithm.
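The four cases above can be sketched as a single pass of polygon clipping against one boundary; the left-boundary helper functions and the example triangle are illustrative assumptions.

```python
def clip_one_boundary(vertices, inside, intersect):
    """One pass of Sutherland-Hodgman polygon clipping against a single
    boundary. `inside(v)` tests visibility; `intersect(a, b)` returns the
    crossing point of edge a-b with the boundary."""
    out = []
    n = len(vertices)
    for i in range(n):
        v1, v2 = vertices[i], vertices[(i + 1) % n]
        if inside(v2):
            if not inside(v1):              # case 1: out -> in : crossing + v2
                out.append(intersect(v1, v2))
            out.append(v2)                  # case 2: in -> in : v2 only
        elif inside(v1):                    # case 3: in -> out : crossing only
            out.append(intersect(v1, v2))
        # case 4: out -> out : nothing is added
    return out

# example boundary: the vertical line x = 0, keeping the right half-plane
def inside_left(v):
    return v[0] >= 0

def cross_left(a, b):
    t = (0 - a[0]) / (b[0] - a[0])
    return (0.0, a[1] + t * (b[1] - a[1]))

tri = [(-2.0, 0.0), (2.0, 0.0), (2.0, 2.0)]
clipped = clip_one_boundary(tri, inside_left, cross_left)
```

A full clip would run this pass once per window boundary, feeding each output list into the next pass.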
One way of determining the visibility of a point or vertex is described here. Consider
that two points A and B define the window boundary and the point under consideration is V; then
these three points define a plane. Two vectors which lie in that plane are AB and AV. If this
plane is considered in the xy plane, then the vector cross product AV x AB has only a z
component, given by
z = (xV − xA)(yB − yA) − (yV − yA)(xB − xA)
The sign of the z component decides the position of Point V with respect to window
boundary.
If z is:
Positive - Point is on the right side of the window boundary.
Zero - Point is on the window boundary.
Negative - Point is on the left side of the window boundary.
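A minimal sketch of this cross-product test; the boundary points and test points are assumed for illustration.

```python
def cross_z(a, b, v):
    """z component of AV x AB for boundary A->B and test point V."""
    avx, avy = v[0] - a[0], v[1] - a[1]
    abx, aby = b[0] - a[0], b[1] - a[1]
    return avx * aby - avy * abx

A, B = (0.0, 0.0), (0.0, 1.0)   # boundary directed upward along the y axis
```

With this boundary, a point to its right gives a positive z, a point to its left a negative z, and a point on the boundary gives zero, matching the three cases listed above.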
Example:
For a polygon and clipping window shown in figure below give the list of vertices after each
boundary clipping.
Solution:
Original polygon vertices are V1, V2, V3, V4, and V5. After clipping each boundary the
new vertices are as shown in figure above.
5. To remove these parts to create a more realistic image, we must apply a hidden line or
hidden surface algorithm to set of objects.
6. The algorithm operates on different kinds of scene models, generate various forms of
output or cater to images of different complexities.
7. All use some form of geometric sorting to distinguish visible parts of objects from those
that are hidden.
8. Just as alphabetical sorting is used to differentiate words near the beginning of the
alphabet from those near the end,
9. Geometric sorting locates objects that lie near the observer and are therefore visible.
10. Hidden line and Hidden surface algorithms capitalize on various forms of coherence to
reduce the computing required to generate an image.
11. Different types of coherence are related to different forms of order or regularity in the
image.
12. Scan line coherence arises because the display of a scan line in a raster image is usually
very similar to the display of the preceding scan line.
13. Frame coherence in a sequence of images designed to show motion recognizes that
successive frames are very similar.
14. Object coherence results from relationships between different objects or between
separate parts of the same objects.
15. A hidden surface algorithm is generally designed to exploit one or more of these
coherence properties to increase efficiency.
16. Hidden surface algorithms bear a strong resemblance to two-dimensional scan
conversion.
These methods are also called a Visible Surface Determination. The implementation of
these methods on a computer requires a lot of processing time and processing power of the
computer.
The image space method requires more computations. Each object is defined clearly.
Visibility of each object surface is also determined.
Object Space Methods vs. Image Space Methods
3. Object space methods are performed at the precision with which each object is defined; no resolution is considered. Image space methods are performed using the resolution of the display device.
4. Object space calculations are not based on the resolution of the display, so a change of object can easily be adjusted. Image space calculations are resolution-based, so a change is difficult to adjust.
5. Object space methods were developed for vector graphics systems. Image space methods were developed for raster devices.
7. Vector displays used with object space methods have a large address space. Raster systems used with image space methods have a limited address space.
8. Object precision is suitable for applications where accuracy is required. Image space methods are suitable for applications where speed is required.
9. With object space methods the image can be enlarged without losing accuracy. With image space methods, enlarging the image requires a lot of calculation.
10. With object space methods, computation time increases as the number of objects in the scene increases. With image space methods, complexity increases with the complexity of the visible parts.
1. Sorting:
All surfaces are sorted in two classes, i.e., visible and invisible. Pixels are colored
accordingly.
Different sorting algorithms are applied by different hidden surface algorithms. Sorting
of objects is done using the x, y and z co-ordinates; mostly the z coordinate is used. The
efficiency of the sorting algorithm affects the hidden surface removal algorithm. For sorting
complex scenes with hundreds of polygons, complex sorts are used, i.e., quick sort, tree sort, radix
sort.
For simple objects, selection, insertion or bubble sort is used.
2. Coherence
It is used to take advantage of the regularity of the surfaces in the scene. It is based
on how much regularity exists in the scene. When we move from one polygon of an object to
another polygon of the same object, the color and shading remain unchanged.
Types of Coherence
a) Edge coherence
b) Object coherence
c) Face coherence
d) Area coherence
e) Depth coherence
a) Edge coherence:
The visibility of edge changes when it crosses another edge or it also penetrates a visible
edge.
b) Object coherence:
Each object is considered separate from the others. In object coherence, the comparison is
done using whole objects instead of edges or vertices. If object A is farther from object B, then there is
no need to compare their edges and faces.
c) Face coherence:
In this, surface properties vary smoothly across a face or polygon, which is generally small compared with the size of the image, so calculations made for one part of a face can often be applied to adjacent parts.
d) Area coherence:
It is used to group pixels that are covered by the same visible face.
e) Depth coherence:
The locations of the various polygons are separated on the basis of depth. Once the depth of a
surface at one point is calculated, the depth of points on the rest of the surface can often be determined
by a simple difference equation.
g) Frame coherence:
It is used for animated objects. It is used when there is little change in image from one
frame to another.
When the projection is taken, any projector ray from the center of projection through the
viewing screen pierces the object at two points: one is the visible front surface, and the other is the
non-visible back surface.
This algorithm acts as a preprocessing step for other algorithms. The back-face algorithm
can be represented geometrically. Each polygon has several vertices, all numbered in
clockwise order. The normal N1 is generated as the cross product of any two successive edge vectors;
N1 represents a vector perpendicular to the face, pointing outward from the polyhedron surface:
N1 = (v2 − v1) × (v3 − v2)
If N1 · P ≥ 0 the face is visible
If N1 · P < 0 the face is invisible
(where P is the viewing direction).
Advantage
1. It is a simple and straightforward method.
2. It reduces the size of the database, because there is no need to store all surfaces;
only the visible surfaces are stored.
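A sketch of the back-face test N1 = (v2 − v1) × (v3 − v2) with N1 · P ≥ 0 meaning visible. The triangle and view direction below are assumed for illustration, and whether a given face counts as visible depends on the vertex-ordering convention used.

```python
def cross(u, v):
    """3D vector cross product."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def is_visible(v1, v2, v3, view_dir):
    """Back-face test: N = (v2 - v1) x (v3 - v2); visible when N . P >= 0."""
    e1 = tuple(b - a for a, b in zip(v1, v2))   # edge v1 -> v2
    e2 = tuple(b - a for a, b in zip(v2, v3))   # edge v2 -> v3
    n = cross(e1, e2)                           # face normal
    return dot(n, view_dir) >= 0

# assumed triangle in the z = 0 plane whose normal points along +z
tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

Viewed along +z the face is reported visible, and along −z it is culled as a back face.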
Algorithm
For all pixels on the screen, set depth [x, y] to 1.0 and intensity [x, y] to a background
value.
For each polygon in the scene, find all pixels (x, y) that lie within the boundaries of a
polygon when projected onto the screen. For each of these pixels:
1. Calculate the depth z of the polygon at (x, y)
2. If z < depth [x, y], this polygon is closer to the observer than others already
recorded for this pixel. In this case, set depth [x, y] to z and intensity [x, y] to a
value corresponding to polygon's shading. If instead z > depth [x, y], the polygon
already recorded at (x, y) lies closer to the observer than does this new polygon,
and no action is taken.
3. After all polygons have been processed, the intensity array will contain the
solution.
4. The depth buffer algorithm illustrates several features common to all hidden
surface algorithms.
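The steps above can be sketched as follows. Representing each polygon as a pre-projected list of (x, y, z) pixel samples (with z normalized so 1.0 is furthest) is a simplifying assumption, standing in for the usual polygon scan conversion.

```python
def depth_buffer(width, height, polygons, background=0):
    """Minimal depth-buffer sketch. Each polygon is (pixels, shade), where
    pixels is a list of already-projected (x, y, z) samples."""
    depth = [[1.0] * width for _ in range(height)]            # step 1: furthest
    intensity = [[background] * width for _ in range(height)]
    for pixels, shade in polygons:
        for x, y, z in pixels:
            if z < depth[y][x]:          # closer than anything recorded so far
                depth[y][x] = z          # record the new depth ...
                intensity[y][x] = shade  # ... and this polygon's shading
    return intensity

# two overlapping one-pixel "polygons": the nearer one (z = 0.2) wins
img = depth_buffer(2, 1, [([(0, 0, 0.5)], 7), ([(0, 0, 0.2)], 9)])
```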
UNIT – V
MULTIMEDIA
Multimedia is simply multiple forms of media integrated together. An example
of multimedia is a web page with an animation. Besides multiple types of media being
integrated with one another, multimedia can also stand for interactive types of media such
as video games, CD ROMs that teach a foreign language, or an information Kiosk at a subway
terminal. Other terms that are sometimes used for multimedia include hypermedia and rich
media. [1]
There are a number of data types that can be characterized as multimedia data types.
These are typically the elements, or building blocks, of generalized multimedia
environments, platforms, or integrating tools. The basic types can be described as follows:
Text, Graphics , Audio, Animation, Video, Graphic Objects
Multimedia has become a huge force in human culture, industry and education.
Practically any type of information we receive can be categorized as multimedia; from
television, to magazines, to web pages, to movies, multimedia is a tremendous force in both
informing the public and entertaining us. Advertising is perhaps one of the biggest industries
that use multimedia to send their message to the masses. Multimedia in education has been
extremely effective in teaching individuals a wide range of subjects. The human brain learns
using many senses such as sight and hearing. While a lecture can be extremely informative, a
lecture that integrates pictures or video images can help an individual learn and retain
information much more effectively.
As technology progresses, so will multimedia. Today, there are plenty of new media
technologies being used to create the complete multimedia experience. For instance, virtual
reality integrates the sense of touch with video and audio media to immerse an individual into
a virtual world. Other media technologies being developed include the sense of smell that can
be transmitted via the Internet from one individual to another. Today's video games include bio
feedback.
Text :
The form in which text can be stored can vary greatly. In addition to ASCII-based
files, text is typically stored in word-processor files, spreadsheets, databases and more general
multimedia objects. With the availability and proliferation of GUIs and text fonts, the job of storing text
is becoming complex, allowing special effects (color, shades, etc.).
Graphics
There is great variance in the quality and storage size (image file formats) for still
images (bitmap: gif, jpg, bmp; vector: svg, pdf, swf, ps). Digitized images are sequences of
pixels that represent a region of the user's graphical display.
Audio
An increasingly popular data type (audio file format) being integrated into most
applications is audio. It is quite space-intensive: one minute of sound can take up 2-3 MB of
space. Several techniques are used to compress it into a suitable format.
Animation
It involves the appearance of motion caused by displaying still images one after another.
Often, animation is used for entertainment purposes. In addition to its use for entertainment,
animation is considered a form of art. It is often displayed and celebrated in film festivals
throughout the world. Also used for educational purposes.
Video
One of the most space-consuming multimedia data types is digitized video. Digitized
videos are stored as sequences of frames. Depending upon its resolution and size, a
single frame can consume up to 1 MB. Realistic video playback also requires a continuous
transfer rate for the transmission, compression, and decompression of the digitized video.
Graphics (Objects)
These consist of special data structures used to define 2D & 3D shapes through which
we can define multimedia objects. These include various formats used by image, video editing
applications.
Many data streams are controlled using a packet-based system. The common 3G and 4G
wireless platforms, as well as Internet transmissions, are composed of these sets of data
packets that are handled in specific ways. For example, packets typically include headers that
identify the origin or intended recipient, along with other information that can make data
stream handling more effective.
Multimedia applications
Electronic messaging:
Sending audio and video as attachments via email. Downloading audio and video.
Sending simple text data through mails. It also provides store and forward message facility.
Image Enhancement:
Highlighting details of image by increasing contrast. Making picture darker and
increasing grey scale level of pixels. Rotating image in real time. Adjusting RGB to get image
with proper colors.
Document Imaging:
Storing, retrieving and manipulating large volumes of data i.e. documents. Complex
documents can be send in electronics form rather than on paper. Document image systems
uses workflow method.
Multimedia in Entertainment:
Nowadays, live internet pay-to-play gaming with multiple players has become
popular. Actually, the first application of multimedia systems was in the field of entertainment,
and that too in the video game industry. The integrated audio and video effects make various
types of games more entertaining. Generally, most video games need a joystick to play.
Multimedia is mostly used in games. Text, audio, images and animations are mostly used in
computer games. The use of multimedia in games made possible to make innovative and
interactive games. It is also used in movies for entertainment, especially to develop special
effects in movies and animations. Multimedia application that allows users to actively
participate is called Interactive multimedia.
Multimedia in Advertising:
Multimedia technology is commonly used in advertisement. To promote the business
and products multimedia is used for preparing advertisement.
Multimedia in Business:
The business applications of multimedia include product demos and instant messaging.
Multimedia is used in business for training employees using projectors, presenting sales,
educating customers, etc. It helps in the promotion of the business and of new products. One of the
excellent applications is voice and live conferencing; multimedia can make an audience come
alive.
Multimedia in software:
Software engineers may use multimedia in computers for anything from entertainment to designing
digital games; it can also be used in the learning process. These multimedia applications are created by
professionals and software engineers.
Multimedia Authoring
Definition
Multimedia authoring is a process of assembling different types of media contents like
text, audio, image, animations and video as a single stream of information with the help of
various software tools available in the market. Multimedia authoring tools give an integrated
environment for joining together the different elements of a multimedia production. It gives
the framework for organizing and editing the components of a multimedia project. It enables
the developer to create interactive presentation by combining text, audio, video, graphics and
animation.
Organizing Features:
The process of organization, design and production of multimedia involve navigation
diagrams or storyboarding and flowcharting. Some of the authoring tools provide a system of
visual flowcharting or overview facility to showcase your project's structure at a macro level.
Navigation diagrams help to organize a project. Many web-authoring programs like
Dreamweaver include tools that create helpful diagrams and links among the pages of a
website.
Interactivity Features:
Interactivity empowers the end users to control the content and flow of information of
the project. Authoring tools may provide one or more levels of interactivity.
Simple branching:
Offers the ability to go to another section of the multimedia production
Conditional branching:
Supports a go to base on the result of IF-THEN decision or events
Playback Features:
When you are developing a multimedia project, you will continually be assembling
elements and testing to see how the assembly looks and performs. Therefore, the authoring system
should have a playback facility.
Hypertext:
Hypertext capabilities can be used to link graphics, some animation and other text. The
help system of Windows is an example of hypertext. Such systems are very useful when a large
amount of textual information is to be represented or referenced.
Cross-Platform Capability:
Some authoring programs are available on several platforms and provide tools for
transforming and converting files and programs from one to the other.
Internet Playability:
Because the Web has become a significant delivery medium for multimedia, authoring
systems typically provide a means to convert their output so that it can be delivered within the
context of HTML or DHTML.
MIDI
Musical Instrument Digital Interface (MIDI) is a technical protocol that governs the
interaction of digital instruments with computers and with each other. Instead of a direct
musical sound representation, MIDI provides the information on how a musical sound is made
with the help of MIDI commands. The protocol not only provides compactness but also
provides ease in manipulation and modification of notes, along with a flexible choice of
instruments.
MIDI contains information about the pitch, velocity, notation andcontrol signals for
different musical parameters such as vibration, volume,etc. It also contains information for an
instrument to start and stop a specific note. This information is used by the wavetable of the
receiving musical device to produce the sound waves. As a result, MIDI is more concise than
similar technologies and is asynchronous. The byte is the basic unit of communication for the
protocol, which uses 8-bit serial transmission, with one start and one stop bit. Each MIDI
command has its own unique sequence of bytes.
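As a small illustration of these byte-level commands, the sketch below builds a three-byte Channel Voice "Note On" message (status byte 0x90 combined with the channel number, followed by two 7-bit data bytes for pitch and velocity); the helper function name is an assumption.

```python
def note_on(channel, pitch, velocity):
    """MIDI 'Note On': status byte 0x90 | channel, then two 7-bit data
    bytes (pitch and velocity, each 0-127)."""
    assert 0 <= channel < 16 and 0 <= pitch < 128 and 0 <= velocity < 128
    return bytes([0x90 | channel, pitch, velocity])

msg = note_on(0, 60, 100)   # middle C (note number 60) at velocity 100
```

Three bytes are enough to tell the receiving instrument which note to start, which is why MIDI files are so much smaller than sampled audio.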
One of the most common applications of MIDI is in sequencers, which allow a computer
to store, modify, record and play MIDI data. Sequencers use the MIDI format for files because
of their smaller size compared to those produced by other popular data formats. MIDI files,
however, can only be used with MIDI-compatible software or hardware.
Image compression
Image compression is the application of data compression on digital images. In effect,
the objective is to reduce redundancy of the image data in order to be able to store
or transmit data in an efficient form.
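One very simple way to "reduce redundancy" is run-length encoding, which replaces runs of identical pixel values with (value, count) pairs. The sketch below is illustrative only; real image codecs use far more sophisticated schemes.

```python
# Minimal run-length encoding sketch: exploits redundancy in the form of
# long runs of identical pixel values (e.g. a flat background).
def rle_encode(pixels):
    """Return a list of (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1] = (p, runs[-1][1] + 1)
        else:
            runs.append((p, 1))
    return runs

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the pixel list."""
    return [v for v, n in runs for _ in range(n)]

row = [255] * 6 + [0] * 3 + [255]
encoded = rle_encode(row)
print(encoded)            # [(255, 6), (0, 3), (255, 1)]
assert rle_decode(encoded) == row
```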
Chroma subsampling:
This takes advantage of the fact that the eye perceives brightness more sharply than
color, by dropping half or more of the chrominance information in the image.
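The common 4:2:0 scheme keeps the luma plane at full resolution but averages each 2x2 block of a chroma plane into a single sample, a fourfold reduction. A minimal sketch of that averaging step:

```python
# Sketch of 4:2:0 chroma subsampling: each 2x2 block of a chroma plane
# (Cb or Cr) is averaged into one sample; luma is left untouched.
def subsample_420(chroma):
    """chroma: 2-D list with even height and width. Returns half-size plane."""
    h, w = len(chroma), len(chroma[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            total = (chroma[y][x] + chroma[y][x + 1] +
                     chroma[y + 1][x] + chroma[y + 1][x + 1])
            row.append(total // 4)   # integer average of the 2x2 block
        out.append(row)
    return out

cb = [[100, 104, 200, 200],
      [ 96, 100, 200, 200]]
print(subsample_420(cb))  # [[100, 200]]
```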
Transform coding:
This is the most commonly used method. A Fourier-related transform such as the DCT or
the wavelet transform is applied, followed by quantization and entropy coding.
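The pipeline can be sketched on a 1-D signal. This is a simplified illustration: real JPEG applies a 2-D DCT to 8x8 blocks and uses a per-coefficient quantization table, not the single step size assumed here.

```python
import math

# Sketch of transform coding: a 1-D DCT-II followed by coarse quantization.
# Small high-frequency coefficients round to zero, which makes the result
# cheap to entropy-code afterwards.
def dct_1d(x):
    """Unnormalized DCT-II of a list of samples."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k)
                for n in range(N))
            for k in range(N)]

def quantize(coeffs, step=10.0):
    """Divide each coefficient by a step size and round (the lossy part)."""
    return [round(c / step) for c in coeffs]

block = [52, 55, 61, 66, 70, 61, 64, 73]   # one row of pixel samples
q = quantize(dct_1d(block))
print(q)   # first (DC) coefficient is large; most others are near zero
```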
Fractal compression:
This lossy method relies on the fact that parts of an image often resemble other parts
of the same image; the image is encoded as a set of transformations that reproduce it
from its own self-similar regions.
The best image quality at a given bit-rate (or compression rate) is the main goal of
image compression. However, there are other important properties of image compression
schemes:
Scalability:
Scalability (also called progressive coding) refers to a quality reduction achieved by
manipulating the bitstream or file without fully decompressing and re-compressing the
data. It comes in several forms:
Resolution progressive:
First encode a lower image resolution; then encode the difference to higher resolutions.
Component progressive:
First encode grey; then color.
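Resolution-progressive coding can be sketched as a two-layer scheme: transmit a half-resolution base layer first (a usable preview), then a residual that refines it back to the full signal. The functions below are an illustrative toy, not any standard's actual layering.

```python
# Sketch of resolution-progressive encoding on a 1-D signal:
# layer 1 = downsampled base, layer 2 = residual against the upsampled base.
def downsample(signal):
    """Average each pair of samples (signal length assumed even)."""
    return [(signal[i] + signal[i + 1]) // 2 for i in range(0, len(signal), 2)]

def upsample(low):
    """Nearest-neighbour prediction back to full resolution."""
    return [v for v in low for _ in range(2)]

def encode_progressive(signal):
    base = downsample(signal)
    residual = [s - p for s, p in zip(signal, upsample(base))]
    return base, residual        # send base first, residual later

def decode_full(base, residual):
    return [p + r for p, r in zip(upsample(base), residual)]

signal = [10, 12, 20, 22, 30, 34]
base, residual = encode_progressive(signal)
print(base)                      # coarse preview: [11, 21, 32]
assert decode_full(base, residual) == signal
```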
Meta information:
Compressed data can contain information about the image which can be used to
categorize, search or browse images. Such information can include color and texture statistics,
small preview images and author/copyright information.
Video Compression
Definition - What does video compression mean?
Video compression is the process of encoding a video file in such a way that it consumes
less space than the original file and is easier to transmit over the network/Internet.
It is a type of compression technique that reduces the size of video file formats by
eliminating redundant and non-functional data from the original video file.
Video compression is performed through a video codec that works on one or more
compression algorithms. Usually video compression is done by removing repetitive images,
sounds and/or scenes from a video. For example, a video may have the same background,
image or sound played several times or the data displayed/attached with video file is not that
important. Video compression will remove all such data to reduce the video file size.
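The idea of removing repetition between frames (inter-frame or temporal compression) can be sketched with simple frame differencing: instead of storing a whole frame, store only the pixels that changed since the previous one. Real codecs go much further (motion compensation, transform coding), but the principle is the same.

```python
# Sketch of inter-frame compression by frame differencing:
# store only (index, new_value) pairs for pixels that changed.
def diff_frame(prev, curr):
    """Return a sparse list of changed pixels between two flat frames."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr)) if p != c]

def apply_diff(prev, changes):
    """Reconstruct the current frame from the previous frame + the delta."""
    frame = list(prev)
    for i, v in changes:
        frame[i] = v
    return frame

frame1 = [10, 10, 10, 10, 10, 10]
frame2 = [10, 10, 99, 10, 10, 10]        # only one pixel changed
delta = diff_frame(frame1, frame2)
print(delta)                              # [(2, 99)]
assert apply_diff(frame1, delta) == frame2
```

A static background thus costs almost nothing per frame; only moving regions generate data.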
Once a video is compressed, its original format is changed into a different format
(depending on the codec used). The video player must support that video format or be
integrated with the compressing codec to play the video file.
JPG format
JPG compression changes the usual structure of a pixel graphic by grouping the pixels
into 8 x 8 blocks and processing each block separately. First, a color conversion from the
RGB color space to the YCbCr color model is performed; then a low-pass filter removes
high frequencies in order to reduce the file size. Depending on the chosen compression
level, this process is associated with a certain loss of quality, since not all image
information is retained.
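The RGB-to-YCbCr conversion mentioned above can be written out directly. The coefficients below are the standard ITU-R BT.601 values used by JPEG/JFIF; the function itself is just an illustrative helper.

```python
# Sketch of the RGB -> YCbCr color conversion performed before JPEG
# compression (ITU-R BT.601 coefficients, as used in JFIF).
def rgb_to_ycbcr(r, g, b):
    """Convert 8-bit RGB to (luma Y, chroma Cb, chroma Cr)."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return round(y), round(cb), round(cr)

print(rgb_to_ycbcr(255, 255, 255))  # white -> (255, 128, 128)
print(rgb_to_ycbcr(0, 0, 0))        # black -> (0, 128, 128)
```

Note that for greys the two chroma channels sit at the neutral value 128; all the visible detail lands in Y, which is exactly what makes chroma subsampling cheap.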
PNG format
PNG (Portable Network Graphics), a universally recognized graphic file format
developed by the World Wide Web Consortium (W3C), appeared for the first time in 1996. As a
patent-free and modern alternative to GIF (Graphic Interchange Format), it is characterized by
the possibility of lossless compression as well as a maximum color depth of up to 24 bits per
pixel (16.7 million colors) – or as many as 32 bits with alpha channel. In contrast to GIF,
however, animations can’t be generated with PNG.
The PNG format supports both transparency and semi-transparency (thanks to the
integrated alpha channel), which makes it suitable for all types of images, as well as interlacing,
allowing for an accelerated build-up of the image file during the loading process. The color and
brightness correction mechanisms ensure that PNG image files look the same on different
systems. To compress a graphic in PNG format, you can use tools such as pngcrush.
Because of the lossless compression process, the files are still comparatively large, which
is why the format is less suitable than JPG for displaying photographs. PNG also offers
the possibility of reducing the color depth (from 1 to 32 bits per pixel).
Recommended application scenario: storing and publishing small images and graphics
(logos, icons, bar charts, etc.), graphics with transparency, lossless photos
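Part of PNG's lossless compression is scanline filtering: before DEFLATE runs, each row is transformed so that smooth gradients turn into small, repetitive values. The sketch below shows the real PNG "Sub" filter (type 1), simplified to one byte per pixel.

```python
# Sketch of PNG's "Sub" scanline filter (filter type 1), assuming one byte
# per pixel: each byte is replaced by its difference from the byte to its
# left (mod 256), which makes gradients highly compressible for DEFLATE.
def sub_filter(scanline):
    return [(b - (scanline[i - 1] if i > 0 else 0)) % 256
            for i, b in enumerate(scanline)]

def sub_unfilter(filtered):
    """Invert the filter: each byte is the delta plus the previous output."""
    out = []
    for i, d in enumerate(filtered):
        out.append((d + (out[i - 1] if i > 0 else 0)) % 256)
    return out

row = [10, 12, 14, 16, 18]           # a smooth gradient
f = sub_filter(row)
print(f)                              # [10, 2, 2, 2, 2] -- mostly repeats
assert sub_unfilter(f) == row
```

Because filtering is perfectly reversible, no image information is lost; only the statistics of the data change in the compressor's favour.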
GIF format
The online portal CompuServe introduced the Graphics Interchange Format (GIF) in 1987
as a color alternative to the black-and-white X BitMap (XBM) format. In contrast to other
solutions such as PCX or MacPaint, GIF files needed significantly less space thanks to
efficient LZW compression (data compression with the Lempel-Ziv-Welch algorithm), which
made the format very popular when the internet first took shape. For photos and graphics,
JPG and PNG are now clearly ahead, but since version GIF89a (1989) the format has been
able to combine several individual images in a single file, which is why it is still used to
create small animations.
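The LZW algorithm behind GIF's efficiency can be sketched compactly: it builds a dictionary of previously seen strings on the fly and emits short dictionary codes instead of repeating the strings. This is the compression side only, operating on a plain string for readability; GIF actually applies it to palette indices with variable-width codes.

```python
# Sketch of LZW compression (the algorithm used by GIF).
def lzw_compress(data: str):
    # Start with all single characters as codes 0-255.
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    current = ""
    codes = []
    for ch in data:
        if current + ch in dictionary:
            current += ch                    # extend the current match
        else:
            codes.append(dictionary[current])
            dictionary[current + ch] = next_code   # learn the new string
            next_code += 1
            current = ch
    if current:
        codes.append(dictionary[current])
    return codes

print(lzw_compress("ABABABA"))  # [65, 66, 256, 258] -- 7 symbols in 4 codes
```

Repetitive data shrinks quickly because ever-longer repeated substrings get their own single code.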
All color information in a GIF is stored in a table, the color palette. The table can contain
up to 256 colors (8 bits), which is why the image format is not suitable for displaying
photographs. One palette entry can also be defined as transparent; however, unlike the
more modern PNG, partial transparency is not possible, meaning that a pixel is either
fully visible or fully invisible.
Recommended application scenario: creating animations, clip art, and logos; essentially
anything where a low color depth isn't problematic.
TIFF format
TIFF (Tagged Image File Format) is a graphic file format that is especially used for
transmitting print data and high-resolution images. It was developed as early as 1986 by
Aldus (now part of Adobe) in cooperation with Microsoft and is specially optimized for
embedding color separations and color profiles (ICC profiles) of scanned images.
Furthermore, TIFF supports the CMYK color model and allows a color depth of up to
16 bits per color channel (a total color depth of 48 bits). Since 1992, the format has
supported lossless LZW compression, the same algorithm used in the GIF format.
Thanks to these features, TIFF has become the standard for images where quality plays
a more important role than file size; publishers and print media therefore work with this
image format. Archiving monochrome graphics, e.g. technical drawings, is another
common application. GeoTIFF extends the format with additional tags for storing and
presenting raster-based geo-information (maps, aerial images, etc.).
Recommended application scenario: transferring high-quality images with high resolution for
printing
BMP format
BMP (Windows Bitmap) was developed for Microsoft and IBM operating systems and
was first released in 1990 with Windows 3.0 as a memory format for pixel graphics with a color
depth of up to 24 bits per pixel. The uncompressed image format assigns exactly one color
value to each pixel, which is why BMP files are very large by default. For this reason, the format
is not suitable for use on the web.