MODULE 3
• Graphics programs use several kinds of input data, such as coordinate positions, attribute values,
character-string specifications, geometric-transformation values, viewing conditions, and
illumination parameters.
• Many graphics packages, including the International Standards Organization (ISO) and American
National Standards Institute (ANSI) standards, provide an extensive set of input functions for
processing such data.
• But input procedures require interaction with display-window managers and specific hardware
devices.
• Therefore, some graphics systems, particularly those that provide mainly device-independent
functions, often include relatively few interactive procedures for dealing with input data.
When input functions are classified according to data type, any device that is used to provide the specified
data is referred to as a logical input device for that data type. The standard logical input-data classifications
are:
1. STRING: A string device is a logical device that provides the ASCII values of input characters to the user program. This logical device is usually implemented by means of a physical keyboard.
2. LOCATOR: A locator device provides a position in world coordinates to the user program. It is usually implemented by means of pointing devices such as a mouse or trackball.
3. PICK: A pick device returns the identifier of an object on the display to the user program. It is usually
implemented with the same physical device as the locator but has a separate software interface to the user
program. In OpenGL, we can use a process of selection to accomplish picking.
4. CHOICE: A choice device allows the user to select one of a discrete number of options. In OpenGL, we can use various widgets provided by the window system. A widget is a graphical interactive component provided by the window system or a toolkit; widgets include menus, scrollbars, and graphical buttons. For example, a menu with n selections acts as a choice device, allowing the user to select one of n alternatives.
5. VALUATORS: A valuator device provides analog input to the user program; on some graphics systems, boxes or dials are used to provide these values.
6. STROKE: A stroke device returns an array of locations. For example, pushing down a mouse button starts the transfer of data into a specified array, and releasing the button ends this transfer.
Input devices can provide input to an application program in terms of two entities:
1. Measure of a device is what the device returns to the user program.
2. Trigger of a device is a physical input on the device with which the user can send a signal to the computer.
Example 1: The measure of a keyboard is a single character or an array of characters, whereas the trigger is the Enter key.
Example 2: The measure of a mouse is the position of the cursor, whereas the trigger is a press of a mouse button.
The application program can obtain the measure and trigger in three distinct modes:
1. REQUEST MODE: In this mode, the measure of the device is not returned to the program until the device is triggered. For example, consider a typical C program that reads a character input using scanf(). When the program needs the input, it halts when it encounters the scanf() statement and waits while the user types characters at the terminal. The data are placed in a keyboard buffer (the measure), whose contents are returned to the program only after the Enter key (the trigger) is pressed.
As another example, consider a logical device such as a locator: we can move our pointing device to the desired location and then trigger the device with its button; the trigger causes the location to be returned to the application program.
2. SAMPLE MODE: In this mode, input is immediate: as soon as the function call in the user program is executed, the measure is returned, so no trigger is needed. Both request and sample modes are useful only when there is a single input device from which input is to be taken. In applications such as flight simulators or computer games, however, a variety of input devices is used, and these modes are inadequate. Thus, event mode is used.
3. EVENT MODE: This mode can handle multiple interactions. Suppose that we are in an environment with multiple input devices, each with its own trigger and each running a measure process. Whenever a device is triggered, an event is generated. The device measure, including an identifier for the device, is placed in an event queue. If the queue is empty, the application program waits until an event occurs. If there is an event in the queue, the program can look at the type of the first event and then decide what to do.
Echo Feedback
Requests can usually be made in an interactive input program for an echo of input data and associated
parameters. When an echo of the input data is requested, it is displayed within a specified screen area. Echo
feedback can include, for example, the size of the pick window, the minimum pick distance, the type and
size of a cursor, the type of highlighting to be employed during pick operations, the range (minimum and maximum) for valuator input, and the resolution (scale) for valuator input.
Callback Functions
For device-independent graphics packages, a limited set of input functions can be provided in an auxiliary library. Input procedures can then be handled as callback functions that interact with the system software. These functions specify what actions are to be taken by a program when an input event occurs. Typical input events are moving a mouse, pressing a mouse
taken by a program when an input event occurs. Typical input events are moving a mouse, pressing a mouse
button, or pressing a key on the keyboard.
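As a sketch of this callback style, the following minimal GLUT program registers handlers for display, mouse, and keyboard events; the handler names (mouseHandler, keyHandler) are illustrative, not part of GLUT.
#include <GL/glut.h>
/* Illustrative handlers; only the registration pattern matters here. */
void mouseHandler (int button, int state, int x, int y) { }  // Mouse-button events.
void keyHandler (unsigned char key, int x, int y) { }        // Key-press events.
void display (void) { glClear (GL_COLOR_BUFFER_BIT); glFlush ( ); }
int main (int argc, char** argv)
{
   glutInit (&argc, argv);
   glutCreateWindow ("Callback demo");
   glutDisplayFunc (display);      // Invoked when the window must be redrawn.
   glutMouseFunc (mouseHandler);   // Invoked on mouse-button events.
   glutKeyboardFunc (keyHandler);  // Invoked on key presses.
   glutMainLoop ( );               // Event loop: callbacks fire as events occur.
   return 0;
}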
Interactive Picture-Construction Techniques
An interaction technique is the fusion of input and output, consisting of all software and hardware elements, that provides a way for the user to accomplish a task. Common interactive picture-construction techniques include the following:
(1) Constraints: - A constraint is a rule for altering input coordinates values to produce a specified
orientation or alignment of the displayed coordinates. The most common constraint is a horizontal or vertical
alignment of straight lines.
(Figures: a horizontal-line constraint, and construction of a line segment with endpoints constrained to grid-intersection positions.)
(2) Grids: - Another kind of constraint is a grid of rectangular lines displayed in some part of the screen
area. When a grid is used, any input coordinate position is rounded to the nearest intersection of two grid
lines.
(3) Gravity field: - When it is needed to connect lines at positions between endpoints, graphics packages can convert any input position near a line to a position on the line. The conversion is accomplished by creating a gravity area around the line: any selected position within the gravity field of the line is moved to the nearest position on the line. The gravity field is usually illustrated with a shaded boundary around the line.
(4) Rubber Band Methods: - Straight lines can be constructed and positioned using rubber-band methods, which stretch out a line from a starting position as the screen cursor moves.
(5) Dragging: - These methods move an object into position by dragging it with the screen cursor.
(6) Painting and Drawing: - Cursor drawing options can be provided using standard curve shapes, such as circular arcs and splines, or with freehand sketching procedures. Line widths, line styles, and other attribute options are also commonly found in painting and drawing packages.
Virtual-Reality Environments
Another method for generating virtual scenes is to display stereographic projections on a raster monitor, with the two stereographic views displayed on alternate refresh cycles. The scene is then viewed through stereographic glasses. Interactive object manipulations can again be accomplished with a data glove and a tracking device to monitor the glove position and orientation relative to the position of objects in the scene.
Interactive device input in an OpenGL program is handled with routines in the OpenGL Utility Toolkit
(GLUT), because these routines need to interface with a window system. In GLUT, we have functions to
accept input from standard devices, such as a mouse or a keyboard, as well as from tablets, space balls,
button boxes, and dials. For each device, we specify a procedure (the callback function) that is to be invoked
when an input event from that device occurs. These GLUT commands are placed in the main procedure
along with the other GLUT statements. In addition, a combination of functions from the basic library and the
GLU library can be used with the GLUT mouse function for pick input.
We use the following function to specify (“register”) a procedure that is to be called when the mouse pointer
is in a display window and a mouse button is pressed or released:
glutMouseFunc (mouseFcn);
This mouse callback procedure, which we named mouseFcn, has four arguments:
void mouseFcn (GLint button, GLint action, GLint xMouse, GLint yMouse)
Parameter button is assigned a GLUT symbolic constant that denotes one of the three mouse buttons, and
parameter action is assigned a symbolic constant that specifies which button action we want to use to trigger
the mouse activation event. Allowable values for button are GLUT_LEFT_BUTTON, GLUT_MIDDLE_BUTTON, and GLUT_RIGHT_BUTTON. (If we have only a two-button mouse, then we use just the left-button and right-button designations; with a one-button mouse, we can assign parameter button only the value GLUT_LEFT_BUTTON.) Parameter action can be assigned either GLUT_DOWN or GLUT_UP, depending on whether we want to initiate an action when we press a mouse button or when we release it.
When procedure mouseFcn is invoked, the display-window location of the mouse cursor is returned as the
coordinate position (xMouse, yMouse). This location is relative to the top-left corner of the display window,
so that xMouse is the pixel distance from the left edge of the display window and yMouse is the pixel
distance down from the top of the display window.
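A minimal sketch of such a mouse callback is given below. It assumes a global winHeight holding the current display-window height and a 2D projection whose world coordinates match the window's pixel dimensions; because yMouse is measured from the top of the window while OpenGL screen coordinates are measured from the bottom, the y value is flipped before plotting.
void mouseFcn (GLint button, GLint action, GLint xMouse, GLint yMouse)
{
   if (button == GLUT_LEFT_BUTTON && action == GLUT_DOWN) {
      glBegin (GL_POINTS);                        // Plot a point where the user clicked.
         glVertex2i (xMouse, winHeight - yMouse); // Flip y: GLUT origin is top-left.
      glEnd ( );
      glFlush ( );
   }
}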
glutMotionFunc (fcnDoSomething);
This routine invokes fcnDoSomething when the mouse is moved within the display window with one or more buttons activated. The invoked function has the form
void fcnDoSomething (GLint xMouse, GLint yMouse)
where (xMouse, yMouse) is the mouse location in the display window relative to the top-left corner when the mouse is moved with a button pressed. Similarly, we can perform some action when we move the mouse within the display window without pressing a button:
glutPassiveMotionFunc (fcnDoSomethingElse);
Again, the mouse location is returned to fcnDoSomethingElse as coordinate position (xMouse, yMouse),
relative to the top-left corner of the display window.
With keyboard input, we use the following function to specify a procedure that is to be invoked when a key
is pressed:
glutKeyboardFunc (keyFcn);
This keyboard callback procedure has three arguments:
void keyFcn (GLubyte key, GLint xMouse, GLint yMouse)
Parameter key is assigned a character value or the corresponding ASCII code. The display-window mouse location is returned as position (xMouse, yMouse) relative to the top-left corner of the display window.
When a designated key is pressed, we can use the mouse location to initiate some action, independently of
whether any mouse buttons are pressed.
For function keys, arrow keys, and other special-purpose keys, we can use the command
glutSpecialFunc (specialKeyFcn);
where the invoked procedure has the form
void specialKeyFcn (GLint specialKey, GLint xMouse, GLint yMouse)
but now parameter specialKey is assigned an integer-valued GLUT symbolic constant. To select a function key, we use one of the constants GLUT_KEY_F1 through GLUT_KEY_F12. For the arrow keys, we use constants such as GLUT_KEY_UP and GLUT_KEY_RIGHT. Other keys can be designated using GLUT_KEY_PAGE_DOWN, GLUT_KEY_HOME, and similar constants for the page-up, end, and insert keys.
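The sketch below illustrates both keyboard callbacks; it assumes <stdio.h> and <stdlib.h> are included, and the chosen key responses ('q' to quit, F1 to print the mouse position) are arbitrary examples.
void keyFcn (GLubyte key, GLint xMouse, GLint yMouse)
{
   if (key == 'q')                  // Quit the program on the 'q' key.
      exit (0);
}
void specialKeyFcn (GLint specialKey, GLint xMouse, GLint yMouse)
{
   if (specialKey == GLUT_KEY_F1)   // Respond to the F1 function key.
      printf ("F1 pressed at (%d, %d)\n", xMouse, yMouse);
}
These procedures would be registered with glutKeyboardFunc (keyFcn) and glutSpecialFunc (specialKeyFcn).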
Usually, tablet activation occurs only when the mouse cursor is in the display window. A button event for
tablet input is then recorded with
glutTabletButtonFunc (tabletFcn);
and the arguments for the invoked function are similar to those for a mouse:
void tabletFcn (GLint tabletButton, GLint action, GLint xTablet, GLint yTablet)
We designate a tablet button with an integer identifier such as 1, 2, 3, and so on, and the button action is again specified with either GLUT_UP or GLUT_DOWN. The returned values xTablet and yTablet are the tablet coordinates. We can determine the number of available tablet buttons with the command
glutDeviceGet (GLUT_NUM_TABLET_BUTTONS);
Motion of a tablet stylus or cursor is processed with the following function:
glutTabletMotionFunc (tabletMotionFcn);
where the invoked function has the form
void tabletMotionFcn (GLint xTablet, GLint yTablet)
The returned values xTablet and yTablet give the coordinates on the tablet surface.
We use the following function to specify an operation when a spaceball button is activated for a selected
display window:
glutSpaceballButtonFunc (spaceballFcn);
Spaceball buttons are identified with the same integer values as tablet buttons, and parameter action is assigned either the value GLUT_UP or the value GLUT_DOWN. We can determine the number of available spaceball buttons with a call to glutDeviceGet using the argument GLUT_NUM_SPACEBALL_BUTTONS.
Translational motion of a spaceball, when the mouse is in the display window, is recorded with the function call
glutSpaceballMotionFunc (spaceballTranslFcn);
The three-dimensional translation distances are passed to the invoked function as, for example,
void spaceballTranslFcn (GLint tx, GLint ty, GLint tz)
These translation distances are normalized within the range from −1000 to 1000. Similarly, a spaceball rotation is recorded with
glutSpaceballRotateFunc (spaceballRotFcn);
The three-dimensional rotation angles are then available to the callback function, as follows:
void spaceballRotFcn (GLint thetaX, GLint thetaY, GLint thetaZ)
Buttons on a button box are registered with
glutButtonBoxFunc (buttonBoxFcn);
and the button activation is passed to the invoked procedure as
void buttonBoxFcn (GLint button, GLint action)
The buttons are identified with integer values, and the button action is specified as GLUT_UP or GLUT_DOWN.
Rotations of dials are recorded with
glutDialsFunc (dialsFcn);
In this case, we use the callback function to identify the dial and obtain the angular amount of rotation:
void dialsFcn (GLint dial, GLint degreeValue)
Dials are designated with integer values, and the dial rotation is returned as an integer degree value.
Picking is the logical input operation that allows the user to identify an object on the display. The action of picking uses a pointing device, but the information returned to the application program is the identifier of an object, not a position. Picking is difficult to implement on modern systems because of the graphics-pipeline architecture: converting from a location on the display back to the corresponding primitive is not a direct calculation.
Selection:
It involves adjusting the clipping region and viewport so that we can keep track of which primitives lie in a small clipping region and are rendered into a region near the cursor.
These primitives are entered into a hit list that can be examined later by the user program.
When we use double buffering, there are two color buffers: the front buffer and the back buffer. The contents of the front buffer are displayed, whereas the contents of the back buffer are not; we can therefore use the back buffer for purposes other than rendering the scene to be displayed.
Picking can be performed in four steps that are initiated by a user-defined pick function in the application:
• We draw the objects into the back buffer with their pick colors.
• We get the position of the mouse using the mouse callback.
• We use glReadPixels() to find the color at the position in the frame buffer corresponding to the mouse position.
• We search a table of colors to find the object that corresponds to the color read (see the sketch below).
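A minimal sketch of the read-back step, under the assumption that the scene has just been redrawn into the back buffer with one flat color per object; the names pickColor, lookupObject, and winHeight are illustrative helpers, not OpenGL routines.
/* Hypothetical color table: one RGB triple per object. */
#define NUM_OBJECTS 3
GLubyte pickColor [NUM_OBJECTS][3] = { {255,0,0}, {0,0,255}, {0,255,0} };
int lookupObject (GLubyte r, GLubyte g, GLubyte b)
{
   int k;
   for (k = 0; k < NUM_OBJECTS; k++)
      if (pickColor [k][0] == r && pickColor [k][1] == g && pickColor [k][2] == b)
         return k;     // Identifier of the picked object.
   return -1;          // No object rendered at this pixel.
}
void pickAt (GLint xMouse, GLint yMouse)
{
   GLubyte pixel [3];
   glReadBuffer (GL_BACK);    // Read from the undisplayed back buffer.
   glReadPixels (xMouse, winHeight - yMouse, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, pixel);   // Color under the cursor.
   printf ("Picked object: %d\n", lookupObject (pixel [0], pixel [1], pixel [2]));
}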
The difficulty in implementing picking is that we cannot go backward directly from the position of the mouse to the primitives. OpenGL therefore provides a “selection mode” to do picking.
glRenderMode() is used to choose selection mode by passing the value GL_SELECT. When we enter selection mode and render a scene, each primitive that falls within the clipping volume generates a message called a “hit.” The hit records are stored in the selection buffer (specified with glSelectBuffer), and the names that identify primitives are managed on a separate stack called the “name stack.”
• void glSelectBuffer(GLsizei n, GLuint *buff): specifies an array buff of size n in which to place the selection data.
• void glInitNames(): initializes the name stack.
• void glPushName(GLuint name): pushes name onto the name stack.
• void glPopName(): pops the top name from the name stack.
• void glLoadName(GLuint name): replaces the top of the name stack with name.
• OpenGL allows us to set the clipping volume for picking using gluPickMatrix(), which is applied before gluOrtho2D. gluPickMatrix(x, y, w, h, *vp) creates a projection matrix for picking that restricts drawing to a w × h area centered at (x, y) in window coordinates within the viewport vp.
(Figure: (a) the normal window and image on the display, with a small box around the cursor indicating the area in which primitives are rendered; (b) the window and display after the clipping volume has been changed by gluPickMatrix.)
#include <GL/glut.h>
#include <stdio.h>
const GLint pickBuffSize = 32;
/* Set initial display-window size. */
GLsizei winWidth = 400, winHeight = 400;
void init (void)
{
/* Set display-window color to white. */
glClearColor (1.0, 1.0, 1.0, 1.0);
}
/* Define 3 rectangles and associated IDs. */
void rects (GLenum mode)
{
if (mode == GL_SELECT)
glPushName (30); // Red rectangle.
glColor3f (1.0, 0.0, 0.0);
glRecti (40, 130, 150, 260);
if (mode == GL_SELECT)
glPushName (10); // Blue rectangle.
glColor3f (0.0, 0.0, 1.0);
glRecti (150, 130, 260, 260);
if (mode == GL_SELECT)
glPushName (20); // Green rectangle.
glColor3f (0.0, 1.0, 0.0);
glRecti (40, 40, 260, 130);
}
/* Print the contents of the pick buffer for each mouse selection. */
void processPicks (GLint nPicks, GLuint pickBuffer [ ])
{
GLint j, k;
GLuint objID, *ptr;
printf (" Number of objects picked = %d\n", nPicks);
printf ("\n");
ptr = pickBuffer;
/* Output all items in each pick record. */
for (j = 0; j < nPicks; j++) {
objID = *ptr; // Number of names in this hit record.
printf (" Stack position = %d\n", objID);
ptr++;
printf (" Min depth = %g,", (GLfloat) *ptr / 0x7fffffff);
ptr++;
printf (" Max depth = %g\n", (GLfloat) *ptr / 0x7fffffff);
ptr++;
printf (" Stack IDs are: \n");
for (k = 0; k < objID; k++) {
printf ("\t%d\n", *ptr);
ptr++;
}
}
printf ("\n\n");
}
/* Select rectangles using the mouse "pick" window. */
void pickRects (GLint button, GLint action, GLint xMouse, GLint yMouse)
{
GLuint pickBuffer [pickBuffSize];
GLint nPicks, vpArray [4];
if (button != GLUT_LEFT_BUTTON || action != GLUT_DOWN)
return;
glSelectBuffer (pickBuffSize, pickBuffer); // Designate the pick buffer.
glRenderMode (GL_SELECT); // Activate picking operations.
glInitNames ( ); // Initialize the name stack.
/* Save the current viewing matrix. */
glMatrixMode (GL_PROJECTION);
glPushMatrix ( );
/* Obtain the current viewport parameters. */
glGetIntegerv (GL_VIEWPORT, vpArray);
glLoadIdentity ( );
/* Set up a 5 x 5 pick window centered on the cursor position. */
gluPickMatrix ((GLdouble) xMouse, (GLdouble) (vpArray [3] - yMouse), 5.0, 5.0, vpArray);
gluOrtho2D (0.0, 300.0, 0.0, 300.0);
rects (GL_SELECT); // Process the rectangles in selection mode.
/* Restore the original viewing matrix. */
glMatrixMode (GL_PROJECTION);
glPopMatrix ( );
glFlush ( );
/* Determine the number of picked objects and return to the
 * normal rendering mode.
 */
nPicks = glRenderMode (GL_RENDER);
processPicks (nPicks, pickBuffer); // Process picked objects.
glutPostRedisplay ( );
}
void displayFcn (void)
{
glClear (GL_COLOR_BUFFER_BIT);
rects (GL_RENDER); // Display the rectangles.
glFlush ( );
}
void winReshapeFcn (GLint newWidth, GLint newHeight)
{
/* Reset viewport and projection parameters. */
glViewport (0, 0, newWidth, newHeight);
glMatrixMode (GL_PROJECTION);
glLoadIdentity ( );
gluOrtho2D (0.0, 300.0, 0.0, 300.0);
glMatrixMode (GL_MODELVIEW);
/* Reset display-window size parameters. */
winWidth = newWidth;
winHeight = newHeight;
}
int main (int argc, char** argv)
{
glutInit (&argc, argv);
glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
glutInitWindowPosition (100, 100);
glutInitWindowSize (winWidth, winHeight);
glutCreateWindow ("Example Pick Program");
init ( );
glutDisplayFunc (displayFcn);
glutReshapeFunc (winReshapeFcn);
glutMouseFunc (pickRects);
glutMainLoop ( );
}
• GLUT also supports the creation of hierarchical menus, as sketched below:
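The following is a minimal sketch of a hierarchical menu; the labels, identifier values, and handler names are illustrative, and <stdlib.h> is assumed for exit().
void colorMenuFcn (int id) { /* set the drawing color according to id */ }
void mainMenuFcn (int id)  { if (id == 1) exit (0); }   // "Quit" entry.
void makeMenus (void)
{
   int colorSubMenu = glutCreateMenu (colorMenuFcn);  // Create the submenu first.
   glutAddMenuEntry ("Red", 1);
   glutAddMenuEntry ("Green", 2);
   glutCreateMenu (mainMenuFcn);             // Then the top-level menu.
   glutAddSubMenu ("Colors", colorSubMenu);  // Attach the submenu to it.
   glutAddMenuEntry ("Quit", 1);
   glutAttachMenu (GLUT_RIGHT_BUTTON);       // Pop up on a right click.
}
makeMenus would be called from the main procedure before glutMainLoop; each callback receives the integer identifier of the selected entry.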
Designing a Graphical User Interface
A common feature of modern applications software is a graphical user interface (GUI) composed of
display windows, icons, menus, and other features to aid a user in applying the software to a
particular problem. Other considerations for a user interface (whether graphical or not) are the
accommodation of various skill levels, consistency, error handling, and feedback.
Consistency
An important design consideration in an interface is consistency. An icon shape should always have
a single meaning, rather than serving to represent different actions or objects depending on the
context. Some other examples of consistency are always placing menus in the same relative positions
so that a user does not have to hunt for a particular option, always using the same combination of
keyboard keys for an action, and always using the same color encoding so that a color does not have
different meanings in different situations.
Minimizing Memorization
Operations in an interface should also be structured so that they are easy to understand and to
remember. Obscure, complicated, inconsistent, and abbreviated command formats lead to confusion
and reduction in the effective application of the software. Icons and window systems can also be
organized to minimize memorization. Different kinds of information can be separated into different
windows so that a user can identify and select items easily. Icons should be designed as easily
recognizable shapes that are related to application objects and actions.
Feedback
Responding to user actions is another important feature of an interface, particularly for an
inexperienced user. As each action is entered, some response should be given. Otherwise, a user
might begin to wonder what the system is doing and whether the input should be re-entered.
Feedback can be given in many forms, such as highlighting an object, displaying an icon or message,
and displaying a selected menu option in a different color. When the processing of a requested action
is lengthy, the display of a flashing message, clock, hourglass, or other progress indicator is
important.
Computer Animation
Computer-graphics methods are now commonly used to produce animations for a variety of
applications, including entertainment (motion pictures and cartoons), advertising, scientific and
engineering studies, and training and education. Although we tend to think of animation as implying
object motion, the term computer animation generally refers to any time sequence of visual changes
in a picture. In addition to changing object positions using translations or rotations, a computer-
generated animation could display time variations in object size, color, transparency, or surface
texture. Advertising animations often transition one object shape into another: for example,
transforming a can of motor oil into an automobile engine. We can also generate computer
animations by varying camera parameters, such as position, orientation, or focal length, and
variations in lighting effects or other parameters and procedures associated with illumination and
rendering can be used to produce computer animations.
Double Buffering
• One method for producing a real-time animation with a raster system is to employ two refresh
buffers. Initially, we create a frame for the animation in one of the buffers.
• Then, while the screen is being refreshed from that buffer, we construct the next frame in the other
buffer. When that frame is complete, we switch the roles of the two buffers so that the refresh
routines use the second buffer during the process of creating the next frame in the first buffer. This
alternating buffer process continues throughout the animation.
• Graphics libraries that permit such operations typically have one function for activating the double
buffering routines and another function for interchanging the roles of the two buffers.
• The most straightforward implementation is to switch the two buffers at the end of the current
refresh cycle, during the vertical retrace of the electron beam.
• If a program can complete the construction of a frame within the time of a refresh cycle, say 1/60 of a
second, each motion sequence is displayed in synchronization with the screen refresh rate.
• However, if the time to construct a frame is longer than the refresh time, the current frame is
displayed for two or more refresh cycles while the next animation frame is being generated.
• For example, if the screen refresh rate is 60 frames per second and it takes 1/50 of a second to
construct an animation frame, each frame is displayed on the screen twice and the animation rate is
only 30 frames each second.
• Similarly, if the frame construction time is 1/25 of a second, the animation frame rate is reduced to 20
frames per second because each frame is displayed three times.
• Irregular animation frame rates can occur with double buffering when the frame construction time is
very nearly equal to an integer multiple of the screen refresh time.
• As an example of this, if the screen refresh rate is 60 frames per second, then an erratic animation
frame rate is possible when the frame construction time is very close to 1/60 of a second, or 2/60 of a
second, or 3/60 of a second, and so forth.
• Because of slight variations in the implementation time for the routines that generate the primitives
and their attributes, some frames could take a little more time to construct and some a little less time.
• Thus, the animation frame rate can change abruptly and erratically.
• One way to compensate for this effect is to add a small time delay to the program.
• Another possibility is to alter the motion or scene description to shorten the frame construction time.
• We can also generate real-time raster animations for limited applications using block transfers of a rectangular array of pixel values. This animation technique is often used in game-playing programs.
• A simple method for translating an object from one location to another in the xy plane is to transfer
the group of pixel values that define the shape of the object to the new location.
• Two-dimensional rotations in multiples of 90° are also simple to perform. Sequences of raster operations can be executed to produce real-time animation for either two-dimensional or three-dimensional objects.
• We can also animate objects along two-dimensional motion paths using color-table transformations.
• Here we predefine the object at successive positions along the motion path and set the successive
blocks of pixel values to color-table entries.
• The pixels at the first position of the object are set to a foreground color, and the pixels at the other
object positions are set to the background color.
• The animation is then accomplished by changing the color-table values so that the object color at successive positions along the animation path becomes the foreground color as the preceding position is set to the background color (a small sketch follows below).
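A minimal sketch of this color-table technique, assuming a color-index display mode and a global nPositions giving the number of predefined object positions, with one color-table entry per position; glutSetColor is the GLUT routine for loading an entry of the color table.
void stepAnimation (int frame)
{
   int k;
   for (k = 0; k < nPositions; k++)
      if (k == frame % nPositions)
         glutSetColor (k, 1.0, 0.0, 0.0);   // Foreground color: object appears here.
      else
         glutSetColor (k, 1.0, 1.0, 1.0);   // Background color everywhere else.
}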
Design of Animation Sequences
• A storyboard is an outline of the action. Depending on the type of animation to be produced, the storyboard could consist of a set of rough sketches, along with a list of the basic ideas for the action. Originally, the set of motion sketches was attached to a large board that was used to present an overall view of the animation project. Hence the name “storyboard.”
• An object definition is given for each participant in the action. Objects can be defined in terms of
basic shapes, such as polygons or spline surfaces. In addition, a description is often given of the
movements that are to be performed by each character or object in the story.
• A key frame is a detailed drawing of the scene at a certain time in the animation sequence. Within
each key frame, each object (or character) is positioned according to the time for that frame. Some
key frames are chosen at extreme positions in the action; others are spaced so that the time interval
between key frames is not too great. More key frames are specified for intricate motions than for
simple, slowly varying motions. Development of the key frames is generally the responsibility of the
senior animators, and often a separate animator is assigned to each character in the animation.
• In-betweens are the intermediate frames between the key frames.
• The total number of frames, and hence the total number of in-betweens, needed for an animation is determined by the display media that is to be used.
• Film requires 24 frames per second, and graphics terminals are refreshed at the rate of 60 or more frames per second.
• Typically, time intervals for the motion are set up so that there are from three to five in-betweens for each pair of key frames.
• Depending on the speed specified for the motion, some key frames could be duplicated.
• As an example, a 1-minute film sequence with no duplication requires a total of 1,440 frames. If five in-betweens are required for each pair of key frames, then 288 key frames would need to be developed.
There are several other tasks that may be required, depending on the application. These additional tasks include motion verification, editing, and the production and synchronization of a soundtrack. Many of the functions needed to produce general animations are now computer-generated.
Traditional Animation Techniques
• Film animators use a variety of methods for depicting and emphasizing motion sequences. These
include object deformations, spacing between animation frames, motion anticipation and follow-
through, and action focusing.
• One of the most important techniques for simulating acceleration effects, particularly for nonrigid
objects, is squash and stretch. Figure 4 shows how this technique is used to emphasize the
acceleration and deceleration of a bouncing ball.
• As the ball accelerates, it begins to stretch. When the ball hits the floor and stops, it is first
compressed (squashed) and then stretched again as it accelerates and bounces upwards. Another
technique used by film animators is timing, which refers to the spacing between motion frames.
• A slower moving object is represented with more closely spaced frames, and a faster moving object
is displayed with fewer frames over the path of the motion. This effect is illustrated in Figure 5,
where the position changes between frames increase as a bouncing ball moves faster. Object
movements can also be emphasized by creating preliminary actions that indicate an anticipation of a
coming motion. For example, a cartoon character might lean forward and rotate its body before
starting to run; or a character might perform a “windup” before throwing a ball. Similarly, follow-
through actions can be used to emphasize a previous motion. After throwing a ball, a character can
continue the arm swing back to its body; or a hat can fly off a character that is stopped abruptly. An
action also can be emphasized with staging, which refers to any method for focusing on an important
part of a scene, such as a character hiding something.
Many software packages have been developed either for general animation design or for performing
specialized animation tasks. Typical animation functions include managing object motions, generating views
of objects, producing camera motions, and the generation of in-between frames. Some animation packages,
such as Wavefront for example, provide special functions for both the overall animation design and the
processing of individual objects. Others are special-purpose packages for particular features of an animation,
such as a system for generating in-between frames or a system for figure animation. A set of routines is often
provided in a general animation package for storing and managing the object database. Object shapes and
associated parameters are stored and updated in the database. Other object functions include those for
generating the object motion and those for rendering the object surfaces.
Computer-Animation Languages
• We can develop routines to design and control animation sequences within a general-purpose
programming language, such as C, C++, Lisp, or Fortran, but several specialized animation
languages have been developed.
• These languages typically include a graphics editor, a keyframe generator, an in-between generator,
and standard graphics routines.
• The graphics editor allows an animator to design and modify object shapes, using spline surfaces,
constructive solid geometry methods, or other representation schemes.
• An important task in an animation specification is scene description. This includes the positioning of
objects and light sources, defining the photometric parameters (light-source intensities and surface
illumination properties), and setting the camera parameters (position, orientation, and lens
characteristics).
• Another standard function is action specification, which involves the layout of motion paths for the objects and the camera.
• Key-frame systems were originally designed as a separate set of animation routines for generating the
in-betweens from the user-specified key frames.
• Parameterized systems allow object motion characteristics to be specified as part of the object
definitions. The adjustable parameters control such object characteristics as degrees of freedom,
motion limitations, and allowable shape changes. Scripting systems allow object specifications and
animation sequences to be defined with a user-input script. From the script, a library of various
objects and motions can be constructed.
Key-Frame Systems
A set of in-betweens can be generated from the specification of two (or more) key frames using a key-frame
system. Motion paths can be given with a kinematic description as a set of spline curves, or the motions can
be physically based by specifying the forces acting on the objects to be animated. For complex scenes, we
can separate the frames into individual components or objects called cels (celluloid transparencies). This
term developed from cartoon animation techniques where the background and each character in a scene were
placed on a separate transparency. Then, with the transparencies stacked in the order from background to
foreground, they were photographed to obtain the completed frame. The specified animation paths are then
used to obtain the next cel for each character, where the positions are interpolated from the key-frame times.
With complex object transformations, the shapes of objects may change over time. Examples are clothes,
facial features, magnified detail, evolving shapes, and exploding or disintegrating objects. For surfaces
described with polygon meshes, these changes can result in significant changes in polygon shape such that
the number of edges in a polygon could be different from one frame to the next.
Morphing
Transformation of object shapes from one form to another is termed morphing, which is a shortened form of
“metamorphosing.” An animator can model morphing by transitioning polygon shapes through the in-
betweens from one key frame to the next.
Given two key frames, each with a different number of line segments specifying an object transformation,
we can first adjust the object specification in one of the frames so that the number of polygon edges (or the
number of polygon vertices) is the same for the two frames. This preprocessing step is illustrated in Figure 8.
A straight-line segment in key frame k is transformed into two line segments in key frame k + 1. Because
key frame k + 1 has an extra vertex, we add a vertex between vertices 1 and 2 in key frame k to balance the
number of vertices (and edges) in the two key frames. Using linear interpolation to generate the in-betweens,
we transition the added vertex in key frame k into vertex 3 along the straight-line path shown in Figure 9. An
example of a triangle linearly expanding into a quadrilateral is given in Figure 10. We can state general
preprocessing rules for equalizing key frames in terms of either the number of edges or the number of
vertices to be added to a key frame. We first consider equalizing the edge count, where parameters Lk and
Lk+1 denote the number of line segments in two consecutive frames.
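Once the two key frames have been preprocessed to hold the same number of vertices, the in-betweens follow by linear interpolation. A minimal sketch, in which the type and function names are illustrative and the OpenGL headers are assumed to be included:
typedef struct { GLfloat x, y; } vertex2D;
/* Interpolate between key frame k and key frame k+1;
 * t runs from 0.0 (key frame k) to 1.0 (key frame k+1). */
void inBetween (const vertex2D keyK [ ], const vertex2D keyK1 [ ],
                vertex2D frame [ ], int nVerts, GLfloat t)
{
   int j;
   for (j = 0; j < nVerts; j++) {
      frame [j].x = (1.0f - t) * keyK [j].x + t * keyK1 [j].x;
      frame [j].y = (1.0f - t) * keyK [j].y + t * keyK1 [j].y;
   }
}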
Simulating Accelerations
Curve-fitting techniques are often used to specify the animation paths between key frames. Given the vertex
positions at the key frames, we can fit the positions with linear or nonlinear paths. Figure 11 illustrates a
nonlinear fit of key frame positions. To simulate accelerations, we can adjust the time spacing for the in-
betweens. If the motion is to occur at constant speed (zero acceleration), we use equal-interval time spacing for the in-betweens. For instance, with n in-betweens and key-frame times of t1 and t2 (Figure 12), the time interval between the key frames is divided into n + 1 equal subintervals, yielding an in-between spacing of
Δt = (t2 − t1)/(n + 1)
The time for the j-th in-between is then
tBj = t1 + j Δt,   for j = 1, 2, ..., n
and this time value is used to calculate coordinate positions, color, and other physical parameters for that frame
of the motion. Speed changes (nonzero accelerations) are usually necessary at some point in an animation film
or cartoon, particularly at the beginning and end of a motion sequence. The startup and slowdown portions of an
animation path are often modelled with spline or trigonometric functions, but parabolic and cubic time functions
have been applied to acceleration modelling. Animation packages commonly furnish trigonometric functions for
simulating accelerations.
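A minimal sketch of in-between time calculation along these lines, assuming <math.h> and the OpenGL headers are included; the sinusoidal ease-in/ease-out mapping is one common choice among the trigonometric functions mentioned above.
GLfloat inBetweenTime (GLfloat t1, GLfloat t2, int j, int n, int easeInOut)
{
   GLfloat u = (GLfloat) j / (n + 1);                // Uniform parameter: constant speed.
   if (easeInOut)
      u = 0.5f * (1.0f - cosf (3.14159265f * u));    // Slow start and slow stop.
   return t1 + u * (t2 - t1);                        // Time of the j-th in-between.
}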
Motion Specifications
General methods for describing an animation sequence range from an explicit specification of the motion paths
to a description of the interactions that produce the motions.
The most straightforward method for defining an animation is direct motion specification of the geometric-
transformation parameters. Here, we explicitly set the values for the rotation angles and translation vectors.
Then the geometric transformation matrices are applied to transform coordinate positions. Alternatively, we
could use an approximating equation involving these parameters to specify certain kinds of motions. We can approximate the path of a bouncing ball, for instance, with a damped, rectified, sine curve (Figure 16):
y(x) = A |sin(ωx + θ0)| e^(−kx)
where A is the initial amplitude (height of the ball above the ground), ω is the angular frequency, θ0 is the phase angle, and k is the damping constant. This method for motion specification is particularly useful for simple user-programmed animation sequences.
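A minimal sketch evaluating this damped, rectified sine path for one coordinate, assuming <math.h> and the OpenGL headers are included:
GLfloat bounceHeight (GLfloat x, GLfloat A, GLfloat omega,
                      GLfloat theta0, GLfloat k)
{
   /* y(x) = A |sin(omega x + theta0)| e^(-k x) */
   return A * fabsf (sinf (omega * x + theta0)) * expf (-k * x);
}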
Goal-Directed Systems
At the opposite extreme, we can specify the motions that are to take place in general terms that abstractly
describe the actions in terms of the final results. In other words, an animation is specified in terms of the final
state of the movements. These systems are referred to as goal-directed, since values for the motion parameters
are determined from the goals of the animation. For example, we want an object to “walk” or to “run” to a
particular destination; or we could state that we want an object to “pick up” some other specified object. The
input directives are then interpreted in terms of component motions that will accomplish the described task.
Human motions, for instance, can be defined as a hierarchical structure of sub motions for the torso, limbs, and
so forth. Thus, when a goal, such as “walk to the door” is given, the movements required of the torso and limbs
to accomplish this action are calculated.
We can also construct animation sequences using kinematic or dynamic descriptions. With a kinematic description, we specify the animation by giving motion parameters (position, velocity, and acceleration) without reference to the causes or goals of the motion. For constant velocity (zero acceleration), we designate the motions of rigid bodies in a scene by giving an initial position and velocity vector for each object. Kinematic specification of a motion can also be given by simply describing the motion path; this is often accomplished using spline curves. An alternate approach is to use inverse kinematics: here, we specify the initial and final positions of objects at specified times, and the motion parameters are computed by the system. For example, assuming zero acceleration, we can determine the constant velocity that will accomplish the movement of an object from the initial position to the final position, as in the sketch below.
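A minimal sketch of this zero-acceleration case: given initial and final positions and times, the system computes the constant velocity that carries the object between them (the vec3 type is illustrative, and the OpenGL headers are assumed).
typedef struct { GLfloat x, y, z; } vec3;
vec3 constantVelocity (vec3 p0, vec3 pf, GLfloat t0, GLfloat tf)
{
   vec3 v;
   GLfloat dt = tf - t0;       // Elapsed time between the two specified states.
   v.x = (pf.x - p0.x) / dt;
   v.y = (pf.y - p0.y) / dt;
   v.z = (pf.z - p0.z) / dt;
   return v;
}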
Dynamic descriptions, on the other hand, require the specification of the forces that produce the velocities and
accelerations. The description of object behavior in terms of the influence of forces is generally referred to as
physically based modeling. Examples of forces affecting object motion include electromagnetic, gravitational,
frictional, and other mechanical forces.
Character Animation
Animation of simple objects is relatively straightforward. When we consider the animation of more complex
figures such as humans or animals, however, it becomes much more difficult to create realistic animation.
Consider the animation of walking or running human (or humanoid) characters. Based upon observations in
their own lives of walking or running people, viewers will expect to see animated characters move in particular
ways. If an animated character’s movement doesn’t match this expectation, the believability of the character
may suffer. Thus, much of the work involved in character animation is focused on creating believable
movements.
A basic technique for animating people, animals, insects, and other critters is to model them as articulated
figures, which are hierarchical structures composed of a set of rigid links that are connected at rotary joints
(Figure 17). In less formal terms, this just means that we model animate objects as moving stick figures, or
simplified skeletons, that can later be wrapped with surfaces representing skin, hair, fur, feathers, clothes, or
other outer coverings. The connecting points, or hinges, for an articulated figure are placed at the shoulders,
hips, knees, and other skeletal joints, which travel along specified motion paths as the body moves. For
example, when a motion is specified for an object, the shoulder automatically moves in a certain way and, as the
shoulder moves, the arms move. Different types of movement, such as walking, running, or jumping, are
defined and associated with particular motions for the joints and connecting links. A series of walking leg
motions, for instance, might be defined as in Figure 18. The hip joint is translated forward along a horizontal
line, while the connecting links perform a series of movements about the hip, knee, and ankle joints. Starting
with a straight leg [Figure 18(a)], the first motion is a knee bend as the hip moves forward [Figure 18(b)]. Then
the leg swings forward, returns to the vertical position, and swings back, as shown in Figures 18(c), (d), and (e).
The final motions are a wide swing back and a return to the straight vertical position, as in Figures 18(f) and (g).
This motion cycle is repeated for the duration of the animation as the figure moves over a specified distance or
time interval. As a figure moves, other movements are incorporated into the various joints. A sinusoidal motion,
often with varying amplitude, can be applied to the hips so that they move about on the torso. Similarly, a
rolling or rocking motion can be imparted to the shoulders, and the head can bob up and down. Both kinematic-
motion descriptions and inverse kinematics are used in figure animations. Specifying the joint motions is
generally an easier task, but inverse kinematics can be useful for producing simple motion over arbitrary terrain.
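A minimal sketch of such an articulated figure as a hierarchy of rotary joints and rigid links; the joint structure and fixed-size child array are illustrative simplifications, and drawing each link as a line segment stands in for the wrapped surface.
#define MAX_CHILDREN 4
typedef struct joint {
   GLfloat angle;                       // Rotation at this joint, in degrees.
   GLfloat length;                      // Length of the attached rigid link.
   struct joint *child [MAX_CHILDREN];  // Joints hanging off this link.
   int nChildren;
} joint;
void drawFigure (joint *j)
{
   int k;
   glPushMatrix ( );
   glRotatef (j->angle, 0.0, 0.0, 1.0);   // Rotating a joint carries all links below it.
   glBegin (GL_LINES);                    // Draw this rigid link.
      glVertex2f (0.0, 0.0);
      glVertex2f (j->length, 0.0);
   glEnd ( );
   glTranslatef (j->length, 0.0, 0.0);    // Children attach at the far end of the link.
   for (k = 0; k < j->nChildren; k++)
      drawFigure (j->child [k]);          // Recurse down the hierarchy.
   glPopMatrix ( );
}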
Motion Capture
An alternative to determining the motion of a character computationally is to digitally record the movement of a
live actor and to base the movement of an animated character on that information. This technique, known as
motion capture or mo-cap, can be used when the movement of the character is predetermined (as in a scripted
scene). The animated character will perform the same series of movements as the live actor. The classic motion
capture technique involves placing a set of markers at strategic positions on the actor’s body, such as the arms,
legs, hands, feet, and joints. It is possible to place the markers directly on the actor, but more commonly they
are affixed to a special skintight body suit worn by the actor. The actor is then filmed performing the scene.
Image processing techniques are then used to identify the positions of the markers in each frame of the film, and
their positions are translated to coordinates. These coordinates are used to determine the positioning of the body
of the animated character. The movement of each marker from frame to frame in the film is tracked and used to
control the corresponding movement of the animated character. To accurately determine the positions of the
markers, the scene must be filmed by multiple cameras placed at fixed positions. The digitized marker data from
each recording can then be used to triangulate the position of each marker in three dimensions.
Periodic Motions
When we construct an animation with repeated motion patterns, such as a rotating object, we need to be sure
that the motion is sampled frequently enough to represent the movements correctly. In other words, the motion
must be synchronized with the frame-generation rate so that we display enough frames per cycle to show the
true motion. Otherwise, the animation may be displayed incorrectly. A typical example of an undersampled
periodic-motion display is the wagon wheel in a Western movie that appears to be turning in the wrong
direction. Figure 19 illustrates one complete cycle in the rotation of a wagon wheel with one red spoke that
makes 18 clockwise revolutions per second. If this motion is recorded on film at the standard motion-picture
projection rate of 24 frames per second, then the first five frames depicting this motion would be as shown in
Figure 20. Because the wheel completes 3/4 of a turn every 1/24 of a second, only one animation frame is generated per cycle, and the wheel thus appears to be rotating in the opposite (counterclockwise) direction. One way to display the rotation correctly is to slow the motion of the object so that multiple frames are generated in each revolution. Another factor that we need to consider in the display of a repeated motion is the effect of round-off in the calculations for the motion parameters.
• Double-buffering operations, if available, are activated using the following GLUT command:
glutInitDisplayMode (GLUT_DOUBLE);
• This provides two buffers, called the front buffer and the back buffer, that we can use alternately to
refresh the screen display.
• While one buffer is acting as the refresh buffer for the current display window, the next frame of an
animation can be constructed in the other buffer.
• We specify when the roles of the two buffers are to be interchanged using
glutSwapBuffers ( );
To determine whether double-buffer operations are available on a system, we can issue the following
query:
glGetBooleanv (GL_DOUBLEBUFFER, status);
A value of GL_TRUE is returned to array parameter status if both front and back buffers are available
on a system. Otherwise, the returned value is GL_FALSE.
• For a continuous animation, we can also use
glutIdleFunc (animationFcn);
- where parameter animationFcn can be assigned the name of a procedure that is to perform the
operations for incrementing the animation parameters.
- This procedure is continuously executed whenever there are no display-window events that must be
processed.
- To disable the glutIdleFunc, we set its argument to the value NULL or the value 0. (A complete double-buffered animation skeleton is sketched below.)
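Putting these routines together, a minimal double-buffered animation skeleton might look as follows; the rotating square is an illustrative motion.
#include <GL/glut.h>
GLfloat theta = 0.0;   // Animation parameter.
void displayFcn (void)
{
   glClear (GL_COLOR_BUFFER_BIT);
   glPushMatrix ( );
   glRotatef (theta, 0.0, 0.0, 1.0);
   glRectf (-0.5, -0.5, 0.5, 0.5);   // Draw the frame into the back buffer.
   glPopMatrix ( );
   glutSwapBuffers ( );              // Interchange front and back buffers.
}
void animationFcn (void)
{
   theta += 0.5;            // Increment the animation parameter.
   glutPostRedisplay ( );   // Request construction of the next frame.
}
int main (int argc, char** argv)
{
   glutInit (&argc, argv);
   glutInitDisplayMode (GLUT_DOUBLE | GLUT_RGB);   // Activate double buffering.
   glutCreateWindow ("Double-buffered animation");
   glutDisplayFunc (displayFcn);
   glutIdleFunc (animationFcn);   // Runs whenever no window events are pending.
   glutMainLoop ( );
   return 0;
}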
******************************
End of Module 3