3D Display Methods in Computer Graphics
SUBMITTED BY :
ARAFAT AHMED TANZEER : 162-15-7895
What are 3D display methods in computer
graphics?
3D computer graphics (in contrast to 2D
computer graphics) are graphics that utilize a
three-dimensional representation of geometric
data that is stored in the computer for the
purposes of performing calculations and
rendering 2D images. Such images may be for
later display or for real-time viewing.
What we are going to talk about :
•Parallel Projection.
•Perspective Projection.
•Depth Cueing
Parallel Projection:
A parallel projection is a projection of an object in three-dimensional space onto a fixed plane,
known as the projection plane or image plane, where the rays, known as lines of sight or projection
lines, are parallel to each other.
In parallel projection, the z coordinate is discarded,
and parallel lines from each vertex on the object
are extended until they intersect the view plane.
We connect the projected vertices by line
segments that correspond to connections on the
original object. As shown on the next slide, a parallel
projection preserves the relative proportions of objects
but does not produce realistic views.
Some points about Parallel Projection :
• Points on the object surface are projected along parallel lines onto the display plane.
• Parallel lines are still parallel after projection.
• Used in engineering and architectural drawings.
• Views maintain the relative proportions of the object.
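The points above can be sketched in a few lines of code. This is a minimal illustration, assuming an orthographic projection onto the z = 0 view plane: the z coordinate is simply discarded, so edges that are parallel in 3D remain parallel in the image.

```python
def parallel_project(vertices):
    """Orthographic (parallel) projection: drop the z coordinate."""
    return [(x, y) for (x, y, z) in vertices]

# The top face of a unit cube: relative proportions are preserved
# and parallel edges stay parallel after projection.
cube_top = [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
print(parallel_project(cube_top))  # [(0, 0), (1, 0), (1, 1), (0, 1)]
```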
Perspective Projection :
The perspective projection, on the other
hand, produces realistic views but does not
preserve relative proportions. In perspective
projection, the lines of projection are not
parallel. Instead, they all converge at a
single point called the ‘center of projection’
or ‘projection reference point’.
The perspective projection is perhaps the
most common projection technique, familiar
to us because it is the way images are formed
by the eye or by a camera lens.
Projection reference point :
Distances and angles are not preserved, and parallel lines do not remain parallel.
Instead, they all converge at a single point called the center of projection or projection
reference point. There are 3 types of perspective
projections:
• One-point perspective projection is the simplest to draw.
• Two-point perspective projection gives a better impression of depth.
• Three-point perspective projection is the most difficult to draw.
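A minimal sketch of the convergence described above, assuming the center of projection is at the origin and the view plane sits at z = d (the viewing distance d is an illustrative parameter, not taken from the slides). Dividing by z is what makes distant objects project smaller:

```python
def perspective_project(vertices, d=1.0):
    """Project 3D points onto the plane z = d, with the center of
    projection at the origin: x' = d*x/z, y' = d*y/z."""
    return [(d * x / z, d * y / z) for (x, y, z) in vertices]

# Two points on the same line of sight direction: the farther one
# projects closer to the image center, so it appears smaller.
print(perspective_project([(1.0, 1.0, 2.0), (1.0, 1.0, 4.0)]))
# [(0.5, 0.5), (0.25, 0.25)]
```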
Some points about Perspective Projection :
 The perspective projection conveys depth information by making distant
objects smaller than near ones.
 This is the way our eyes and a camera lens form images, so the
displays are more realistic.
 The disadvantage is that if objects have only limited depth variation, the image may
not provide adequate depth information, and ambiguity appears.
Depth Cueing :
Depth cueing is implemented by having
objects blend into the background color with
increasing distance from the viewer. The range
of distances over which this blending occurs is
controlled by the depth-cueing parameters.
To create a realistic image, depth information is important so that we can easily identify, for a
particular viewing direction, which is the front and which is the back of the displayed objects. The
depth of an object can be represented by the intensity of the image: the parts of the objects
closest to the viewing position are displayed with the highest intensities, and objects farther
away are displayed with decreasing intensities. This effect is known as ‘depth cueing’.
Some points about Depth Cueing :
• To easily identify the front and back of displayed objects.
• Depth information can be included using various methods.
• A simple method is to vary the intensity of objects according to their distance from
the viewing position.
• E.g. lines closest to the viewing position are displayed with higher intensities,
and lines farther away are displayed with lower intensities.
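The blending described above can be sketched as a linear interpolation toward the background color. This is an illustrative implementation; the linear falloff and the depth range [d_min, d_max] are assumed parameters:

```python
def depth_cue(color, background, depth, d_min=0.0, d_max=1.0):
    """Blend an RGB color toward the background as depth increases."""
    t = (depth - d_min) / (d_max - d_min)  # 0 at nearest, 1 at farthest
    t = max(0.0, min(1.0, t))              # clamp to the blending range
    return tuple((1 - t) * c + t * b for c, b in zip(color, background))

# A near red line keeps full intensity; a far one fades toward black.
print(depth_cue((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), depth=0.0))   # (1.0, 0.0, 0.0)
print(depth_cue((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), depth=0.75))  # (0.25, 0.0, 0.0)
```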
Visible line and surface identification :
I. When we view a picture containing non-transparent objects and surfaces, we cannot see
those objects that lie behind objects closer to the eye.
II. We must remove these hidden surfaces to get a realistic screen image. The identification and
removal of these surfaces is called the hidden-surface problem.
Methods for removing hidden surfaces fall into two classes:
• Object-space methods
• Image-space methods
Depth Buffer (Z-Buffer) Method :
 It is an image-space approach. The basic idea is to test the Z-depth of each surface to determine the closest
visible surface.
 To let closer polygons override farther ones, two buffers, named the frame buffer and the depth
buffer, are used.
The depth buffer is used to store a depth value for each (x, y) position as surfaces are processed, with
0 ≤ depth ≤ 1.
The frame buffer is used to store the intensity (color) value at each position (x, y).
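The per-pixel test at the heart of the method can be sketched as follows. This assumes depths normalized to [0, 1] with 0 nearest, as in the slides; the buffer size and sample surfaces are illustrative:

```python
W, H = 4, 4
depth_buffer = [[1.0] * W for _ in range(H)]        # initialized to farthest depth
frame_buffer = [[(0, 0, 0)] * W for _ in range(H)]  # background color

def plot(x, y, depth, color):
    """Write the pixel only if this surface is closer than what is stored."""
    if depth < depth_buffer[y][x]:
        depth_buffer[y][x] = depth
        frame_buffer[y][x] = color

plot(1, 1, 0.8, (0, 0, 255))   # far blue surface
plot(1, 1, 0.3, (255, 0, 0))   # nearer red surface overrides it
print(frame_buffer[1][1], depth_buffer[1][1])  # (255, 0, 0) 0.3
```

Surfaces can be processed in any order: the depth test alone decides visibility, which is why this image-space method is so widely implemented in hardware.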
Scan-Line Method:
 The Edge Table − It contains the coordinate endpoints of each line in the scene, the inverse slope of
each line, and pointers into the polygon table to connect edges to surfaces.
 The Polygon Table − It contains the plane coefficients, surface material properties, other surface
data, and possibly pointers to the edge table.
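The two tables might be laid out as records like the following. The field names here are assumptions chosen to match the descriptions above, not any specific implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    y_min: float
    y_max: float
    x_at_ymin: float
    inv_slope: float   # dx/dy, used to step x from one scan line to the next
    polygon_ids: list  # pointers into the polygon table

@dataclass
class Polygon:
    plane: tuple       # (A, B, C, D) plane coefficients
    material: dict     # surface material properties
    edges: list = field(default_factory=list)  # back-pointers to the edge table

e = Edge(y_min=0, y_max=5, x_at_ymin=2.0, inv_slope=0.5, polygon_ids=[0])
print(e.x_at_ymin + e.inv_slope)  # x intersection on the next scan line: 2.5
```

Storing the inverse slope lets the method advance each edge's x intersection incrementally per scan line instead of recomputing intersections from scratch.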
Area-Subdivision Method:
A. Surrounding surface − One that completely encloses the area.
B. Overlapping surface − One that is partly inside and partly outside the area.
C. Inside surface − One that is completely inside the area.
D. Outside surface − One that is completely outside the area.
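The four categories can be checked with simple bounding-box tests. This is a hypothetical helper using axis-aligned rectangles `(x_min, y_min, x_max, y_max)`; real implementations test the actual surface against the screen area:

```python
def classify(surface, area):
    """Classify a surface's bounding box against a screen area."""
    sx0, sy0, sx1, sy1 = surface
    ax0, ay0, ax1, ay1 = area
    if sx0 <= ax0 and sy0 <= ay0 and sx1 >= ax1 and sy1 >= ay1:
        return "surrounding"   # completely encloses the area
    if sx1 <= ax0 or sx0 >= ax1 or sy1 <= ay0 or sy0 >= ay1:
        return "outside"       # completely outside the area
    if sx0 >= ax0 and sy0 >= ay0 and sx1 <= ax1 and sy1 <= ay1:
        return "inside"        # completely inside the area
    return "overlapping"       # partly inside, partly outside

area = (0, 0, 10, 10)
print(classify((-5, -5, 20, 20), area))  # surrounding
print(classify((2, 2, 4, 4), area))      # inside
print(classify((8, 8, 15, 15), area))    # overlapping
print(classify((20, 20, 30, 30), area))  # outside
```

The method recursively subdivides any area whose classification is ambiguous until each subarea is trivially resolvable or reaches pixel size.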
A-Buffer Method:
The A-buffer expands on the depth buffer method to allow transparencies. The key data structure in
the A-buffer is the accumulation buffer.
Each position in the A-buffer has two fields −
 Depth field − It stores a positive or negative real number
 Intensity field − It stores surface-intensity information or a pointer value
 If depth >= 0, the number stored at that position is the depth of a single surface overlapping the
corresponding pixel area. The intensity field then stores the RGB components of the surface color
at that point and the percent of pixel coverage.
If depth < 0, it indicates multiple-surface contributions to the pixel intensity. The intensity field then
stores a pointer to a linked list of surface data. The surface buffer in the A-buffer includes −
 RGB intensity components
 Opacity Parameter
 Depth
 Percent of area coverage
 Surface identifier
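The sign-of-depth convention above can be sketched like this. The tuple layout and field names are illustrative; real A-buffer implementations pack these fields far more compactly:

```python
def describe(pixel):
    """Interpret an A-buffer position as (depth, data)."""
    depth, data = pixel
    if depth >= 0:
        # Single surface: data holds the RGB color for this pixel.
        return f"single surface at depth {depth}, color {data}"
    # depth < 0: data is a list of per-surface records (the linked list).
    return f"{len(data)} overlapping surfaces: {[s['id'] for s in data]}"

opaque = (0.4, (255, 0, 0))
blended = (-1.0, [{"id": "A", "rgb": (255, 0, 0), "opacity": 0.5},
                  {"id": "B", "rgb": (0, 0, 255), "opacity": 1.0}])
print(describe(opaque))
print(describe(blended))
```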
Surface Rendering:
 Surface rendering involves setting the surface intensity of objects according to the lighting
conditions in the scene and according to assigned surface characteristics. The lighting
conditions specify the intensity and positions of light sources and the general background
illumination required for a scene.
 On the other hand, the surface characteristics of objects specify the degree of transparency
and the smoothness or roughness of the surface. Usually, surface rendering methods are
combined with perspective and visible-surface identification to generate a high degree of
realism in a displayed scene.
Surface Rendering:
Set the surface intensity of objects according to:
 Lighting conditions in the scene
 Assigned surface characteristics
Lighting specifications include the intensity and positions
of light sources and the general background illumination
required for a scene.
Surface properties include the degree of transparency and
how rough or smooth the surfaces are.
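A minimal sketch of "setting surface intensity from lighting conditions and surface characteristics", assuming a simple Lambertian (diffuse) model with one ambient term and one point light; the coefficient values are illustrative, and both vectors are assumed to be unit length:

```python
def lambert_intensity(normal, to_light, k_ambient=0.2, k_diffuse=0.8,
                      ambient=0.3, light=1.0):
    """Ambient + diffuse intensity: I = ka*Ia + kd*Il*max(0, N.L)."""
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, to_light)))
    return k_ambient * ambient + k_diffuse * light * n_dot_l

# A surface facing the light directly vs. one facing away from it.
print(lambert_intensity((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))   # 0.86
print(lambert_intensity((0.0, 0.0, 1.0), (0.0, 0.0, -1.0)))  # 0.06 (ambient only)
```

The surface characteristics from the slide (reflectivity, roughness) enter through the coefficients, while the lighting conditions enter through the light intensity and direction.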