Normal mapping is a technique used in 3D computer graphics to add surface detail to 3D models without increasing their polygon count. It works by encoding normal-vector information for lighting calculations into RGB texture maps, allowing more detailed surface shapes and lighting than the base polygon mesh alone could produce. The technique was introduced in the late 1990s and became widely used in video games in the early 2000s, as hardware-accelerated shaders made real-time normal mapping practical. It offers a good quality-to-performance ratio for rendering complex surface detail.
Normal Mapping
In 3D computer graphics, normal mapping, or "Dot3 bump mapping", is a technique used for faking the lighting of bumps and dents – an implementation of bump mapping. It is used to add details without using more polygons.
A common use of this technique is to greatly enhance the appearance and detail of a low-polygon model by generating a normal map from a high-polygon model or height map.
Normal maps are commonly stored as regular RGB images where the RGB components correspond to the X, Y, and Z coordinates, respectively, of the surface normal.
History of Normal Mapping
The idea of taking geometric details from a high-polygon model was introduced in "Fitting Smooth Surfaces to Dense Polygon Meshes" by Krishnamurthy and Levoy, Proc. SIGGRAPH 1996, where this approach was used for creating displacement maps over NURBS.
In 1998, two papers were presented with key ideas for transferring details with normal maps from high- to low-polygon meshes: "Appearance-Preserving Simplification" by Cohen et al., SIGGRAPH 1998, and "A general method for preserving attribute values on simplified meshes" by Cignoni et al., IEEE Visualization '98.
The former introduced the idea of storing surface normals directly in a texture, rather than displacements, though it required the low-detail model to be generated by a particular constrained simplification algorithm. The latter presented a simpler approach that decouples the high- and low-polygon meshes and allows the recreation of any attributes of the high-detail model (color, texture coordinates, displacements, etc.) in a way that does not depend on how the low-detail model was created. The combination of storing normals in a texture with this more general creation process is still used by most currently available tools.
How It Works?
To calculate the Lambertian (diffuse) lighting of a surface, the unit vector from the shading point to the light source is dotted with the unit vector normal to that surface; the result is the intensity of the light on that surface.
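As a minimal illustration (not from the original slides; the names and vectors are made up), the Lambertian term is just a clamped dot product of unit vectors:

    # Lambertian (diffuse) term: dot of unit normal and unit to-light vector.
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def lambert_intensity(normal, to_light):
        # Clamp at zero so surfaces facing away from the light are unlit.
        return max(0.0, dot(normal, to_light))

    # A surface facing straight up, lit from directly above: full intensity.
    print(lambert_intensity((0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))  # 1.0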
Imagine a polygonal model of a sphere: you can only approximate the shape of the surface. By using a 3-channel bitmap textured across the model, more detailed normal-vector information can be encoded. Each channel in the bitmap corresponds to a spatial dimension (X, Y, and Z).
These spatial dimensions are relative to a constant coordinate system for object-space normal maps, or to a smoothly varying coordinate system (based on the derivatives of position with respect to texture coordinates) in the case of tangent-space normal maps. This adds much more detail to the surface of a model, especially in conjunction with advanced lighting techniques.
Since a normal will be used in the dot product calculation for the diffuse lighting computation, we can see that {0, 0, –1} would be remapped (once the sign of z is flipped, as described below) to the {128, 128, 255} values, giving the kind of sky-blue color seen in normal maps (the blue (z) channel is the depth coordinate, while the R and G channels hold the flat x and y screen coordinates). Likewise, {0.3, 0.4, –0.866} would be remapped to ({0.3, 0.4, –0.866}/2 + {0.5, 0.5, 0.5}) * 255 = {0.15 + 0.5, 0.2 + 0.5, –0.433 + 0.5} * 255 = {0.65, 0.7, 0.067} * 255 = {166, 179, 17}.
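A small Python sketch of this remapping (the helper name and the flip_z switch are illustrative; the sign flip itself is explained below) reproduces both values:

    # Map a unit normal with components in [-1, 1] to RGB bytes in [0, 255].
    def encode_normal(n, flip_z=True):
        x, y, z = n
        if flip_z:
            z = -z  # flip z so normals facing the viewer encode as blue
        return tuple(int((c / 2.0 + 0.5) * 255 + 0.5) for c in (x, y, z))

    print(encode_normal((0.0, 0.0, -1.0)))                   # (128, 128, 255)
    print(encode_normal((0.3, 0.4, -0.866), flip_z=False))   # (166, 179, 17)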
The sign of the z-coordinate (the blue channel) must be flipped to match the normal map's normal vector with that of the eye (the viewpoint or camera) or the light vector. Since negative z values mean that the vertex is in front of the camera (rather than behind it), this convention guarantees that the surface shines with maximum strength precisely when the light vector and normal vector are coincident.
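Reading the map back simply reverses the packing; a minimal sketch under the same convention (the function name is illustrative):

    # Undo the (n/2 + 0.5) * 255 packing and flip z back.
    def decode_normal(rgb):
        x, y, z = (c / 255.0 * 2.0 - 1.0 for c in rgb)
        return (x, y, -z)

    print(decode_normal((128, 128, 255)))  # approximately (0, 0, -1)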
Example of a normal map (center) with the scene it was calculated from (left) and the result when applied to a flat surface (right).
Calculating Tangent Space
In order to find the perturbation in the normal, the tangent space must be correctly calculated. Most often the normal is perturbed in a fragment shader after applying the model and view matrices. Typically the geometry provides a normal and tangent.
The tangent is part of the tangent plane and can be transformed simply with the linear part of the matrix (the upper 3x3). However, the normal needs to be transformed by the inverse transpose. Most applications will want the cotangent (bitangent) to match the transformed geometry (and its associated UVs).
So instead of enforcing the cotangent to be perpendicular to the tangent, it is generally preferable to transform the cotangent just like the tangent. Let t be the tangent, b the cotangent, n the normal, M3x3 the linear part of the model matrix, and V3x3 the linear part of the view matrix. The tangent and cotangent are then transformed by V3x3 · M3x3, while the normal is transformed by the inverse transpose of that same matrix.
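A minimal numpy sketch of these transforms (the function name is illustrative; renormalization is included because the matrices may contain non-uniform scale):

    import numpy as np

    def transform_tangent_frame(t, b, n, M3x3, V3x3):
        MV = V3x3 @ M3x3
        t_out = MV @ t                    # tangent: linear part only
        b_out = MV @ b                    # cotangent transformed like the tangent
        n_out = np.linalg.inv(MV).T @ n   # normal: inverse transpose
        normalize = lambda v: v / np.linalg.norm(v)
        return normalize(t_out), normalize(b_out), normalize(n_out)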
Normal Mapping in Video Games
Interactive normal-map rendering was originally only possible on PixelFlow, a parallel rendering machine built at the University of North Carolina at Chapel Hill. It was later possible to perform normal mapping on high-end SGI workstations using multi-pass rendering and framebuffer operations, or on low-end PC hardware with some tricks using paletted textures.
However, with the advent of shaders in personal computers and game consoles, normal mapping became widely used in commercial video games starting in late 2003.
Normal mapping's popularity for real-time rendering is due to its good quality-to-processing-requirements ratio versus other methods of producing similar effects. Much of this efficiency is made possible by distance-indexed detail scaling, a technique which selectively decreases the detail of the normal map of a given texture (cf. mipmapping), meaning that more distant surfaces require less complex lighting simulation.
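As a rough sketch of that idea, mip selection commonly follows a log2 rule on the normal map's texel footprint per screen pixel (the helper below is hypothetical):

    import math

    # Each mip level halves resolution, so the level is log2 of how many
    # texels fall under one screen pixel along one axis.
    def mip_level(texels_per_pixel):
        return max(0.0, math.log2(max(texels_per_pixel, 1e-8)))

    print(mip_level(8.0))  # 3.0: a distant surface samples an 8x-downsampled map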
Basic normal mapping can be implemented on any hardware that supports palettized textures. The first game console to have specialized normal-mapping hardware was the Sega Dreamcast. However, Microsoft's Xbox was the first console to widely use the effect in retail games. Out of the sixth-generation consoles, only the PlayStation 2's GPU lacks built-in normal-mapping support. Games for the Xbox 360 and the PlayStation 3 rely heavily on normal mapping and are beginning to implement parallax mapping.
The Nintendo 3DS has been shown to support normal mapping, as demonstrated by Resident Evil Revelations and Metal Gear Solid: Snake Eater.