Chapter 3
Pixels, Images and Image Files
In Chapter 1, we saw that the computer display can be treated as a two-
dimensional grid of pixels. Pixels can be addressed by their (x,y) coordinates
which represent their horizontal and vertical distance from the origin. All shapes
(even three dimensional ones) ultimately need to identify the location of the
appropriate pixels on the screen for display, a process called rasterization. In
Chapter 2, we learned how to transform these shapes to achieve motion. We used
OpenGL to render many of the concepts we learned. The two-dimensional array
of colors that we created to define our image can be saved in a file for later use.
This file is referred to as a raster image file.
Images need not be created only by the computer. Digital cameras, scanners,
etc. can all save images to a file and send the file to the computer. The real value
of saving images on the computer is what can be done to the image once it
resides on the computer. You may have played with photo editors, which let you
manipulate your photos, to reduce red eye for example. With good image
processing tools, there is no end to the magic you can do. Photoshop by Adobe
provides one of the most sophisticated image processing packages on the market.
In this chapter, we will learn the following concepts:
Image files (in particular, the BMP file format)
How to save your OpenGL images to a file
How to view image files within your OpenGL program
How to manipulate pixel data from an image
3.1 Raster Image Files
A raster image file is a file that stores the mapping of pixel coordinates to color
values. That is, the file saves the color at each pixel coordinate of the image. The
saved image files can then be used for various purposes such as printing, editing,
etc. Some commonly used image file types are: BMP, TIFF, GIF, and JPEG files.
A raster file contains graphical information about the colors of the pixels. It
also contains information about the image itself, such as its width and height,
format, etc. Let us look into the details of a commonly used raster file format
called BMP.
The BMP File Format
The BMP file format is the most common graphics format used on the Windows
platform. The motivation behind creating the BMP format was to store raster
image data in a format independent of the color scheme used on any particular
hardware system. The color schemes supported in BMP are monochrome, color
indexed mode, and RGB mode. Support for a transparency layer (alpha) is also
provided. Let us look into the structure of a BMP file. The data in BMP files is
stored sequentially in a binary format and is sometimes compressed. Table 3.1
shows the basic format of the BMP file, which consists of either 3 or 4 parts:

Header
Image information
Color palette (indexed mode only)
Pixel data

Table 3.1: Parts of the BMP file
The first part of the file is the header. The header contains information about the
type of image (BM, for BMP), its size, and the position of the actual image data with
respect to the start of the file. The header is defined in a structure as shown below:
typedef struct tagBITMAPFILEHEADER {
    WORD  bfType;        // must be "BM"
    DWORD bfSize;        // size of the file
    WORD  bfReserved1;
    WORD  bfReserved2;
    DWORD bfOffBits;     // offset to the start of the image data
} BITMAPFILEHEADER;
The second part of the BMP file is the image information section. Information
such as the image width and height, type of compression and the number of
colors is contained in the information header.
The image information data is described in the structure given below. The
fields of most interest are the image width and height, the number of bits per
pixel (which should be 1, 4, 8, or 24), and the compression type. Compression
is a technique applied to files to reduce their size. Compression techniques are
especially useful in the case of raster image files, which tend to be huge.
typedef struct tagBITMAPINFOHEADER {
    DWORD biSize;
    LONG  biWidth;
    LONG  biHeight;
    WORD  biPlanes;
    WORD  biBitCount;
    DWORD biCompression;
    DWORD biSizeImage;
    LONG  biXPelsPerMeter;
    LONG  biYPelsPerMeter;
    DWORD biClrUsed;
    DWORD biClrImportant;
} BITMAPINFOHEADER;
The compression types supported by BMP are listed below:
0: no compression
1: 8 bit run length encoding
2: 4 bit run length encoding
3: RGB bitmap with mask
We will assume no compression of bitmap files in this book. Refer to
[BOVI00] if you would like more information on imaging and compression
techniques.
If the image is in index color mode, then the color table information follows
the information section. We will assume only RGB mode files in this book.
Last of all is the actual pixel data. The pixel data is stored by rows, left to
right within each row. The rows are stored from bottom to top. The bits of each
pixel are packed into bytes, and each scan line is padded with zeros to be aligned
with a 32-bit boundary.
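To make the layout concrete, here is a minimal sketch (the function name and parameters are hypothetical, and an uncompressed 24-bit image is assumed) of how a pixel's bytes could be located within the padded rows:

// Return a pointer to the B,G,R bytes of pixel (x, y), where row 0 is the
// bottom scan line exactly as stored in the file.
unsigned char *pixel_at(unsigned char *data, long width, long x, long y)
{
    long rowSize = ((width * 3) + 3) & ~3;   // each row is padded to a 4-byte boundary
    return data + y * rowSize + x * 3;       // [0] = blue, [1] = green, [2] = red
}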
Since the bitmap image format is so simple, reading bitmap files is also very
simple. The simplest data to read is 24-bit true color images. In this case the
image data follows immediately after the information header. It consists of three
bytes per pixel in BGR (yes, it's in reverse) order. Each byte gives the intensity
for that color component-0 for no color and 255 for fully saturated. A sampling
of code required to read a 24-bit image BMP file would look as follows:
// Read the file header and any following bitmap information...
if ((fp = fopen(filename, "rb")) == NULL)
    return (NULL);

// Read the file header
fread(&header, sizeof(BITMAPFILEHEADER), 1, fp);

// Check for "BM" (the two characters appear reversed when read into a WORD)
if (header.bfType != 'MB')
{
    // Not a bitmap file
    fclose(fp);
    return (NULL);
}

// Read the bitmap information section
infosize = header.bfOffBits - sizeof(BITMAPFILEHEADER);
fread(*info, 1, infosize, fp);

imgsize = (*info)->bmiHeader.biSizeImage;
// Sometimes the image size is not set in the file, so compute it
if (imgsize == 0)
    imgsize = ((*info)->bmiHeader.biWidth *
               (*info)->bmiHeader.biBitCount + 7) / 8 *
               abs((*info)->bmiHeader.biHeight);

fread(pixels, 1, imgsize, fp);
Microsoft Windows provides the BMP file data structure definitions in
<wingdi.h>. On a Linux platform, you may have to define the structs yourself.
The pixels variable is a pointer to the actual bytes of color data in the image.
Conceptually, you can think of it as rows and columns of cells containing the
color values of each pixel. (Can you see the analogy between this structure and
the frame buffer?)
Fig.3.1: Array of RGB values
The code required to save a BMP file, given the pixel data, is analogous.
We have provided C code to read and save BMP files. It can be found in the
files bmp.cpp and bmp.h under the directory where you installed the example
code. The functions have the following signatures:
extern GLubyte *ReadBitmap(const char *filename, BITMAPINFO **info);
extern int SaveBitmap(const char *filename, BITMAPINFO *info, GLubyte *bits);
where BITMAPINFO is a struct containing the BITMAPINFOHEADER struct and a
color map (in the case of color index mode files).
SaveBitmap requires and ReadBitmap returns a pointer to the bytes of color
information. In bmp.h, we have commented out the BMP header struct definitions.
If you use Microsoft libraries, then you will not need to define them as they are
defined in wingdi.h.
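As a quick illustration of the two calls (a sketch only; the file names are hypothetical and an existing 24-bit BMP is assumed), reading an image and immediately saving a copy might look like this:

BITMAPINFO *info = NULL;
GLubyte *bits = ReadBitmap("test.bmp", &info);   // load header info and pixel data
if (bits != NULL)
    SaveBitmap("copy.bmp", info, bits);          // write an identical copy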
3.2 Bitmaps and Pixmaps
The pixels variable we saw in the last section is referred to as a pixmap. A pixmap is
simply a structure that holds the color information of an image in memory. This
information can then be transferred to the frame buffer for rendering on screen or can
be saved back again (presumably after being manipulated in some way) as a file.
Historically, pixmap defines a color image whereas its counterpart, the bitmap is
monochrome (black and white only). Bitmaps have only one bit for every pixel (a value
of 1 is white and a value of 0 is black). But many people use the terms interchangeably.
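For example, a single pixel in a packed monochrome bitmap row could be extracted with a sketch like the following (the function name is hypothetical; most-significant-bit-first packing, the usual BMP convention, is assumed):

// Return the value (0 or 1) of pixel x in a packed monochrome row
int mono_pixel(const unsigned char *row, int x)
{
    return (row[x / 8] >> (7 - (x % 8))) & 1;   // 1 = white, 0 = black
}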
Let us see how we can use OpenGL to save our work to an image file.
Saving your work to an Image file
The OpenGL function glReadPixels() reads a block of pixels from the frame
buffer and fills in the pixel data you point it to: yes, it hands you back a pixmap!
The exact signature for the glReadPixels function is

void glReadPixels(GLint x, GLint y, GLsizei width, GLsizei height,
                  GLenum format, GLenum type, GLvoid *pixels)
The function returns the pixel data from the frame buffer, starting with the pixel
whose lower left corner is at location (x, y), into client memory pointed to by the
variable pixels. glReadPixels returns values for each pixel
(x + i, y + j) for 0 <= i < width and 0 <= j < height.
Pixels are returned in row order from the lowest to the highest row and from
left to right in each row. Typically, one would like to read the entire content of
the frame buffer, as defined by the extents of the viewport.
format specifies the color mode of the pixmap: GL_COLOR_INDEX for color-
indexed pixels, GL_RGB for RGB pixels, and GL_BGR_EXT for RGB-mode
BMP files (since BMP files reverse the order of the R, G, and B components).
type specifies the bit size of each color component. GL_BYTE or
GL_UNSIGNED_BYTE is used for 8-bit values (which are used by RGB-mode
images), GL_BITMAP is used for one-bit values (monochrome images), etc.
Several other parameters control the processing of the pixel data before it is
placed into client memory. These parameters are set with three commands:
glPixelStore, glPixelTransfer, and glPixelMap. We shall not look into the details
of these calls here. Interested readers are encouraged to refer to [SHRE03] for
more information. For your convenience, we have defined a function:

GLubyte *ReadBitmapFromScreen(BITMAPINFO **info)
that reads the current frame buffer, constructs an appropriate BITMAPINFO, and
returns the constructed pixmap. This function is declared in the file: bmp.h and
defined in the file: bmp.cpp. It is useful (but not necessary) to look at the code to
understand how to set up OpenGL to read pixel values from the frame buffer.
Recall the stick figure we drew in Example1-3. In Example3-1, we save the
stick figure to a BMP file called stick.bmp in the same directory as the
executable. The code to read and save the state of the frame buffer is as follows:
BITMAPINFO *info;
GLubyte *pixels = ReadBitmapFromScreen(&info);
SaveBitmap("stick.bmp", info, pixels);
These lines of code are to be called after all drawing operations are complete,
so that the frame buffer is completely defined. The generated BMP file can
now be loaded into your Microsoft Paint or any other program for further editing.
The code to draw the stick figure and save it to a file can be found under
Example3_1/Example3_1.cpp. You will need to compile and link it with the
bmp.cpp file that we have provided.
Loading an Image file and using it in OpenGL
Let us see how we can use OpenGL to load an image file.
OpenGL provides the library function glDrawPixels() to take a bitmap and
transfer its contents to the frame buffer for drawing to the screen. The function

glDrawPixels(GLsizei width, GLsizei height, GLenum format, GLenum type, const GLvoid *pixels)

draws pixmaps onto the screen. It accepts five arguments:
width and height specify the dimensions of the pixmap.
format specifies the color mode of the pixmap: GL_BGR_EXT for RGB-mode
BMP files.
type specifies the bit size of each color component: in our case,
GL_UNSIGNED_BYTE.
And finally comes the actual pixel data.
The location of the bottom left corner of the pixmap on the application
window is determined by the most recent call to the function

glRasterPos2f(x, y)

where x and y are the world coordinates along the x- and y-axes. This position is
affected by any transformations we may apply to the current object state.
A point to note about bitmaps and pixmaps is that they are defined upside
down! You can specify a negative height to invert the image.
In Example3-2, we read in a 64 by 64-sized pixmap. The image has a black
background and a yellow colored smiley face.
Fig.3.2: A Smiley face
We keep redrawing this pixmap, bouncing it up and down in our window.
Additionally, we also bounce our ball from Chapter 2 so as to demonstrate that
we can still make valid OpenGL drawing requests.
void Display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glRasterPos2f(100., ypos);
    // Draw the loaded pixels
    glDrawPixels(info->bmiHeader.biWidth,
                 info->bmiHeader.biHeight,
                 GL_BGR_EXT, GL_UNSIGNED_BYTE, pixels);
    // Draw the ball at the same Y position
    draw_ball(100., ypos);
    // Don't wait, start flushing OpenGL calls to the display buffer
    glFlush();
    glutSwapBuffers();
    // 120 is the max Y value in our logical coordinate system
    ypos = ypos + 0.1;
    if (ypos > 120.)
        ypos = 0;
    // Force a redisplay
    glutPostRedisplay();
}
Notice that we call the ball drawing code using its center as a parameter. This
approach ensures that we draw the ball with a translated center without using the
OpenGL transformation functions. The reason is that the function glRasterPos is
affected by transformations. This, in turn, means that it can be hard to control the
exact location of our smiley face. We shall see techniques on how to handle this
issue in a later chapter. The main() function reads in the desired bitmap.
BITMAPINFO "info; 11 Bitmap information
GLubyte "pixels; 11 Actual pixel data
11 read in image file as specified on the command line
if (argc > 11
pixels - ReadBitmap(argv[ll, &info);
1
The code for this example can be found in Example3_2/Example3_2.cpp.
Default images are located under the directory Images under the installed folder
for the sample programs.
Notice the location of the pixmap versus the ball. Why is the ball located at
a position lower than the image? Try to change the code so that the ball and the
image bounce randomly along the x- and y-axes. For a really cool project, make
the program exit when the two shapes collide (a simple overlap test is sketched
below). In the next chapter, we will see how to make these kinds of shapes move
based on user input.
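As a starting point for the collision idea, a simple bounding-box overlap test is enough (a sketch only; the function name is hypothetical, and the positions and sizes would need to come from your own variables):

// Returns 1 if two axis-aligned rectangles overlap, 0 otherwise
int boxes_overlap(float x1, float y1, float w1, float h1,
                  float x2, float y2, float w2, float h2)
{
    return (x1 < x2 + w2) && (x2 < x1 + w1) &&
           (y1 < y2 + h2) && (y2 < y1 + h1);
}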
Loading more than one image
In most games today, you see characters moving across the screen in front of
a static background image. In Example3-3, we read in two pixmaps. One is the
smiley face we just saw, and the other is a background image of the Alps. We
make the background image cover the entire window by scaling it up using the
function
glPixelZoom(x, y)

This function scales the image by the specified x and y factors.
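For example, if you wanted the background to fill the viewport exactly, the zoom factors could be derived from the viewport and image sizes (a sketch that reuses the bginfo variable from the example and assumes the raster position has been set to the lower left corner of the window):

GLint vp[4];
glGetIntegerv(GL_VIEWPORT, vp);
GLfloat zoomx = (GLfloat)vp[2] / bginfo->bmiHeader.biWidth;
GLfloat zoomy = (GLfloat)vp[3] / bginfo->bmiHeader.biHeight;
glPixelZoom(zoomx, zoomy);    // stretch the pixmap to cover the whole window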
The code to draw the background pixmap is shown below:
if (bgpixels) {
    glRasterPos2i(0, 0);
    // Scale the image appropriately and then draw the background image
    glPixelZoom(0.5, 0.8);
    glDrawPixels(bginfo->bmiHeader.biWidth,
                 bginfo->bmiHeader.biHeight,
                 GL_BGR_EXT, GL_UNSIGNED_BYTE, bgpixels);
}
The motion of the smiley face is the same as in Example3-2. The entire code can
be found in Example3_3/Example3_3.cpp. In the above example, we redraw the
entire background image at every call to the display function. Unfortunately, this
approach leads to significantly slower performance. For complex images, the
computational and redisplay rate of the computer may not be fast enough to
display convincing motion. In the movie world, the images are saved and
replayed at a later time, so the slow re-display rate is not an issue.
If you save each image drawn into a bitmap file and name the saved files
sequentially (like test1.bmp, test2.bmp, test3.bmp), you can string the images
together using a movie editor such as QuickTime Pro. The editor strings together
the images in a sequence to generate a movie file, usually an MPEG or an
MPEG-4 file. The movie file can be transferred directly to tape, or can be played
by a movie player such as QuickTime. The movie file is optimized for playback
and since the player is not actually calculating the images, the playback is fast
enough to convey believable motion. This technique is also employed in
streaming videos over the Internet.
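A sketch of how such a frame sequence might be captured with the helper functions from bmp.cpp (the frame count and file names here are purely illustrative, and the usual headers are assumed):

char name[64];
BITMAPINFO *frameinfo;
for (int frame = 1; frame <= 100; frame++)
{
    // ... update positions and redraw the scene here ...
    GLubyte *framepixels = ReadBitmapFromScreen(&frameinfo);
    sprintf(name, "test%d.bmp", frame);            // test1.bmp, test2.bmp, ...
    SaveBitmap(name, frameinfo, framepixels);
    // free framepixels and frameinfo here if the helpers allocate them
}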
For real-time games, where saving images and then replaying them is not an
option, speed can be achieved using other tricks. One technique is to re-
draw only the pixmap of the character, leaving the background as is, using what
we call overlay planes.
Graphical overlay planes are made up of additional memory positioned
logically on top of the frame buffer (thus the name overlay). Typically, this
creates an overlay buffer that does not disturb the contents of the existing frame
buffer, as shown in Fig.3.3.
Fig.3.3: The overlay plane concept (an overlay buffer, in which a value of 0 marks a transparent pixel, sits logically above the frame buffer)
Drawing and erasing in the overlay plane does not affect the frame buffer.
Anything drawn in the overlay plane is always visible, and usually a color of 0
renders the overlay pixel transparent (that is, you can see the frame buffer at these
values). As the frame buffer is drawn, the x-y-coordinates of each pixel are checked
against the overlay buffer pixel to see if it is nontransparent. If so, then the overlay
pixel is drawn instead. Popup menus are usually implemented using overlay planes.
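Conceptually, the decision made for each displayed pixel can be written as a one-liner (a sketch with hypothetical names; real overlay hardware performs this selection directly, without any software loop):

// Pick the overlay pixel unless it is transparent (color 0), else the frame buffer pixel
unsigned int displayed_pixel(unsigned int overlay, unsigned int frame)
{
    return (overlay != 0) ? overlay : frame;
}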
Overlay planes are not supported natively by most standard graphics cards,
so we do not implement them in this book. Refer to [SHRE03] for more details
on how to develop code using overlay planes.
There are other ways we can get around the speed issue when performing
intensive pixel copying and erasing. A common technique is to use logical
operations on the pixels. Logical operations form the basis for many image
processing techniques, so we will devote the next section to them.
3.3 Logical Operations
Logical operations are performed between two data bits (except for the NOT
operation, which is performed on one). Bits can be either 1 or 0 (sometimes
referred to as TRUE and FALSE). The most basic logical operators that can be
performed on these bits are: AND, OR, XOR, and INVERT/NOT. These
operators are essential to performing digital math operations. Table 3.2 shows the
values (truth table) for these operations.
Since pixels are nothing more than a series of bits in a buffer, they can be
combined using bitwise logical operations (that is, we apply the logical
operation to each corresponding bit). The operations are applied between the
incoming pixel values (the source) and those currently in the frame buffer (the
destination). The resultant pixel values are saved back in the frame buffer. For
true color pixels, the operations are performed on the corresponding bits of the
two pixel values, yielding the final pixel value.

Operation   Value
0 AND 0     0
1 AND 0     0
1 AND 1     1
0 OR 0      0
1 OR 0      1
1 OR 1      1
0 XOR 0     0
1 XOR 0     1
1 XOR 1     0
NOT(0)      1
NOT(1)      0

Table 3.2: Logical operations
Logical operations are especially useful on bit-blt type machines. These
machines allow you to perform an arbitrary logical operation on the incoming
data and the data already present, ultimately replacing the existing data with the
results of the operation, all in hardware. Since this process can be implemented
fairly cheaply and quickly in hardware, many such machines are available.
Gaming hardware, such as the Nintendo and Xbox consoles, supports these
operations in hardware for fast implementations of pixel/bit copying and drawing.
In Fig. 3.4, we show the OR operator applied to two monochrome (single
bit) pixel values. The bits with value 1 are shown in a white color, and those with
value 0 are shown in black.
Fig.3.4: The OR operation (incoming pixels, destination pixels in the frame buffer, and the resultant pixels saved back into the frame buffer)
ORing the source over the destination combines the two pixel values. Zero-
value pixels in the source are effectively transparent pixels, since the destination
retains its pixel value at these points. The same operation can be performed on
RGB pixels by doing a bitwise OR operation.
An AND operator uses the source as a mask to select pixels in the destination
and clears out the rest, as shown in Fig.3.5.
Fig.3.5: The AND operation
Image compositing, where two images are combined in some manner to
create a third resultant image, makes heavy use of these two operations.
XORing the source over the destination guarantees that the source stands out.
This technique is used very often in rubber-banding, where a line or a rectangle
is dragged around by the user on top of a background image.
Fig.3.6: Rubber-banding: XOR of a line with the background image
The neat thing about an XOR operation is that applying it twice generates the
same values back again. That is,
(A XOR B) XOR A = B.
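You can verify this identity directly on byte values (a minimal sketch):

unsigned char A = 0xC3, B = 0x5A;     // any source and destination byte values
unsigned char drawn  = A ^ B;         // first XOR: draw the source over the destination
unsigned char erased = drawn ^ A;     // second XOR with the same source
// erased now equals B, so the original destination value is restored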
This operation is often used to erase a previously drawn image. If you have
ever tried selecting a number of programs on the Windows desktop by dragging
a selection rectangle around them, now you know that it is done using the XOR
operator.
Fig.3.7: XORing twice
In Example3-2 and Example3-3, we had to redraw the entire background.
Instead, we can use the XOR operation to first draw the smiley face and then
XOR once again to erase only the portion we drew into. This technique tends to
be a much faster operation than redrawing the entire background and is therefore
used extensively in gaming.
A note of caution: this approach works best for constant color images, since
the colors do get flipped on the first XOR. For multicolor images, the XOR
operation may result in psychedelic colors being produced. As a result, XOR
should be used only when the image colors are designed to work for it. Let us
see how we can modify Example3-1 to use logical operations.
The OpenGL command to set a logical operation to be applied to incoming
values is

glLogicOp(op)

The operation can be of type GL_XOR, GL_AND, GL_OR, or GL_INVERT (a
bitwise NOT). Logical operations need to be enabled before they will have any
effect. To do this, call the function

glEnable(GL_COLOR_LOGIC_OP);

glDisable(GL_COLOR_LOGIC_OP) will disable logical operations from occurring.
In the display code shown below, we clear out the background only once at
startup. Then we use two XOR operations, one to draw the smiley face and then
another to erase it. When you run the program you will notice that this process
does flip the color of the smiley face!
if (FIRSTDISPLAY) {
    // Only clear the background the first time
    glClear(GL_COLOR_BUFFER_BIT);
    FIRSTDISPLAY = FALSE;
    // Enable logical operations
    glEnable(GL_COLOR_LOGIC_OP);
    // Set the logical operation to XOR
    glLogicOp(GL_XOR);
} else {
    // XOR incoming values with pixels in the frame buffer:
    // the next two lines erase the previously drawn image
    glRasterPos2f(prevx, prevy);
    glDrawPixels(info->bmiHeader.biWidth,
                 info->bmiHeader.biHeight,
                 GL_BGR_EXT, GL_UNSIGNED_BYTE, pixels);
}
// The next two lines draw in a new (XORed) image
glRasterPos2f(xpos, ypos);
glDrawPixels(info->bmiHeader.biWidth,
             info->bmiHeader.biHeight,
             GL_BGR_EXT, GL_UNSIGNED_BYTE, pixels);
An additional bonus: since the smiley face has a black background (recall
that black is the transparent color by default), the resultant display only draws
the face, not the entire rectangular image. The entire code can be found in
Example3_4/Example3_4.cpp. Try using the background image of the Alps and
see whether the XOR operation results in a desirable output.
3.4 Image Enhancements
The real value of saving images on the computer is image processing: what can
be done to the image once it is resident on the computer. Almost all production
studios use a processing tool to perform final image touchups, called post-
production, to enhance the quality of their images. These images can be from live
action footage or from computer-generated imagery. Effects like blending,
compositing, and cleaning up pixel values are used routinely in productions. The
idea behind image processing is simple: manipulate the pixel information of the
given image using some mathematical functions and save the result in a new
image file. We discuss two techniques in this section, namely, compositing and
red-eye removal.
Refer to [PORT84] for information on other image processing techniques.
Image Compositing
Compositing is the most widely used operation in the post-production of films,
commercials, and even TV broadcasts. The basic operation in image compositing
is to overlay one (usually nonrectangular) image on top of the other. Recall from
Example3-2, when we drew the smiley face on top of the background, we
displayed the entire image, not just the face, as we would have liked. XORing
flips the colors, so that is not always a good solution either. Compositing to the
rescue!
Various techniques are used for compositing images. One popular technique
is the blue screen process. First, the subject is photographed in front of an evenly
lit, bright, pure blue background. Then the compositing process, whether
photographic or electronic, replaces all the blue in the picture with the
background image, known as the background plate. In reality, any color can be
used for the background, but blue has been favored since it is the complementary
color to flesh tones. The source and destination images can come from live
action footage or be computer generated. Jurassic Park employed compositing
extensively to place CG-generated dinosaurs in front of real live-action shots.
The essential compositing algorithm is as shown below. It designates Blue or
the desired color as transparent. The source image is copied on top of a defined
destination image as follows:
for y = 1 to height
    for x = 1 to width
        if image[x, y] <> transparent pixel then
            copy image[x, y]
        else
            leave the destination unchanged

Fig. 3.8: Compositing Algorithm
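In C, and assuming 24-bit B,G,R pixel rows like those in our BMP pixmaps (the function name is hypothetical, pure blue is used as the transparent color, and row padding is ignored for simplicity), the same loop could be sketched as:

// Copy src over dst, skipping "transparent" (pure blue) source pixels
void composite(unsigned char *dst, const unsigned char *src, int width, int height)
{
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            const unsigned char *s = src + (y * width + x) * 3;
            unsigned char       *d = dst + (y * width + x) * 3;
            if (!(s[0] == 255 && s[1] == 0 && s[2] == 0))   // not pure blue?
            {
                d[0] = s[0]; d[1] = s[1]; d[2] = s[2];      // copy the pixel
            }
        }
}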
The only problem, of course, is that "transparent" color can't be used in the
source image. Another technique for compositing, now being used more
frequently, makes use of a mask or stencil. This technique uses logical
operations as follows:
Create a mask of the image. Many production houses create the mask
as part of the alpha channel of the image. The mask essentially
identifies the desired areas of the source image.
AND the source with the mask (not always required).
AND the background with the NOT of the mask.
OR the results to produce the final composite.
Fig. 3.9: Image composition using masks
We show below sample code for compositing the smiley face on top of the
background image of the Alps. The mask was created as a BMP image using
Adobe Photoshop. The entire code can be found under
Example3_5/Example3_5.cpp.
// First copy the background image into the frame buffer
glLogicOp(GL_COPY);
glRasterPos2i(0, 0);
glPixelZoom(0.5, 0.8);
glDrawPixels(bginfo->bmiHeader.biWidth, bginfo->bmiHeader.biHeight,
             GL_BGR_EXT, GL_UNSIGNED_BYTE, bgpixels);

// Perform an AND of the destination with NOT(source), where source = mask
glLogicOp(GL_AND_INVERTED);
glRasterPos2f(xpos, ypos);
glPixelZoom(1.0, 1.0);
glDrawPixels(maskinfo->bmiHeader.biWidth, maskinfo->bmiHeader.biHeight,
             GL_BGR_EXT, GL_UNSIGNED_BYTE, maskpixels);

// Perform an OR with the source (the smiley)
glLogicOp(GL_OR);
glDrawPixels(info->bmiHeader.biWidth, info->bmiHeader.biHeight,
             GL_BGR_EXT, GL_UNSIGNED_BYTE, pixels);
You will see a composite image of the smiley face on top of the Alps, just
what we wanted!
Red-Eye Treatment
Image enhancements are possible because we have access to the actual pixel
information of the image: we can target specific pixels based on some criteria
and then change their values. You may have used a very common feature in
image processing tools: red-eye removal.
Let us see how we can implement this feature for ourselves.
In Example3-6, we load in a smiley face with red eyes. The eyes are made
up of pixels with a color value of (255,0,0). We will target pixels with this color
and change them to blue.
To do this, we first determine the length of each row. Using the length, we
can point to any desired row in the pixmap as the pixel-row variable. Then we
loop through the columns of pixel values in this row.
If we determine the pixel value in any column has a red color, we change it
to blue. The code to do this is shown below:
// Length of each row:
// remember that each row of color values is padded to a 32-bit boundary
int length = (info->bmiHeader.biWidth * 3 + 3) & ~3;

// For each row
for (row = 0; row < info->bmiHeader.biHeight; row++) {
    pixel_row = pixels + row * length;
    // Now we can loop through all the pixel values in this row as shown
    for (col = 0; col < info->bmiHeader.biWidth; col++, pixel_row += 3) {
        // Is the pixel value at this row and col red?
        // Remember!! BMP files save color info in reverse order (B,G,R)
        if (pixel_row[0] == 0 && pixel_row[1] == 0 && pixel_row[2] == 255) {
            // Yes, change the pixel color to blue
            pixel_row[0] = 255;
            pixel_row[2] = 0;
        }
    }
}
The entire code can be found in Example3-6/Example3-6.cpp. When you
run this code, the display will show a blue-eyed smiley face! You can save the
result to a file to verify that we did indeed perform red-eye removal. This is an
oversimplification of the actual mechanics behind red-eye reduction, but you get
the idea. Now when you use a processing tool to get rid of the reds, you will
know what kind of technology is working in the background.
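If you would like to experiment further, a slightly less brittle test (still only a sketch, not what commercial tools actually do) is to flag pixels where red strongly dominates the other two channels instead of matching one exact value:

// Red-dominance test on a B,G,R pixel; the threshold and ratios are arbitrary
int looks_red(const unsigned char *p)
{
    return p[2] > 150 && p[2] > 2 * p[1] && p[2] > 2 * p[0];
}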
Summary
In this chapter, we have seen how images are stored on the computer. Any
kind of graphics, whether a visually realistic simulation of the environment or a
simple 2D drawing, is eventually saved as a 2D raster image on the computer.
Storing and viewing these images is just the tip of the iceberg: we can use these
images in our own projects and even modify them to achieve desired results and
special effects. The techniques we have discussed here are used extensively in
production suites for image processing and gaming. They form the basis for the
more complicated image enhancements and effects used in the industry. In the
next chapter, we shall put together all our current working knowledge to design
and develop a 2D game.