Chapter Three: Introduction to the Rendering Process with OpenGL (API)

COMPUTER GRAPHICS
COURSE NUMBER: COSC3072
PREREQUISITE: COMPUTER PROGRAMMING (COSC 1012)

Compiled by: Kidane W.


Rendering Process

 The rendering process in computer graphics refers to the sequence of operations used to generate an image from a scene, which typically consists of 2D or 3D objects.
 The goal is to convert 3D models into a 2D image using a virtual camera.
 The rendering process can significantly enhance realism by simulating realistic lighting, textures, and camera effects. However, achieving high levels of realism often comes at the cost of performance, necessitating optimizations to ensure real-time applications run smoothly.
Graphics Pipeline Vs. Rendering Process

| Feature | Graphics Pipeline | Rendering Process |
|---|---|---|
| Definition | GPU-centric flow of rendering data | Full system-level approach to rendering |
| Scope | Narrow: core stages of GPU rendering | Broad: includes asset management, draw calls |
| Typical Implementation | Fixed by APIs (OpenGL, Vulkan) | Customized by applications/engines |
| Focus | How data becomes pixels | How a scene becomes a frame |
| Involves GPU only? | Mostly yes | CPU + GPU coordination |
| Example | Vertex → Fragment → Framebuffer | Load model → Set camera → Submit → Draw |
Discussion Point

 How do you think the rendering process impacts the realism and performance of graphics applications? The rendering process significantly affects both the realism and performance of graphics applications.

REALISM:
• Lighting and Shading: Advanced techniques like global illumination, ray tracing, and ambient occlusion help simulate how light interacts with objects, creating realistic shadows, reflections, and refractions. These techniques mimic how light behaves in the real world, enhancing visual fidelity.
• Textures and Materials: The application of high-resolution textures and complex materials (e.g., using shaders to simulate skin, water, or metal) can contribute to a lifelike appearance. The realism of an object can depend on how accurately these materials reflect light and how they are affected by environmental conditions.
• Depth of Field and Motion Blur: Simulating camera effects like depth of field and motion blur can make the scene feel more immersive and realistic by emulating how cameras and human eyes perceive the world.

PERFORMANCE:
• Complexity of Algorithms: Advanced rendering techniques that produce realism often require more computational resources. For example, ray tracing, which calculates the path of light rays, is much more computationally expensive compared to rasterization. This can slow down performance, especially in real-time applications such as video games.
• Level of Detail (LOD): Optimizing performance often involves techniques like LOD, where less detailed models are rendered when objects are far away from the camera. This reduces the number of computations, improving performance while maintaining visual quality for close-up objects.
• Parallel Processing and Hardware Acceleration: The use of GPU acceleration, which allows for parallel processing of multiple rendering tasks, can help balance the tradeoff between performance and realism. However, not all devices may have access to such hardware, limiting performance in some cases.
Discussion Point

 Why is rendering considered both an artistic and computational process? Rendering is considered both artistic and computational because it blends creative expression with technical execution.

Artistic Aspects:
• Aesthetic Decisions: The choice of colors, textures, lighting, and camera angles is driven by the artistic vision of the creator. For example, lighting can dramatically change the mood of a scene, and how materials are rendered (e.g., glossiness, transparency) influences the overall look and feel.
• Storytelling: In movies, games, and virtual environments, rendering helps to convey a story or emotion. The use of rendering effects such as depth of field or motion blur can add to the narrative experience by mimicking real-world visual cues or drawing focus to specific elements.
• Style: Artists might choose to render a scene in a stylized manner (e.g., cel shading for a cartoon-like look, or highly abstract shapes), which impacts the overall visual impression, diverging from realistic representations for artistic expression.

Computational Aspects:
• Algorithmic Foundations: The process of rendering relies on complex algorithms such as rasterization, ray tracing, and path tracing, which involve heavy mathematical computations. These algorithms dictate how pixels are generated from 3D models based on factors like lighting, camera position, and material properties.
• Optimization: Ensuring the rendering process runs efficiently involves computational strategies like culling (removing objects not visible to the camera), LOD (adjusting the detail level of objects based on their distance), and parallel computation (using GPUs for faster processing). This balancing act is crucial for applications like real-time games, where computational resources are limited but the desire for visual fidelity remains high.
• Hardware Considerations: The ability to render effectively often depends on understanding hardware capabilities (e.g., GPU shaders, memory management) to make the most efficient use of available resources while achieving the desired artistic goals.
Reference Model

 A reference model refers to a conceptual framework or standardized structure that defines the stages and processes involved in rendering a scene from geometric data to a final image on the screen.
 It breaks down the complex process of generating images into distinct, well-defined stages, helping developers and hardware designers understand and implement the rendering pipeline efficiently.
Why Reference Model?

 A reference model in computer graphics defines a standardized framework for understanding and implementing the rendering pipeline.
 It outlines the main stages and data flow involved in transforming a 3D scene into a 2D image.

Importance in rendering:
• Clarifies Process Flow: The reference model acts as a blueprint, helping to break down the rendering process into manageable and logical stages.
• Promotes Modularity: Each stage can be independently developed and optimized, supporting modular software and hardware design.
• Ensures Consistency: By following a well-defined reference model, different systems (e.g., graphics APIs like OpenGL or Direct3D) produce consistent rendering behavior.
• Enables Optimization: Identifying where performance bottlenecks occur (e.g., geometry processing or pixel shading) becomes easier when the stages are clearly defined.
How does the reference model help both software developers and hardware designers in implementing rendering systems?

For Software Developers:
• Guides Implementation: It provides a roadmap for building rendering engines and graphics applications, with clearly defined input-output relations at each stage.
• API Design and Usage: Libraries and APIs (e.g., OpenGL, Vulkan) are structured based on the reference model, helping developers to use these tools effectively.
• Debugging and Optimization: Understanding the model helps in isolating bugs or performance issues in specific pipeline stages (e.g., vertex transformation, rasterization).
• Shader Programming: The reference model defines where shaders (vertex, fragment/pixel, geometry) operate, enabling precise control over rendering behavior.

For Hardware Designers:
• Architecture Planning: Helps design GPUs that mirror the logical structure of the rendering pipeline, dedicating specific units to vertex processing, rasterization, pixel shading, etc.
• Parallelization Opportunities: Clarifies independent operations that can be parallelized (e.g., per-vertex or per-pixel calculations), which is crucial for GPU performance.
• Resource Allocation: Informs decisions on memory usage, register allocation, and processing power distribution across pipeline stages.
Rendering Process Vs. Reference Model

| Feature | Rendering Process | Reference Model |
|---|---|---|
| Definition | Practical execution steps | Idealized theoretical behavior |
| Scope | Implementation-specific | Specification-defined (OpenGL spec) |
| Target | Developers and hardware vendors | API designers, testers |
| Speed / Optimization | Optimized for real-time performance | Not performance-focused |
| Real vs Ideal | Real execution on GPU/driver | Imagined perfect behavior |
| Usage | Game engines, simulations | API validation, documentation |


Related Questions?

1. Do modern GPUs strictly follow the reference model stages, or are there variations?
2. How could the reference model evolve to accommodate advancements in ray tracing and machine learning-based rendering techniques?
3. To what extent can the reference model be applied to non-traditional rendering systems, such as neural rendering or volumetric displays?
4. What challenges might arise when mapping the reference model to parallel or distributed rendering systems?
Application Program Interface (API)

An application program interface (API) is a standard collection of functions to perform a set of related operations. A graphics API is a set of functions that perform basic operations such as drawing images and 3D surfaces into windows on the screen.
Every graphics program needs to be able to use two related APIs:
1. a graphics API for visual output, and
2. a user-interface API to get input from the user.
The Open Graphics Library (OpenGL)

 The Open Graphics Library (OpenGL) is a graphics programming interface that has become very widespread in recent decades due to its open concept and platform independence.
 Drivers of common graphics processors and graphics cards for the major operating systems support OpenGL.
 The OpenGL programming interface was specified by the industry consortium Khronos Group and is maintained as an open standard.
The Open Graphics Library (OpenGL)

 OpenGL specifies only an interface. This interface is implemented by the manufacturer of the graphics hardware and is supplied by the graphics driver together with the hardware.
 OpenGL drivers are available for the common operating systems Windows, macOS and Linux.
 OpenGL for Embedded Systems (OpenGL ES) is a subset of (desktop) OpenGL and provides specific support for mobile phones and other embedded systems.
The Open Graphics Library (OpenGL)

 The Android mobile operating system, for example, offers OpenGL ES support. Furthermore, modern browsers support WebGL (Web Graphics Library), a specification for the hardware-supported display of three-dimensional graphics in web applications. (You can inspect your browser's GPU support at chrome://gpu/.)
 Graphics programming has a reputation for being among the most challenging computer science topics to learn.

WebGL (Web Graphics Library):
• Target: Browsers (Chrome, Firefox, Safari, Edge).
• Based On: OpenGL ES 2.0 (WebGL 1.0) and ES 3.0 (WebGL 2.0).
• Usage: Interactive 3D websites, online games, educational tools.
• Languages: JavaScript + GLSL.
• Runs in: HTML5 <canvas> using browser-provided GPU access.

OpenGL ES (OpenGL for Embedded Systems):
• Target: Mobile devices (Android, iOS), embedded systems (Raspberry Pi, smart TVs).
• Versions: ES 1.0 (fixed pipeline), ES 2.0+ (programmable pipeline).
• Usage: Mobile games, AR/VR apps, real-time 3D rendering.
Legacy vs. Modern OpenGL

 Use Legacy OpenGL for quick prototypes, educational demos, or classical systems.
 Use Modern OpenGL for serious, efficient, or future-proof applications.

Legacy (immediate mode):

```c
glBegin(GL_TRIANGLES);
glVertex3f( 0.0f,  1.0f, 0.0f);
glVertex3f(-1.0f, -1.0f, 0.0f);
glVertex3f( 1.0f, -1.0f, 0.0f);
glEnd();
```

Modern (buffer objects and shaders):

```c
float vertices[] = {
     0.0f,  1.0f, 0.0f,
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f
};

GLuint VBO, VAO;
glGenVertexArrays(1, &VAO);
glGenBuffers(1, &VBO);

glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);

// In render loop
// (shaderProgram is a compiled and linked GLSL program; its creation is not shown here)
glUseProgram(shaderProgram);
glBindVertexArray(VAO);
glDrawArrays(GL_TRIANGLES, 0, 3);
```
GRAPHICS PIPELINE
[Figure: Fixed-function pipeline (left) and programmable pipeline (right)]

Tessellation, in computer graphics, refers to the process of subdividing a geometric primitive (called a "patch" in OpenGL) into smaller, simpler primitives (like triangles or lines).

| Feature | Reference Model | Graphics Pipeline |
|---|---|---|
| Abstraction Level | High-level conceptual | Low-level implementation |
| Purpose | Understand rendering logic | Execute rendering tasks |
| API Dependency | None (theoretical) | API & GPU specific |
| Flexibility | General & universal | Follows strict hardware stages |
| Customization | Describes "what happens" | Defines "how it happens" (e.g. shaders) |

| Layer | Role |
|---|---|
| Application | Sends draw commands, data, and handles logic |
| OpenGL API | Interface to access rendering functionality |
| OpenGL Driver | Converts API calls into GPU instructions |
| GPU (Hardware) | Executes rendering tasks |
| Graphics Pipeline | Processes geometry to pixels |
OpenGL Architecture

[Figure: the OpenGL architecture block diagram; the annotations below describe its stages.]

• CPU: initiates the rendering process by issuing commands to the OpenGL pipeline. It communicates with the GPU to perform rendering tasks.
• Display List: stores precompiled OpenGL commands for reuse, improving performance by avoiding repetitive processing of static objects.
• Polynomial Evaluator: evaluates mathematical functions like curves and surfaces using polynomial equations. It processes data that represents curves or surfaces in an efficient way before passing it to the next stage.
• Per-Vertex Operations and Primitive Assembly: vertex operations involve transforming and lighting each vertex. This is where individual vertices are transformed into screen coordinates, and lighting calculations are applied. Primitive assembly organizes the transformed vertices into geometric shapes like triangles, lines, etc.
• Rasterization: converts the geometric primitives (like triangles) into fragments (potential pixels). It determines which pixels on the screen correspond to the primitives. During rasterization, texture mapping can be applied to fragments.
• Texture Memory: stores texture data (images that can be applied to the surface of 3D objects).
• Per-Fragment Operations: depth (Z-buffer), stencil, and blending operations.
• Framebuffer: stores the output image (the collection of pixels) that is displayed on the screen.
The Role of OpenGL in the Reference Model

The reference model in OpenGL (a common graphics API) defines how data flows from the CPU through the GPU to produce images. It includes stages like:
1. Vertex Transformation (converting 3D coordinates into 2D screen space – set of vertices),
2. Primitive Assembly (defining shapes like triangles),
3. Rasterization (converting shapes into pixels),
4. Fragment Shading (coloring pixels), and
5. Writing to the Frame Buffer (storing the final image for display).
| Vendor | OpenGL Support | Driver Stack (Platform) | Notes |
|---|---|---|---|
| NVIDIA | ✅ Excellent | NVIDIA Driver (Windows, Linux) | Best compatibility and performance; supports latest OpenGL versions. |
| AMD (Radeon) | ✅ Good | AMD Adrenalin (Windows), AMDGPU (Linux) | Sometimes slower driver updates or bugs in newer OpenGL features. |
| Intel | ✅ Moderate | Intel Graphics Driver (iGPU) | Good for integrated GPUs, but limited support for advanced features. |
| Apple | ⚠️ Deprecated | macOS OpenGL (up to 4.1 only) | Apple deprecated OpenGL in favor of Metal (post macOS Mojave). |
| Qualcomm (Adreno) | ✅ (ES only) | Android drivers | Supports OpenGL ES for mobile GPUs. |
| ARM (Mali) | ✅ (ES only) | Mali Driver (Android, Linux) | Widely used in mobile SoCs, OpenGL ES only. |
| Imagination (PowerVR) | ✅ (ES only) | Used in some embedded/mobile devices | Focuses on OpenGL ES, often in constrained environments. |
OpenGL Generations

| Generation | Key Change | Example Feature |
|---|---|---|
| OpenGL 1.x | Fixed-function pipeline | glBegin(), glVertex3f() |
| OpenGL 2.x | Introduced shaders | GLSL, glShaderSource() |
| OpenGL 3.x | Modern pipeline, deprecated legacy | VAO, VBO, UBO |
| OpenGL 4.x | Advanced GPU & compute features | Tessellation, compute shaders |
| OpenGL ES | Embedded devices | Mobile-focused OpenGL subset |
| WebGL | JavaScript/Web browser rendering | Canvas 3D via OpenGL ES |


Coordinate Systems

 Coordinate systems are crucial for defining positions, transformations, and orientations of objects in a 2D or 3D space.
 3D space is generally represented with three axes: X, Y, and Z. The three axes can be arranged into two configurations, right-handed or left-handed. For example, the majority of coordinate systems in OpenGL are right-handed, whereas in Direct3D the majority are left-handed.
OpenGL is a right-handed system

 By convention, OpenGL is a right-handed system. What this basically says is that the positive x-axis is to your right, the positive y-axis is up, and the positive z-axis is backwards.
 Think of your screen being the center of the 3 axes and the positive z-axis going through your screen towards you. The axes are drawn as follows.
To understand why it's called right-handed, do the following:
1. Stretch your right arm along the positive y-axis with your hand up top.
2. Let your thumb point to the right.
3. Let your pointing finger point up.
4. Now bend the other three fingers downwards 90 degrees.
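The handedness convention can also be checked numerically: in a right-handed system, the cross product of the +X and +Y unit vectors yields +Z. A minimal sketch in plain C (not from the slides) illustrating this:

```c
#include <stdio.h>

/* Cross product of two 3D vectors: r = a x b. */
void cross(const float a[3], const float b[3], float r[3])
{
    r[0] = a[1] * b[2] - a[2] * b[1];
    r[1] = a[2] * b[0] - a[0] * b[2];
    r[2] = a[0] * b[1] - a[1] * b[0];
}

int main(void)
{
    float x[3] = {1.0f, 0.0f, 0.0f};  /* +X axis */
    float y[3] = {0.0f, 1.0f, 0.0f};  /* +Y axis */
    float z[3];
    cross(x, y, z);
    /* In a right-handed system this prints (0, 0, 1), i.e. +Z. */
    printf("X x Y = (%g, %g, %g)\n", z[0], z[1], z[2]);
    return 0;
}
```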
Coordinate Systems

 To map the real world to a virtual scene or to interact with objects efficiently, several coordinate systems are used:
1. Object or Local Coordinate System (OCS) - Local space of an object.
2. World Coordinate System (WCS) - Global scene space.
3. View or Camera Coordinate System (VCS) - Camera's viewpoint.
4. Clip Coordinate System - After projection and before clipping.
5. Normalized Device Coordinates (NDC) - Standardized space [-1, 1] for rendering.
6. Window or Screen Coordinate System - Pixel space for display rendering.
7. Eye Space or Camera Space - Space relative to the camera for lighting.
8. Texture Coordinate System - Mapping 2D textures to 3D surfaces using u, v values.

 Why? When modifying your object it makes most sense to do this in local space, while calculating certain operations on the object with respect to the position of other objects makes most sense in world coordinates, and so on.
Real World Analogy
[Figure]
Coordinate systems compared (description, comparison attributes, example):

Object/Local Coordinate System (OCS)
• Description: Defines the local space of an object, independent of other objects in the scene. Position and orientation are relative to the object's own origin.
• Attributes: Position relative to the object's local origin; transformations only affect the object; used for modeling individual objects.
• Example: A chair has its own local coordinates, where the seat might be centered at (0, 0, 0). Any modifications, like rotation or scaling, are applied locally.

World Coordinate System (WCS)
• Description: Represents the entire 3D scene as seen from a global perspective. Objects are placed and transformed relative to a global origin (0, 0, 0).
• Attributes: Position relative to the global origin; transformations apply to all objects in the world; used for placing objects in a scene.
• Example: A city model where buildings, cars, and roads are positioned based on the overall scene, such as placing a building at (50, 20, 0) relative to the global origin.

View/Camera Coordinate System (VCS)
• Description: The space relative to the camera or viewer. Defines how objects are transformed based on the camera's position and orientation.
• Attributes: Objects are transformed based on camera placement; camera-centric transformations; used for rendering scenes from the camera's perspective.
• Example: A first-person camera view in a game, where the camera is at the origin and all objects are viewed from this point, facing along the negative Z-axis.

Clip Coordinate System
• Description: The system after applying projection transformations. Coordinates are in a range before being clipped by the viewing frustum.
• Attributes: Position affected by projection (perspective or orthographic); pre-clipping, defines visible regions; used for clipping unnecessary parts.
• Example: A scene where only objects within the view frustum are rendered, and objects outside this range are clipped off and not drawn.

Normalized Device Coordinates (NDC)
• Description: A normalized space where coordinates are in a range of [-1, 1] for all axes. Helps standardize object placement for rendering regardless of device specifications.
• Attributes: Position normalized between [-1, 1]; post-projection; maps visible parts to a standardized unit cube for rendering.
• Example: After projection, an object might have coordinates (0.3, 0.2, 0.8) in NDC, meaning it's within the visible range of the screen and ready for rasterization.

Window/Screen Coordinate System
• Description: The final coordinate system used for rendering. Coordinates are mapped to pixel positions on the physical screen, such as in a 1920x1080 resolution.
• Attributes: Position mapped to pixel space; transformation from NDC to pixel coordinates; used for rendering on a specific display or screen resolution.
• Example: A 3D object's (0.0, 0.0) position in NDC maps to (960, 540) in screen coordinates, placing it at the center of a 1920x1080 display.

Eye Space or Camera Space
• Description: A space where all coordinates are transformed relative to the camera's position. Used in lighting and shading calculations.
• Attributes: Position relative to the camera's origin; used for lighting/shading; calculations of lighting effects like specular or diffuse.
• Example: In a 3D scene, lighting calculations are done in eye space to determine how light reflects based on the object's position relative to the camera (eye).

Texture Coordinate System
• Description: A 2D system used to map textures to objects. Coordinates are typically expressed in terms of u and v.
• Attributes: Position relative to the texture; applied to textures before rendering; used for texture mapping on objects (2D to 3D).
• Example: A 512x512 texture is mapped to a cube. The texture coordinates (0, 0) might represent the bottom-left corner, and (1, 1) the top-right corner of the texture.
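The NDC-to-window mapping in the example above follows the standard viewport transform. A small worked sketch in plain C (not from the slides; the 1920x1080 resolution is the example's assumption):

```c
#include <stdio.h>

/* Standard viewport transform: NDC in [-1, 1] -> window pixels. */
void ndc_to_window(float xn, float yn, int width, int height,
                   float *xw, float *yw)
{
    *xw = (xn + 1.0f) * 0.5f * (float)width;
    *yw = (yn + 1.0f) * 0.5f * (float)height;
}

int main(void)
{
    float xw, yw;
    ndc_to_window(0.0f, 0.0f, 1920, 1080, &xw, &yw);
    printf("(%g, %g)\n", xw, yw);  /* prints (960, 540): the screen center */
    return 0;
}
```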
Coordinate Systems and Transformations

Steps in Forming an Image:
1. Specify geometry (world coordinates)
2. Specify camera (camera coordinates)
3. Project (window coordinates)
4. Map to viewport (screen coordinates)

Each step uses transformations, and every transformation is equivalent to a change in coordinate systems. A sketch of these steps in fixed-function OpenGL follows.
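A minimal sketch of how these four steps map onto fixed-function OpenGL/GLUT calls (the camera position and 60-degree field of view are illustrative assumptions, not from the slides):

```c
#include <GL/glut.h>

/* Called when the window is created or resized. */
void reshape(int w, int h)
{
    /* Step 4: map NDC to the viewport (screen coordinates). */
    glViewport(0, 0, w, h);

    /* Step 3: projection (camera space -> clip space). */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, (double)w / (double)h, 0.1, 100.0);

    /* Step 2: specify the camera (world space -> camera space). */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,   /* eye position */
              0.0, 0.0, 0.0,   /* look-at point */
              0.0, 1.0, 0.0);  /* up vector */

    /* Step 1 (specify geometry in world coordinates) happens in the
       display callback via glVertex* calls or display lists. */
}
```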
Synthetic Camera

 A synthetic camera is a virtual representation of a camera used in computer graphics to simulate the way a physical camera captures a scene.
 It plays a crucial role in rendering, determining what part of the 3D world will be projected onto the 2D screen, similar to how a real camera captures images from the physical world.
 In a video game, a synthetic camera determines what the player sees on the screen. For example, in a first-person shooter, the camera is placed at the player's eye level and moves as the player moves through the game world.
| Attribute | Synthetic Camera | Real Camera |
|---|---|---|
| Position and Orientation | Defined programmatically | Defined by physical location and angle |
| Field of View | Adjustable via code | Limited by the camera lens |
| Projection Type | Can be set to perspective or orthographic | Always perspective |
| Clipping Planes | Set programmatically | Determined by focus and lens quality |
Key Components of a Synthetic Camera:

 Position: The location of the camera in the 3D scene (usually defined in the World Coordinate System).
 Orientation: The direction the camera is facing, which is often controlled by parameters like pitch, yaw, and roll.
 Field of View (FOV): The angle that defines how wide or narrow the view from the camera is. A larger FOV captures more of the scene, while a smaller FOV creates a zoomed-in effect.
Key Components of a Synthetic Camera:

 Near and Far Clipping Planes: Define the range within which objects are rendered. Anything closer than the near plane or farther than the far plane is not rendered.
 Aspect Ratio: The width-to-height ratio of the camera's view, which affects how the image is stretched or compressed horizontally or vertically.
 Projection Type (see the sketch below):
  Perspective Projection: Objects farther from the camera appear smaller, simulating depth perception, as in real-world cameras.
  Orthographic Projection: Objects are the same size regardless of their distance from the camera, often used for technical or engineering drawings where scale consistency is important.
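A minimal sketch contrasting the two projection types in fixed-function OpenGL (the numeric values are illustrative assumptions):

```c
#include <GL/glut.h>

void setPerspective(int w, int h)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* 60-degree vertical FOV, near plane 0.1, far plane 100.
       Distant objects will appear smaller. */
    gluPerspective(60.0, (double)w / (double)h, 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);
}

void setOrthographic(int w, int h)
{
    double halfH = 2.0 * (double)h / (double)w;  /* keep the aspect ratio */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* A box-shaped view volume; object size is independent of distance. */
    glOrtho(-2.0, 2.0,        /* left, right */
            -halfH, halfH,    /* bottom, top */
            0.1, 100.0);      /* near, far */
    glMatrixMode(GL_MODELVIEW);
}
```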
Projection Type
[Figures illustrating perspective and orthographic projection.]
OpenGL Camera

 OpenGL includes a camera that is permanently fixed at the origin (0, 0, 0) and faces down the negative Z-axis, as shown here.
 In order to use the OpenGL camera, one of the things we need to do is simulate moving it to some desired location and orientation.
 Let's say you want the camera to:
• Be located at (x, y, z),
• Look at a point (cx, cy, cz),
• Have an "up" direction (e.g., the Y-axis: (0, 1, 0)).

```c
gluLookAt(
    0.0, 0.0, 5.0,   // Eye/camera position
    0.0, 0.0, 0.0,   // Center the camera is looking at
    0.0, 1.0, 0.0    // Up vector (defines camera's tilt)
);
```
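Note that gluLookAt multiplies the current matrix rather than replacing it, so it is typically preceded by glMatrixMode(GL_MODELVIEW) and glLoadIdentity() at the start of each frame.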
Related APIs

 AGL (Apple GL - macOS and iOS), GLX (Linux-X), WGL (Windows)
  Platform-specific interfaces or libraries used for creating and managing OpenGL contexts - the glue between OpenGL and windowing systems.
 GLU (OpenGL Utility Library)
  Part of OpenGL - tessellators, quadric shapes, etc.
 GLUT (OpenGL Utility Toolkit)
  Portable windowing API; not officially part of OpenGL.

OpenGL and Related APIs
[Figure]
| Attribute | AGL | GLX | WGL |
|---|---|---|---|
| Platform | macOS (legacy) | Linux (with X Window System) | Windows |
| Primary Purpose | OpenGL context creation for macOS | OpenGL context creation for Linux (X11) | OpenGL context creation for Windows |
| Status | Deprecated (replaced by Metal) | Actively used in Linux (X11-based) | Actively used on Windows |
| Integration | macOS windowing and display system | X Window System | Windows GDI (Graphics Device Interface) |
| Example Use Case | macOS 3D graphics in older apps | Rendering 3D content in Linux apps | Windows OpenGL games and applications |
Preliminaries

 #include <GL/glut.h>: includes the GLUT library, which provides functions for creating OpenGL windows, handling events, and rendering graphics.
 OpenGL defines numerous types for compatibility - GLfloat, GLint, GLenum, etc.
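A few lines showing these types in use (illustrative, not from the slides):

```c
#include <GL/glut.h>

// OpenGL typedefs keep sizes consistent across platforms and compilers:
GLfloat angle    = 45.0f;        // floating-point value (typically 32-bit)
GLint   winWidth = 640;          // signed integer
GLenum  primType = GL_TRIANGLES; // symbolic constant selecting a primitive
```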
GLUT Basics

 Application Structure (a skeleton follows below):
  Configure and open window
  Initialize OpenGL state
  Register input callback functions
   render
   resize
   input: keyboard, mouse, etc.
  Enter event processing loop
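A minimal sketch of that structure with render, resize, and keyboard callbacks (the callback bodies are illustrative placeholders):

```c
#include <GL/glut.h>
#include <stdlib.h>

void display(void)                       /* render callback */
{
    glClear(GL_COLOR_BUFFER_BIT);
    /* ... drawing code goes here ... */
    glFlush();
}

void reshape(int w, int h)               /* resize callback */
{
    glViewport(0, 0, w, h);
}

void keyboard(unsigned char key, int x, int y)  /* input callback */
{
    if (key == 27) exit(0);              /* ESC quits */
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);                       /* configure... */
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(640, 480);
    glutCreateWindow("GLUT Skeleton");           /* ...and open window */
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);        /* initialize OpenGL state */
    glutDisplayFunc(display);                    /* register callbacks */
    glutReshapeFunc(reshape);
    glutKeyboardFunc(keyboard);
    glutMainLoop();                              /* enter event loop */
    return 0;
}
```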
Sample Program

```c
#include <GL/glut.h>

void display()
{
    glClear(GL_COLOR_BUFFER_BIT);     // Clear the color buffer
    glColor3f(1.0f, 0.0f, 0.0f);      // Set color to red
    glBegin(GL_TRIANGLES);
        glVertex2f(-0.5f, -0.5f);
        glVertex2f(0.5f, -0.5f);
        glVertex2f(0.0f, 0.5f);
    glEnd();
    glFlush();                        // Render the scene
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(640, 480);
    glutCreateWindow("OpenGL Triangle");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```
Code Description

 glClear(GL_COLOR_BUFFER_BIT): clears the color buffer to a black background.
 glColor3f(1.0f, 0.0f, 0.0f): sets the current color to red.
 glBegin(GL_TRIANGLES): starts drawing a triangle.
 glVertex2f(-0.5f, -0.5f), glVertex2f(0.5f, -0.5f), glVertex2f(0.0f, 0.5f): define the vertices of the triangle.
 glEnd(): ends drawing the triangle.
 glFlush(): forces the rendering of the scene.
Code Description

 glutInit(&argc, argv): initializes GLUT.
 glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB): sets the display mode to single buffering and RGB color.
 glutInitWindowSize(640, 480): sets the window size to 640x480 pixels.
 glutCreateWindow("OpenGL Triangle"): creates an OpenGL window with the title "OpenGL Triangle".
Code Description

 glutDisplayFunc(display): sets the display callback function to display().
 glutMainLoop(): starts the GLUT event loop, which handles window events and calls the display() function when necessary.
Setting up the Environment

 Visual Studio 2022:
  Download and install from the official website (https://visualstudio.microsoft.com/).
  Run the installer and select the "Desktop development with C++" workload.
  Click "Install" and wait for the installation to complete.
 GLUT (OpenGL Utility Toolkit): Download the GLUT library.
  Go to the GLUT website (https://www.opengl.org/resources/libraries/glut/glutdlls37beta.zip) and download the glutdlls37beta.zip file.
Copy GLUT Files

 After downloading and extracting GLUT, you need to copy the files to the appropriate directories as instructed below:
1. Copy glut.h:
  Navigate to the include folder in your GLUT download.
  Copy glut.h to:

  Note: Please create a folder named "GL" inside the include folder if it does not already exist, and copy glut.h to this folder.
2. Copy glut.lib, glut32.lib:
  Navigate to the lib folder in your GLUT download.
  Copy glut.lib to:

  Copy glut32.lib to:

3. Copy glut.dll and glut32.dll:
  Navigate to the bin folder in your GLUT download.
  Copy glut.dll and glut32.dll to:

  SysWOW64 is a folder that is present exclusively on 64-bit operating systems. It's located under C:\Windows, and it contains the 32-bit file components and resources that the operating system needs. When we compare the SysWOW64 and System32 folders, they have almost the same structure. That's because SysWOW64 is similar to a 32-bit System32 on a 64-bit operating system.
4. Copy glut32.dll:
  From the same bin folder, copy glut32.dll to:

  A .dll file is a dynamic link library, which is a collection of code and data that other programs can use to perform tasks.
Configure Visual Studio Project
Create a New Project:

1. Open Visual Studio 2022.
2. Click on "Create a new project".
3. Select "Empty Project" under the "C++" category.
4. Name your project and click "Create".
Link the Libraries:

 Right-click on Project > Properties.
 Under "Configuration Properties", go to "Linker" > "Input".
 Add the required libraries to the "Additional Dependencies" field (for a classic GLUT setup these are typically opengl32.lib, glu32.lib, and glut32.lib).
Add Source Code

 Right-click on the "Source Files" folder in the Solution Explorer.
 Select "Add" > "New Item…".
 Choose "C++ File (.cpp)" and name it main.cpp.
 Write your OpenGL code in main.cpp.
 BUILD: Go to "Build" > "Build Solution" or press Ctrl+Shift+B.
 RUN: Go to "Debug" > "Start Without Debugging" or press Ctrl+F5.
Troubleshooting

 Linker Errors: Ensure the library paths are correctly set in the project properties.
 Runtime Errors: Ensure the GLUT DLLs (glut32.dll) are set up properly in the system path.

Congratulations! You've successfully set up OpenGL with GLUT in Visual Studio 2022.
Geometric Primitives

 Geometric primitives like triangles, squares, and circles are the building blocks of more complex shapes and objects in OpenGL.
 Understanding how to draw and manipulate these basic shapes is a crucial step in learning OpenGL.
OpenGL Command Formats
[Figure.] OpenGL command names encode their argument count and type in a suffix: for example, glVertex3f takes three GLfloat arguments, and the trailing v in glVertex3fv indicates a vector (pointer) form.
OpenGL Geometric Primitives

 All geometric primitives are specified by vertices.
 A vertex list defines the border of the "geometric object", which we call a primitive: triangles, lines, points, etc.
 A vertex is simply a point in geometry.
Specifying Geometric Primitives

 Primitives are specified using glBegin(primType); ... glEnd();
 primType determines how vertices are combined:

```c
GLfloat red, green, blue;
GLfloat coords[3];

glBegin(primType);
for (i = 0; i < nVerts; ++i) {
    glColor3f(red, green, blue);
    glVertex3fv(coords);
}
glEnd();
```

Example:

```c
void drawRhombus(GLfloat color[])
{
    glBegin(GL_QUADS);
    glColor3fv(color);
    glVertex2f(0.0, 0.0);
    glVertex2f(1.0, 0.0);
    glVertex2f(1.5, 1.118);
    glVertex2f(0.5, 1.118);
    glEnd();
}
```
Initializing OpenGL

This code sets up a simple window using GLUT and prepares OpenGL for rendering. The display function will be where we place our drawing code.
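The slide shows this code as an image; a minimal sketch consistent with that description might look like the following (the window title, clear color, and 2D projection are illustrative assumptions):

```c
#include <GL/glut.h>

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);  // drawing code will go here
    glFlush();
}

void init(void)
{
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);   // white background
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(-1.0, 1.0, -1.0, 1.0);       // simple 2D coordinate system
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(640, 480);
    glutCreateWindow("Initializing OpenGL");
    init();
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```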
Drawing a Triangle

 Now that we have a window, let's draw a triangle. In the fixed-function pipeline, drawing is straightforward and doesn't require shaders. We'll specify the vertices directly using glBegin and glEnd.
 To draw a square, we'll use the GL_QUADS primitive, which allows us to define a quadrilateral using four vertices (see the sketch below).
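The slides show these examples as images; a minimal sketch of the square's display function, assuming the window setup from the earlier sample program:

```c
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(0.0f, 0.0f, 1.0f);     // blue square
    glBegin(GL_QUADS);               // four vertices define one quad
        glVertex2f(-0.5f, -0.5f);    // bottom-left
        glVertex2f( 0.5f, -0.5f);    // bottom-right
        glVertex2f( 0.5f,  0.5f);    // top-right
        glVertex2f(-0.5f,  0.5f);    // top-left
    glEnd();
    glFlush();
}
```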
Primitive Attributes

Attributes determine how an output primitive appears when rendered on the screen. These include properties like color, width, pattern, and shading.

| Purpose | Function Example |
|---|---|
| Position | glVertex3f(x, y, z) |
| Color | glColor3f(r, g, b) |
| Texture Coord | glTexCoord2f(u, v) |
| Normal Vector | glNormal3f(x, y, z) |
| Point Size | glPointSize(size) |
| Line Width | glLineWidth(width) |
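A short sketch of a display callback showing the point-size and line-width attributes in use (the specific values and colors are illustrative):

```c
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);

    glPointSize(8.0f);               // draw large points
    glBegin(GL_POINTS);
        glColor3f(1.0f, 0.0f, 0.0f); // red point
        glVertex2f(-0.5f, 0.5f);
        glColor3f(0.0f, 1.0f, 0.0f); // green point
        glVertex2f(0.5f, 0.5f);
    glEnd();

    glLineWidth(3.0f);               // thick line (set outside glBegin/glEnd)
    glBegin(GL_LINES);
        glColor3f(0.0f, 0.0f, 1.0f); // blue line
        glVertex2f(-0.5f, -0.5f);
        glVertex2f(0.5f, -0.5f);
    glEnd();

    glFlush();
}
```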
| Feature | OpenGL (Desktop) | OpenGL ES (Mobile) | WebGL (Browser) |
|---|---|---|---|
| Platform | Windows/Linux/macOS | Android/iOS/RPi | All major browsers |
| API Level | Full | Stripped-down | ES-like via JS |
| Shader Language | GLSL | GLSL ES | GLSL ES (in JS) |
| Performance | High | Optimized for mobile | Varies by browser |
| Use Cases | Games, CAD | Mobile apps, AR | 3D Web, eLearning |

| Platform | Language | API Used |
|---|---|---|
| Web | JavaScript | WebGL |
| Desktop | C++ | OpenGL 3.3 |
| Mobile | Java/Android | OpenGL ES 2.0 |

End of Chapter 3
