
Chapter

Photogrammetry as an
Engineering Design Tool
Ana Pilar Valerga Puerta, Rocio Aletheia Jimenez-Rodriguez,
Sergio Fernandez-Vidal and Severo Raul Fernandez-Vidal

Abstract

Photogrammetry is a technique for studying and precisely defining the shape, dimensions, and position in space of any object, using mainly measurements taken from one or more photographs of that object. Today, photogrammetry is a popular science due to its ease of application, low cost, and good results. For these reasons, it is becoming a good alternative to 3D scanning. This has led to its adoption in sectors such as archeology, architecture, and topography, for applications in element reconstruction, cartography, and biomechanics. This chapter presents the fundamental aspects of this technology, as well as its great possibilities of application in the engineering field.

Keywords: 3D scan, reverse engineering, 3D design, point cloud, CAD, virtual model, 3D reconstruction, virtual assembly, augmented reality, virtual reality

1. Reverse engineering

Reverse engineering is based on the study of the underlying principles and information of a product. Its main function is to obtain the maximum information about an element or device, including its geometry and appearance, among other things [1, 2]. It first appeared around World War II, in military operations.
The field of application of this type of engineering is very wide. It notably includes 3D digitalization, used mainly for the research, analysis, and understanding of technology used by other companies; for the development of elements without making use of specific information (redesign); and for inspection or virtual metrology tasks on products in almost every industry [3].
The main 3D digitization technologies are shown in Figure 1, among which
photogrammetry stands out for its ease of use and low cost.

2. Photogrammetry

Photogrammetry is distinguished by taking measurements on photographs, allowing the real dimensions, position, shape, and textures of any object to be obtained [4, 5].
These processes, or this science, emerged in the middle of the nineteenth century, being as old as photography itself. The first photogrammetric device and the first methodology were created in 1849 by the Frenchman Aimé Laussedat. He, "the father of photogrammetry," used terrestrial photographs and compiled a topographic map. This method was known as iconometry, which means the art of finding the size of an object by measuring its image. Digital photogrammetry was born in the 1980s, with the use of digital images as a primary data source as its great innovation [6, 7].

Figure 1.
Classification of 3D scanning technologies.
The main phases of digital photogrammetry are analysis of the shape of the
object and planning of the photos needed to be taken; calibration of the camera;
image processing with specific software to generate a cloud of points; and transfer
of this point cloud to the CAD software to create a 3D model. The accuracy of the reconstruction depends on the quality of the images and textures. Photogrammetry algorithms typically pose the problem as the minimization of the sum of the squares of a set of reprojection errors, a procedure known as bundle adjustment [8]. Structure-from-motion (SfM) algorithms can find a set of 3D points (P), a rotation (R), and a camera position (t), given a set of images of a static scene with 2D points in correspondence, as shown in Figure 2 [10].

Figure 2.
Structure-from-motion algorithm [9].
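To make the bundle-adjustment idea concrete, the following Python sketch (illustrative names, not from the original text; it assumes a single camera with known focal length and optimizes only its pose) minimizes the sum of squared reprojection errors with SciPy. A full bundle adjustment would jointly refine the 3D points and all camera poses, but the structure of the residual function is the same:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points3d, rvec, tvec, f):
    """Pinhole projection of 3D points for a camera with pose (rvec, tvec)."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    cam = points3d @ R.T + tvec          # points in the camera frame
    return f * cam[:, :2] / cam[:, 2:3]  # perspective division

def residuals(params, points3d, observed2d, f):
    """Reprojection errors to be minimized; params = [rvec | tvec]."""
    return (project(points3d, params[:3], params[3:], f) - observed2d).ravel()

# Synthetic scene: 20 points about 5 m away, observed with a known pose.
rng = np.random.default_rng(0)
points3d = rng.uniform(-1, 1, (20, 3)) + [0.0, 0.0, 5.0]
true_pose = np.array([0.10, -0.05, 0.02, 0.30, -0.20, 0.10])
observed = project(points3d, true_pose[:3], true_pose[3:], f=800.0)

fit = least_squares(residuals, x0=np.zeros(6), args=(points3d, observed, 800.0))
print(np.round(fit.x, 4))  # recovers true_pose up to numerical precision
```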


Photogrammetric technology is generally based on illuminating an object and deriving solutions from the measurement of conjugate points appearing in two photographic images, or from measuring conjugate points in multiple photographic images (three or more). There are different photogrammetric techniques. One of them is to ensure that the surface of the object has enough light and optical texture to allow conjugate points to be matched across two or more images. In some cases, optical texture can be achieved by projecting a pattern onto the surface of the object at the time of image capture [11–13].

3. Fundamentals of photogrammetry

The basic mathematical equations underlying photogrammetry, called collinearity equations, relate the coordinate system of the image in the camera to that of the object being photographed [14] (Eqs. (1)–(3)):

$$\begin{pmatrix} x_n - x_0 \\ y_n - y_0 \\ -c \end{pmatrix} = \lambda M \begin{pmatrix} X_n - X_0 \\ Y_n - Y_0 \\ Z_n - Z_0 \end{pmatrix} \quad (1)$$

where λ = scaling factor; M = rotation matrix; X0, Y0, and Z0 = the position of the perspective center in object space; c = the principal distance; and pn = (xn, yn)T and Pn = (Xn, Yn, Zn)T = the coordinates of target n in the image plane and in object space, respectively. Manipulated algebraically, the above equation produces the well-known collinearity equations, which relate the location of the nth target in object space to the corresponding point in the image plane:

$$x_n - x_0 = -c\,\frac{m_{11}(X_n - X_0) + m_{12}(Y_n - Y_0) + m_{13}(Z_n - Z_0)}{m_{31}(X_n - X_0) + m_{32}(Y_n - Y_0) + m_{33}(Z_n - Z_0)} \quad (2)$$

$$y_n - y_0 = -c\,\frac{m_{21}(X_n - X_0) + m_{22}(Y_n - Y_0) + m_{23}(Z_n - Z_0)}{m_{31}(X_n - X_0) + m_{32}(Y_n - Y_0) + m_{33}(Z_n - Z_0)} \quad (3)$$

where mij (i, j = 1, 2, 3) are the elements of the rotation matrix M, which are functions of the Euler orientation angles (ω, φ, κ), the rotation angles of the camera about the three coordinate axes in object space (Eqs. (4)–(12)):

$$m_{11} = \cos\varphi\cos\kappa \quad (4)$$
$$m_{12} = \sin\omega\sin\varphi\cos\kappa + \cos\omega\sin\kappa \quad (5)$$
$$m_{13} = -\cos\omega\sin\varphi\cos\kappa + \sin\omega\sin\kappa \quad (6)$$
$$m_{21} = -\cos\varphi\sin\kappa \quad (7)$$
$$m_{22} = -\sin\omega\sin\varphi\sin\kappa + \cos\omega\cos\kappa \quad (8)$$
$$m_{23} = \cos\omega\sin\varphi\sin\kappa + \sin\omega\cos\kappa \quad (9)$$
$$m_{31} = \sin\varphi \quad (10)$$
$$m_{32} = -\sin\omega\cos\varphi \quad (11)$$
$$m_{33} = \cos\omega\cos\varphi \quad (12)$$

The plane of the image can be transformed analytically into its X, Y, and Z coordinates in global space. Photogrammetry is effective and computationally simple. It should be noted that its algorithm is based on definitions of both interior and exterior orientations. In a photographic system, if the internal parameters of a camera are known, any spatial point can be fixed by the intersection of two beams of light that are projected.
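As a sketch of how Eqs. (1)–(12) are applied in practice, the following Python fragment (illustrative names and example values; the principal point defaults to the image center at (0, 0)) builds the rotation matrix M from the angles (ω, φ, κ) and projects an object-space point onto the image plane:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix M with the elements of Eqs. (4)-(12)."""
    so, co = np.sin(omega), np.cos(omega)
    sp, cp = np.sin(phi), np.cos(phi)
    sk, ck = np.sin(kappa), np.cos(kappa)
    return np.array([
        [cp * ck,  so * sp * ck + co * sk, -co * sp * ck + so * sk],
        [-cp * sk, -so * sp * sk + co * ck, co * sp * sk + so * ck],
        [sp,       -so * cp,                co * cp],
    ])

def collinearity_project(P, X0, angles, c, x0=0.0, y0=0.0):
    """Image coordinates (x_n, y_n) of object point P per Eqs. (2)-(3)."""
    d = rotation_matrix(*angles) @ (P - X0)
    return (x0 - c * d[0] / d[2], y0 - c * d[1] / d[2])

# Camera at the origin, no rotation, 50 mm principal distance,
# object point 10 m in front of the camera.
print(collinearity_project(np.array([1.0, 2.0, 10.0]),
                           np.zeros(3), (0.0, 0.0, 0.0), c=0.05))
```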
There are two main factors that induce photogrammetry measurement errors: systematic error due to lens distortion and random error due to human factors.

1. Systematic error due to lens distortion. It causes a point in the image plane to move from its true position (x, y) to a disturbed position. The coordinates of any point in the image can be compensated with Eqs. (13)–(14):

$$x'_n = x_n + dx \quad (13)$$

$$y'_n = y_n + dy \quad (14)$$

In the lens, the largest error occurs toward the edges of the projected image. The corrections dx and dy can be broken down as in Eqs. (15)–(16), where the subscripts r and d typically denote the radial and decentering distortion components:

$$dx = dx_r + dx_d \quad (15)$$

$$dy = dy_r + dy_d \quad (16)$$

2. Random error due to human factors. Theoretically, capturing a point in two different photos is enough to establish its 3D coordinates. This step requires identifying and marking the point in both images. Any human can make mistakes in marking the points, giving rise to random error.

4. Evolution from analytical to digital photogrammetry

Starting from analytical photogrammetry, it is possible to describe the evolution toward digital photogrammetry on the basis of physical and mathematical principles. The main distinction lies in the nature of the information measured in the images [15].
Analytical photogrammetry evaluates measured image coordinates, whereas digital photogrammetry evaluates the gray values of the digital image. In both methods, appropriate Gauss-Markov estimation procedures are used, and pertinent relations between object-space models and image-space data are obtainable. Radiometric concerns take a more important role than previously. The evaluation of the gray values of the digital image is no longer based on digital image correlation. As an alternative, the gray values of an image are projected directly onto the models in object space, which is a new principle. However, these numerical procedures in digital photogrammetry need to be stabilized by adjustment methods. Thus, the original concept of digital photogrammetry can be applied to images from any sensor.
Considerable advances in digital photogrammetry have been made in recent
years due to the availability of new hardware and software, such as image
processing workstations and increased storage capacity [16, 17].

5. Device and acquisition characteristics

The main camera and photography parameters are the focal length, principal point, skew, distortion, and pixel error; knowing them allows a more accurate calibration [18]. They are shown in Figure 3.


Figure 3.
Scheme of operation of a camera objective.

5.1 Camera objective

Included in the optical part of the camera, the objective is in charge of projecting the image that crosses it onto a single plane with the best possible sharpness. It is therefore a matter of focusing objects at equal distances onto the focal plane; beyond a certain distance, all objects are projected onto the same plane. Each light point of the scene is transmitted as an element of the image. As a result of diffraction, it appears as a circular spot with a halo of concentric rings around it, called an Airy disc. Suppressing these rings is unfeasible because they are a physical effect of light. Even so, it is desirable for them to be as faint and thin as possible [17, 19].
The resolving capacity of the objective depends on two parameters: aberrations and diffraction. One of the main functions of the objective is to suppress aberrations. When the diaphragm is closed down, the aberrations are reduced, and the only limiting factor is diffraction. When the diaphragm is opened, diffraction loses significance relative to the aberrations, which gain strength [20].

5.2 Focal length

This parameter is measured from the optical center of the lens to the focal plane when the camera is focused at infinity [5, 21]. Normal lenses are those whose focal length is close to the diagonal of the negative or sensor. The representation of the focal length is shown in Figure 4.

5.3 Relative aperture

Relative aperture (Ab) is the ratio between the lens diameter (D) and its focal length (f) (Eq. (17)):

$$A_b = \frac{D}{f} \quad (17)$$

It is expressed by the denominator, known as the brightness or "f-number." In other words, the aperture is the opening through which light enters to be captured by the sensor: the smaller the f-number, the wider the opening and the more light reaches the sensor [4, 7].

5.4 Field angle

This is the viewing angle of the camera and is closely related to the focal length
and dimension of the sensor [8, 22]. A schematic representation is proposed in
Figure 5.

5.5 Shutter

It is a mechanism that blocks the light passing through the lens from entering the camera. It can open for set intervals of time, allowing light through so that the film or sensor is exposed. The opening time can be adjusted [21].

5.6 Focus depth

Depth of focus refers to the tolerance between obtaining a sharp image with a suitable exposure and a less adequate exposure that still produces a sharp image. Depth of focus is altered by lens magnification and numerical aperture; in some circumstances, large-aperture systems have more pronounced depths of focus than low-aperture systems, even if the depth of field is small [19].

Figure 4.
Representation and focal length types on a camera.

Figure 5.
Focal distances and corresponding angles.

5.7 Depth of field

Depth of field is the area of sharp reproduction seen in the photograph: it contains objects located at the focused distance, as well as others more distant or closer to the camera [20].

5.8 Sensor

Its function is to convert the light received into digital information. The minimum element of the sensor is called a pixel, and a digital image consists of a set of pixels. The technology based on complementary metal oxide semiconductor (CMOS) sensors is the most widely applied. These sensors consist of a semiconductor material sensitive in the visible spectrum, between 300 and 1000 nm [10]. Charge-coupled device (CCD) sensors are becoming obsolete due to their cost and image-processing speed.
The comparative readout of information in CMOS sensors has the advantage of obtaining many captures, with faster readings and greater flexibility. Using a high dynamic range, high contrasts and a correct rendering of objects are achieved. In terms of quality, the physical size of the sensor is more significant than the number of cells or the resolution. A large sensor may allow higher-quality photographs to be taken than another sensor with a higher resolution but a smaller surface [23].
As far as color is concerned, it must be remembered that color is just a human visual perception. In order to perceive the color of an object, a light source and something that reflects this light are needed. A color is represented in digital format by applying a representation system. The most commonly used is the RGB system. To represent a color, the exact percentages of primary red, primary green, and primary blue (RGB) must be available. In this way, a color is described by three numbers [24].

5.9 Diaphragm

The function of this element is to increase or decrease the amount of light passing through the objective. The diaphragm aperture is expressed in f-numbers. A stop is the shift from one value to the next; each stop on the f-scale changes the luminosity by a factor of 2 [5] (Figure 6).

5.10 Other aspects to be taken into account

5.10.1 Focus

The first step in taking a picture is focusing. The most commonly used types of
automatic focusing are [25]:

• Phase detection autofocus (PDAF). It works by means of photodiodes distributed across the sensor. The focusing element of the lens is moved to bring the image into focus. It is a slow and inaccurate system due to the use of dedicated photodiodes.


• Dual pixel. This method uses more focus points along the sensor than the
PDAF. This system uses two photodiodes at each pixel to compare minimal
dissimilarities. This is the most effective focusing technology.

• Contrast detection. It is the oldest of the three systems described. Its operation is based on the principle that the contrast of an image is greater, and its edges appear sharper, when it is focused correctly. Its disadvantage is its slowness.

5.10.2 Perspective

A photograph is a perspective image of an object. If straight lines are drawn from all points of an object to a fixed point (called the point of view or center of projection), and these lines cross an intermediate surface (called the projection surface), the image drawn on this surface is known as a perspective [1, 26].
The camera is responsible for executing and materializing perspectives of objects. The projection surface is the flat extension of the image sensor or the capture surface. The focal distance is the orthogonal distance separating the viewpoint from the projection surface. Knowing the distance between the point of view and the plane that contains the points of the object, the focal distance with which the photograph was taken, and the inclination of that plane with respect to the projection plane, the true coordinates of the points can be determined using basic trigonometry (Figure 7).
The orthogonal and geometric perspectives are the most widely used in photogrammetry. A conventional camera (film or digital) produces a geometric perspective. From a photograph in which the points of the object to be measured lie in a plane parallel to the projection plane (the one over which the photographic film is spread), the real position of the points in space is obtained by using Eqs. (18)–(19):

$$\frac{x}{f} = \frac{X}{Z} \quad (18)$$

$$\frac{y}{f} = \frac{Y}{Z} \quad (19)$$

where f is the focal length, (X, Y, Z) are the actual coordinates of the point, and P(x, y) are the coordinates on the projection plane of the image or photograph.

Figure 6.
Effect of (a) different apertures, (b) shutter speeds, and (c) ISO.

Figure 7.
Diagram of the projection of a camera.
More complex expressions arise if the planes that contain the points are not parallel to the projection plane, making it indispensable to know the inclination of the plane with respect to the projection plane. In practice, in order to avoid complications in the calculation of coordinates, photographs are usually taken so that the planes are parallel.
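A minimal sketch of Eqs. (18)–(19) in Python (illustrative; it assumes the object plane is parallel to the projection plane and that its distance Z is known):

```python
def real_coordinates(x_img, y_img, f, Z):
    """Invert Eqs. (18)-(19): recover the object coordinates (X, Y)
    from image coordinates (x, y), focal length f, and known depth Z.
    All lengths must be expressed in consistent units."""
    return x_img * Z / f, y_img * Z / f

# A 2 mm image offset at f = 50 mm and Z = 5 m maps to X = 0.2 m.
print(real_coordinates(0.002, 0.001, f=0.05, Z=5.0))  # (0.2, 0.1)
```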

5.10.3 Exposure

Exposure is the capture of a scene by means of a sensitive material: in analog photography the film, and in digital photography the sensor. Three variables control the entry of light onto the focal plane (sensor) to achieve an adequate exposure [9]:

1. ISO sensitivity: it indicates the amount of light required to take a picture. The more light available, the lower the ISO that can be used.

2. Diaphragm aperture: together with the shutter speed, it controls the light reaching the focal plane, and it regulates the depth of field of the photograph.

3. Shutter speed: the time the shutter remains open allows light to reach the sensor. The higher the shutter speed, the less light reaches the sensor.

When the sensor captures as many tones (dynamic range) and as much information (light) as its capacity allows, the picture is perfectly exposed.
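These three variables trade off against one another. As a small numerical illustration (standard exposure-value arithmetic, not taken from this chapter), the following sketch shows that halving the shutter time while opening the diaphragm one stop leaves the exposure essentially unchanged:

```python
import math

def exposure_value(f_number, shutter_s):
    """Standard exposure value at a fixed ISO: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

# f/8 at 1/125 s and f/5.6 at 1/250 s give (almost) the same EV.
print(exposure_value(8.0, 1 / 125))  # ~12.97
print(exposure_value(5.6, 1 / 250))  # ~12.94 (f/5.6 is a rounded stop)
```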

5.10.4 Dynamic range

It measures the range of light and dark tones that a camera is able to capture in the same picture, that is, the amount of tonal nuance the camera can record, measurable through contrast and sharpness.
Contrast and sharpness are based on how distinctly a pair of white and black lines is captured or reproduced. This is a measure of the degree of detail, being 100% when both lines can be perfectly differentiated as pure white and black. Resolution and contrast are closely related concepts.


Figure 8.
Contrast sensitivity change as a function of the spatial frequency of the target.

If the contrast falls below 5%, it is difficult to observe any detail; detail appears more clearly and distinctly the higher the contrast. Contrast transfer functions describe how frequency and modulation are altered as light passes through the different optical components of the lens. As the viewer moves away, a substantial loss of contrast begins to be noticed [12].
When performing a contrast correction, different filters are applied to the central zones than to the peripheral zones. An example of contrast and resolution is shown in Figure 8.

5.10.5 Aberrations

One of the most important components of a camera is the photographic lens, which produces a series of aberrations that distort the images, making it difficult to determine the correct dimensions of the object [27, 28]. There are different types of aberrations, the most common in photographic lenses being:

1. Point aberrations: the point is rendered at the position predicted by paraxial optics, but as a "blur spot" instead of a point. These include chromatic aberration, spherical aberration, astigmatism, and coma.

2. Shape aberrations: the point is shown as a point but at a position different from the one predicted by the paraxial approximation. This is a systematic error and can be of two types: field curvature and distortion.

• Field curvature: a defect in which the image is formed on a curved surface instead of a flat one. This aberration is difficult to correct, although it can be slightly mitigated.

• Distortion: it only affects the shape of the image. It occurs due to the difference in the reproduction scale of the image off-axis. If an object with straight lines is photographed, such as a square, the center lines will appear straight, and the edge lines will curve inward or outward, producing the so-called barrel or pincushion distortions. This aberration is not corrected by closing the diaphragm. It affects the geometry of the image and needs to be corrected.


5.10.6 Environmental conditions

Stability of environmental conditions must be achieved:

1. Temperature: the ideal temperature for taking a photograph is between approximately 18 and 26 °C, in order to avoid thermal expansion of the lens.

2. Wind: calm conditions, to avoid disturbances when taking the photo.

3. Illumination: sufficient lighting. In most cases, natural light is not enough, and it is necessary to use spotlights or other artificial sources.

Other significant parameters, such as the texture of the element, strongly influence the quality of the 3D reconstruction; optimal results are obtained with a high level of ambient light (exposure 1/60 s, f/2.8, and ISO sensitivity 100). The surface of the element should be opaque, with Lambertian reflection and surface homogeneity. Each point on the surface of the object must be visible from at least two sensor positions [26, 29].

5.10.7 Image quality

Image quality is a prerequisite for working with it properly. There are two main
characteristics that define it:

1. Resolution in amplitude (bit depth): number of bits per point of an image

2. Spatial resolution: the number of pixels per unit area

Image processing is the transformation of an input image into an output image. It is carried out to facilitate the analysis of the image and to obtain greater reliability from it [30]. Among the transformations, those that eliminate noise or variations in pixel intensity stand out. There are two types of operations: individual operations (rectification or binarization) and neighborhood operations (filtering).

5.10.8 Histogram

This is a very useful visual tool for the study of digital images. At a glance, it allows the contrast or the distribution of intensities to be studied, since it follows the discrete function of Eq. (20):

$$f(x_i) = n_i \quad (20)$$

where xi is a gray or color level and ni is the number of pixels in the image with this value. The histogram is normalized to values ranging from 0 to 1. Figure 9 shows its different zones [18, 31].
The most common errors that prevent good image quality can be identified in the histogram: muted tones, black areas, overexposure or burned areas, and backlighting. To verify that a good image has been acquired, the histogram should ideally have the shape of a Gaussian bell, that is, with most of the information in the central part and less at the extremes. Another important point is that the histogram should span and reach both ends, ensuring that there are blacks and whites in the photograph.
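A minimal sketch of Eq. (20) with NumPy (illustrative; it assumes an 8-bit grayscale image and normalizes the counts to relative frequencies in [0, 1]):

```python
import numpy as np

def normalized_histogram(gray_image):
    """Counts n_i of each gray level x_i (Eq. (20)), normalized to [0, 1]."""
    counts = np.bincount(gray_image.ravel(), minlength=256)
    return counts / counts.sum()   # relative frequency of each level

# Synthetic 8-bit image: a dark gradient with a small bright spot.
img = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
img[20:30, 20:30] = 250
hist = normalized_histogram(img)
print(hist.sum(), hist[250])   # 1.0, and the share of bright pixels
```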


Figure 9.
Histogram areas.

5.10.9 Binarization

Binarization produces a representation of the image with only two values, while the dimensions of the image are preserved. The decision threshold must be chosen correctly and used in a step filter with an algorithm similar to Eq. (21):

$$g(x, y) = \begin{cases} 0 & \text{if } f(x, y) > k \\ 1 & \text{if } f(x, y) \le k \end{cases} \quad (21)$$

where 0/1 represent the black/white values and f is the gray value at the coordinates (x, y) [32]. Figure 10 shows a grayscale image versus a binary one.
To obtain an image of sufficient quality, the binarization should assign white pixels to the objects of interest and black pixels to the background. If the object of interest turns out to be darker than the background, an inversion is applied after the binarization. The most important point in the process is the calculation of the threshold. There are different methods for this: histogram, clustering, entropy, similarity, spatial, global, and local.
Setting the threshold value is the difficult part of all these methods. The techniques are supported by statistics applied to the histogram, such as the carry error method, the Otsu method, and Sauvola's pixel deviation method.

Figure 10.
Grayscale (left) and binary (right) image.
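Since the Otsu method is named above, here is a compact sketch of it (illustrative, using NumPy): it selects the threshold k that maximizes the between-class variance of the histogram and then applies the step filter of Eq. (21):

```python
import numpy as np

def otsu_threshold(gray_image):
    """Threshold k that maximizes the between-class variance (Otsu)."""
    hist = np.bincount(gray_image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                  # gray-level probabilities
    w0 = np.cumsum(p)                      # weight of class 0 up to each k
    mu = np.cumsum(p * np.arange(256))     # cumulative mean
    mu_t = mu[-1]                          # global mean
    valid = (w0 > 0) & (w0 < 1)
    sigma_b = np.zeros(256)
    sigma_b[valid] = ((mu_t * w0[valid] - mu[valid]) ** 2
                      / (w0[valid] * (1.0 - w0[valid])))
    return int(np.argmax(sigma_b))

def binarize(gray_image, k):
    """Step filter of Eq. (21): 1 where f(x, y) <= k, 0 elsewhere."""
    return (gray_image <= k).astype(np.uint8)

# Bimodal test image: dark background with a brighter square.
img = np.full((64, 64), 40, dtype=np.uint8)
img[16:48, 16:48] = 200
k = otsu_threshold(img)
print(k, binarize(img, k).sum())   # k lies between the two modes
```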

5.10.10 Spatial filtering

It is based on a convolution operation between two two-dimensional functions: the image, f, and a kernel, h. This operation transforms the value of a pixel p at position (x, y), always taking into account the values of the adjacent pixels. For this operation, a weighted sum of the values of the points neighboring p is required. A mask (h), behaving like a filter, provides the weighting values. The size of the mask varies according to the number of pixels used.
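A minimal sketch of such a neighborhood operation (illustrative; a 3×3 averaging mask h applied with SciPy):

```python
import numpy as np
from scipy.signal import convolve2d

h = np.full((3, 3), 1.0 / 9.0)        # 3x3 averaging mask (the filter h)

rng = np.random.default_rng(1)
f = 128 + 20 * rng.standard_normal((64, 64))   # noisy synthetic image

# Convolve image f with kernel h; 'same' preserves the image size.
g = convolve2d(f, h, mode="same", boundary="symm")
print(round(f.std(), 1), round(g.std(), 1))    # noise is clearly reduced
```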

5.10.11 Geometrical transformations

These operations modify the spatial coordinates of the image. There are several
operations that are easy to understand and apply, such as interpolation, rotation,
rectification, and distortion correction.

5.10.12 Lens distortion

Due to the geometry of the lens, a square object is reproduced with variations in its parallel lines. There are three types of distortion: barrel, pincushion, and mustache (a combination of the first two) (Figure 11) [25, 33]. This error is negligible in a photograph of a natural scene, but in order to take engineering measurements and obtain a virtual object, it is necessary to compensate for the distortion. There is a mathematical model for the treatment of distortion.

Figure 11.
Types of lens distortion.

Barrel distortion is centered and symmetrical. Therefore, to correct the distortion of a given point, a radial transformation is performed, expressed mathematically in Eq. (22):

$$\begin{pmatrix} \hat{x} - x_d \\ \hat{y} - y_d \end{pmatrix} = L\!\left(\sqrt{(x - x_d)^2 + (y - y_d)^2}\right) \begin{pmatrix} x - x_d \\ y - y_d \end{pmatrix} \quad (22)$$

where (x̂, ŷ) represents the result of the distortion correction at the point (x, y); (xd, yd) represents the center of distortion, which is usually a point near the center of the image; and the radial function L(r) determines the magnitude of the distortion correction as a function of the distance from the point to the center of distortion [34].
The radial function L(r) can be constructed following two strategies. The first gives rise to the so-called polynomial models (Eq. (23)):

$$L(r) = 1 + k_1 r^2 + k_2 r^4 + \dots + k_n r^{2n} \quad (23)$$

The second is based on a rational approximation (Eq. (24)):

$$L(r) = \frac{1}{1 + k_1 r^2 + k_2 r^4 + \dots + k_n r^{2n}} \quad (24)$$


Figure 12.
Visual example of photo rectification.

The values k1–kn are called distortion model parameters. These values, together
with the distortion center coordinates (xd, yd), completely represent the distortion
model. The distortion of the lens is represented by the ki coefficients. They are
obtained from a known calibration image.
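A small sketch of this correction in Python (illustrative; a polynomial L(r) as in Eq. (23), with an assumed coefficient k1 and an assumed distortion center):

```python
import numpy as np

def undistort_point(x, y, xd, yd, ks):
    """Radial correction of Eq. (22) with the polynomial L(r) of Eq. (23)."""
    r = np.hypot(x - xd, y - yd)
    L = 1.0 + sum(k * r ** (2 * (i + 1)) for i, k in enumerate(ks))
    return xd + L * (x - xd), yd + L * (y - yd)

# Assumed values: distortion centered in a 1000 x 1000 image, one coefficient.
xd, yd, ks = 500.0, 500.0, [1.2e-7]
print(undistort_point(900.0, 500.0, xd, yd, ks))  # edge point pushed outward
print(undistort_point(500.0, 500.0, xd, yd, ks))  # the center is unchanged
```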

5.10.13 Rectification (perspective distortion)

Image correction is necessary because it is difficult to keep the optical axis vertical at every point of the shot, so the axis ends up tilted with respect to the vertical. Vertical images are free of displacements caused by the inclination of the shot but still contain displacements produced by the depth of the workpiece. These displacements can be suppressed by applying a differential rectification or orthorectification process. In the original digital image or a scan, the technique is applied pixel by pixel. In a scanned image, the initial data are the coordinates of the control points. The procedure is divided into two steps:

1. Determination of the mathematical transformation relating the real coordinates to those of the image

2. Generation of the new image, aligned to the reference system

After this process, all the pixels of the resulting orthophotograph must be given a gray level, which is done by digital resampling [17, 34]. Figure 12 shows an unrectified (left) and a rectified (right) photograph.
Several resamplings are made on the initial image. Three resampling methods are regularly used: bilinear interpolation, nearest neighbor, and bicubic convolution. The transformations applied to the images are [19] the Helmert transformation, the affine transformation, the polynomial transformation, and the two-dimensional projective transformation.
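A sketch of the first step — estimating a two-dimensional projective transformation from control points — using scikit-image (the point values are illustrative; any four or more non-collinear control points would do):

```python
import numpy as np
from skimage.transform import ProjectiveTransform

# Control points: image coordinates and their reference-system coordinates.
src = np.array([[0, 0], [100, 5], [95, 110], [-3, 100]], dtype=float)
dst = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)

tform = ProjectiveTransform()
tform.estimate(src, dst)           # fit the 3x3 homography to the points

# Map any image pixel into the rectified reference system.
print(tform(np.array([[50.0, 50.0]])))
```

The rectified image itself would then be produced by resampling, for example with skimage.transform.warp, using one of the interpolation methods listed above.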

6. Obtaining a 3D model from 2D photographs

To obtain a 3D model of an object from 2D data, photographs must be taken from different views, with adequate quality. From these photographs, the reconstruction process begins.
3D reconstruction is the process by which real objects are reproduced on a computer. Nowadays there are several reconstruction techniques and 3D meshing methods, whose purpose is an algorithm able to connect the set of representative points of the object in the form of surface elements.


The efficiency with which the techniques are used will be linked to the final quality
of the reconstruction.
The stereoscopic scene analysis system presented by Koch uses image matching, object segmentation, interpolation, and triangulation techniques to obtain a dense map of 3D points. The system is divided into three modules: sensor processing, image pair processing, and model-based sequence processing.
Pollefeys presents a 3D reconstruction process based on well-defined stages. The input is an image sequence, and the output of the process is a 3D surface model. The stages are the following: relating the images, structure and motion recovery, dense matching, and model construction.
Another proposal is expressed by Remondino. He presents a 3D reconstruction
system following these steps: image sequence acquisition and analysis, image
calibration and orientation, matching process and the generation of points, and 3D
modeling [18].

6.1 From a photograph

This approach is used for parts with rotational symmetry (surfaces of revolution). With only one photograph, it is possible to obtain the axis and dimensions. In 1978, Barrow and Tenenbaum demonstrated that the orientation of the surface along the silhouette can be calculated directly from the image data, resulting in the first study of silhouettes in individual views. Koenderink showed that the sign of the silhouette's curvature is equivalent to that of the Gaussian curvature. Thus, concavities, convexities, and inflections of the silhouette indicate hyperbolic, convex, and parabolic surface points, respectively. Finally, Cipolla and Blake showed that the curvature of the silhouette has the same sign as the normal curvature along the contour generator in perspective projection. A similar result was derived for the orthographic projection by Brady [35].
First, the silhouette ρ of a surface of revolution (SOR) is extracted from the image with a Canny edge detector, and the harmonic homology W that maps each side of ρ to its symmetrical counterpart is estimated by minimizing the geometric distances between the original silhouette ρ and its transformed version ρ′ = Wρ. The image is rectified, and the axis of the figure is rotated and placed in orthogonal projection (Figure 13).
The apparent contour is first manually segmented from the rectified silhouette. This can usually be done easily by removing the upper and lower elliptical parts of the silhouette. Points are then sampled from the apparent contour, and the tangent vector (i.e., ẋ(s) and ẏ(s)) at each sample point is calculated by fitting a polynomial to the neighboring points.
For Ψ ≠ 0, Rx(Ψ) first transforms the viewing vector p(s) and the associated surface normal n(s) at each sample point; the transformed viewing vector is normalized so that its third coefficient becomes one, and Eqs. (25)–(26) can be used to recover the depth of the sample point:

$$\mathbf{n}(s) = \frac{1}{\alpha_n(s)} \begin{pmatrix} \dot{y}(s) \\ -\dot{x}(s) \\ x(s)\dot{y}(s) - \dot{x}(s)y(s) \end{pmatrix} \quad (25)$$

where

$$\alpha_n(s) = \left\| \mathbf{p}(s) \times \frac{d\mathbf{p}(s)}{ds} \right\| \quad (26)$$
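A sketch of the tangent-estimation step (illustrative; it fits a local quadratic in the arc parameter s to the neighboring samples with NumPy and differentiates the fit):

```python
import numpy as np

def contour_tangents(xs, ys, window=5, degree=2):
    """Tangent (x'(s), y'(s)) at each contour sample from a local
    polynomial fit to the neighboring points."""
    n = len(xs)
    s = np.arange(n, dtype=float)          # simple arc parameterization
    half = window // 2
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        px = np.polyfit(s[lo:hi], xs[lo:hi], degree)
        py = np.polyfit(s[lo:hi], ys[lo:hi], degree)
        out.append((np.polyval(np.polyder(px), s[i]),
                    np.polyval(np.polyder(py), s[i])))
    return np.asarray(out)

# Samples on a quarter circle: tangents come out perpendicular to the radii.
t = np.linspace(0, np.pi / 2, 20)
print(np.round(contour_tangents(np.cos(t), np.sin(t))[10], 4))
```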


Figure 13.
Harmonic homology of the figure and its transformation to orthogonal projection [35].

6.2 From two photographs

This section is based on an investigation using a practical heuristic method for the reconstruction of structured scenes from two uncalibrated images. The method starts from an initial estimation of the main homographies from the initial 2D point correspondences, which may contain some outliers; the homographies are then recursively refined by incorporating the supporting point and line correspondences on the main spatial surfaces. The epipolar geometry is then recovered directly from the refined homographies, the cameras are calibrated from three orthogonal vanishing points, and the infinite homography is recovered.
First, a simple homography-guided method is proposed to fit and match the line segments between two views, using a Canny edge detector and regression algorithms. Second, the cameras are automatically calibrated with the four intrinsic parameters that vary between the two views. A RANSAC mechanism is adopted to detect the main flat surfaces of the object from the 2D images. The advantages of the method are that it can build more realistic models with minimal human interaction and that it allows more visible surfaces to be reconstructed on the detected planes than traditional methods, which can only reconstruct overlapping parts (Figure 14).
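As an illustration of the RANSAC step (a sketch using OpenCV; the correspondences are synthetic stand-ins for matched points on one planar surface, with two gross outliers added):

```python
import numpy as np
import cv2

# Points on a plane in view 1, mapped to view 2 by a known homography.
H_true = np.array([[1.0, 0.02, 5.0],
                   [0.01, 1.0, -3.0],
                   [1e-4, 0.0, 1.0]])
pts1 = np.random.default_rng(0).uniform(0, 500, (30, 2))
proj = np.hstack([pts1, np.ones((30, 1))]) @ H_true.T
pts2 = proj[:, :2] / proj[:, 2:3]
pts2[:2] += 40.0                     # corrupt two correspondences

H, inliers = cv2.findHomography(pts1.astype(np.float32),
                                pts2.astype(np.float32),
                                cv2.RANSAC, 3.0)
print(np.round(H / H[2, 2], 3))      # close to H_true
print(int(inliers.sum()), "inliers of", len(pts1))
```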

6.3 Through more than two photographs

6.3.1 Reconstruction of geological objects

This is one of the fields where photogrammetry is most applied nowadays. Here, the reconstruction is carried out applying Delaunay triangulation and tetrahedra. Many data models based on tetrahedron meshes have been developed to represent complex objects in 3D GIS.

Figure 14.
The matching results of the line segments in four main planes [36].


The tetrahedron grid can only represent the geometrical structure of geological objects. The natural characteristics of geological objects are reflected in their different attributes, such as different rock formations, different contents of mineral bodies, etc. The attribute value of an internal point is defined so that it can be linearly interpolated from the attribute values at the four vertices of a tetrahedron. However, the attributes can change abruptly between different formations and different mineral bodies. To cope with such sudden changes, an interpolation is needed that can only be applied to the six edges of a tetrahedron. The interpolated points are only used as temporary data for the subsequent processing [37].
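A sketch of this idea with SciPy (illustrative): a Delaunay tetrahedralization of a 3D point set, followed by linear (barycentric) interpolation of a vertex attribute at an internal point, as described above:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

# Point cloud with a scalar attribute per vertex (e.g., mineral content).
rng = np.random.default_rng(2)
points = rng.uniform(0, 10, (50, 3))
attribute = 0.5 * points[:, 2]          # toy attribute: grows with depth

tet = Delaunay(points)                  # tetrahedral mesh of the points
print("tetrahedra:", len(tet.simplices))

# Linear interpolation inside each tetrahedron from its four vertices.
interp = LinearNDInterpolator(tet, attribute)
print("attribute at (5, 5, 5):", interp([[5.0, 5.0, 5.0]]))
```

As the text notes, such linear interpolation is only valid within a homogeneous formation; abrupt attribute changes between formations have to be handled separately.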

6.3.2 Reconstruction of objects with high surface and texture resolution

This section presents a robust and precise system for the 3D reconstruction of real objects with high-resolution shape and texture. The reconstruction method is passive, and the only information required is 2D images obtained with a calibrated camera from different viewing angles as the object rotates on a turntable. The triangle surface model is obtained through a scheme that combines octree construction and the marching cubes algorithm. A texture-mapping strategy based on surface particles is developed to adequately address photography-related problems such as inhomogeneous lighting, highlights, and occlusion [38]. To conclude, the results of the reconstruction are included to demonstrate the quality obtained (Figure 15).
The scheme combining octree construction and isolevel extraction through marching cubes is presented for the shape-from-silhouette problem. The use of the octree representation allows very high resolutions to be reached, while the fast marching cubes method is adapted, through a properly defined isolevel function, to work with binary silhouettes, resulting in a mesh of triangles with vertices precisely located on the visual hull of the object.
Calibration is performed on the camera and the rotary table. One of the problems encountered is the discontinuity of the texture due to the nonhomogeneous lighting of different parts of the element caused by shadows.
Figure 15.
Flowchart of the reconstruction of objects.

Next, the octree is represented. An octree is a hierarchical tree structure that can be used to represent volumetric data in terms of cubes of different sizes. Each octree node corresponds to a cube in the octree space that is entirely within the object. This opens up different possibilities: voxels, particles, triangles, and more complicated parametric primitives, such as splines or NURBS. Voxels are used to represent volumes but can also be used to represent surfaces. A related primitive is a particle, defined by its color, orientation, and position. In the marching cubes triangulation of the octree, the white and black points denote the corners of the cube that are inside and outside the object, respectively, while the gray points are the triangle vertices on the surface (Figure 16).

Figure 16.
From cube to triangulation, adapted from [38].
The application of the isolevel function, calculated by means of a dichotomous subdivision procedure, allows the construction of a faithful model of the object. The triangle vertices that make up the object's mesh are placed precisely on the surface of the digitized model even at low resolutions. This creates an efficient compromise between resolution and geometric accuracy. The octree construction followed by the marching cubes algorithm generates a triangular mesh consisting of an excessive number of triangles, which must be simplified.
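A sketch of the isolevel-extraction step with scikit-image's marching cubes implementation (illustrative; a synthetic spherical occupancy volume stands in for the octree-based shape):

```python
import numpy as np
from skimage.measure import marching_cubes

# Occupancy volume: 1 inside a sphere of radius 20 voxels, 0 outside.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
volume = (((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2) < 400).astype(float)

# Extract the triangle mesh at the 0.5 isolevel.
verts, faces, normals, values = marching_cubes(volume, level=0.5)
print(len(verts), "vertices,", len(faces), "triangles")
```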

6.3.3 Object reconstruction

Object reconstruction is mainly applied in the archeological field. The process to obtain the 3D model is governed by Figure 17. First of all, corresponding or common features must be found among the images of the object. The process occurs in two phases:

1. The reconstruction algorithm generates a reconstruction in which the dimensions are not correctly defined. A self-calibration algorithm then performs a reconstruction equivalent to the original one, formed by a set of 3D points.

2. All the pixels of an image are matched with those of the neighboring images so that the system can reconstruct these points.

The system selects two images to set up an initial projective reconstruction frame and then reconstructs the matching feature points through triangulation.
Figure 17.
Steps to obtain the 3D model, adapted from [39].

Then a dense surface estimation is performed. To obtain a more detailed model of the observed surface, a dense matching technique is used. The 3D surface is approximated with a triangular mesh to reduce geometric complexity and to adapt the model to the requirements of the computer graphics display system. A corresponding 3D mesh is then constructed by placing the triangle vertices in 3D space according to the values found in the corresponding depth map. To reconstruct more complex shapes, the system must combine multiple depth maps. Finally, the model is provided with texture.
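A sketch of the depth-map-to-mesh step (illustrative; it back-projects each pixel with the pinhole model and connects neighboring pixels into two triangles per grid cell):

```python
import numpy as np

def depth_map_to_mesh(depth, f, cx, cy):
    """Vertices and triangle indices from a dense depth map (pinhole model)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    X = (u - cx) * depth / f                 # back-project each pixel
    Y = (v - cy) * depth / f
    verts = np.stack([X, Y, depth], axis=-1).reshape(-1, 3)
    idx = np.arange(h * w).reshape(h, w)     # vertex index of each pixel
    a, b, c, d = idx[:-1, :-1], idx[:-1, 1:], idx[1:, :-1], idx[1:, 1:]
    faces = np.concatenate([np.stack([a, b, c], -1).reshape(-1, 3),
                            np.stack([b, d, c], -1).reshape(-1, 3)])
    return verts, faces

depth = np.full((4, 5), 2.0)                 # flat toy depth map, 2 m away
verts, faces = depth_map_to_mesh(depth, f=500.0, cx=2.0, cy=1.5)
print(verts.shape, faces.shape)              # (20, 3) and (24, 3)
```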

6.3.4 3D reconstruction of the human body

It is used for medical purposes in many cases, as a base for implants, splints, etc. The process consists of the following parts: acquisition and analysis of the image sequence; calibration and orientation of the images; matching process on the surface of the human body; and generation and modeling of the point cloud. Once the necessary images have been obtained from different points of view, the calibration and orientation of the images are carried out.
The choice of the camera model is often related to the final application and the required accuracy. The correct calibration of the sensor used is one of the main objectives. Another important point is image matching [40].
To evaluate the quality of the matching results, different indicators are used: an ex post standard deviation of the least squares adjustment, the standard deviation of the change in the x-y directions, and the shift from the initial position in the x-y directions. The performance of the process, in the case of uncalibrated images, can only be improved with a local contrast enhancement of the images.
Finally, the 3D reconstruction and modeling of the human body shape is performed. The 3D coordinates of each matched triplet are calculated through a forward intersection. Using collinearity and the results of the orientation process, the 3D paired points are determined with a least-squares solution. For each triplet of images, a point cloud is calculated, and then all the points are joined together to create a single point cloud. A spatial filter is applied to reduce noise and obtain a more uniform point cloud density. Figure 18 shows the results before and after filtering (approximately 20,000 points, left); a view of the recovered point cloud with pixel intensity (center); and a 3D human model (right).
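A sketch of the forward intersection just described (illustrative; a linear least-squares triangulation of one point from its observations in several views, given 3×4 projection matrices):

```python
import numpy as np

def forward_intersection(proj_mats, points2d):
    """Least-squares 3D point from its 2D observations in several views."""
    rows = []
    for P, (x, y) in zip(proj_mats, points2d):
        rows.append(x * P[2] - P[0])   # from x = (P1 . X) / (P3 . X)
        rows.append(y * P[2] - P[1])   # from y = (P2 . X) / (P3 . X)
    _, _, Vt = np.linalg.svd(np.array(rows))
    X = Vt[-1]                         # homogeneous least-squares solution
    return X[:3] / X[3]

# Two toy cameras: identity pose, and a 1 m baseline along the X axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])
Xt = np.array([0.5, 0.2, 4.0])
obs = [(Xt[0] / Xt[2], Xt[1] / Xt[2]),
       ((Xt[0] - 1.0) / Xt[2], Xt[1] / Xt[2])]
print(forward_intersection([P1, P2], obs))   # ~ [0.5, 0.2, 4.0]
```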
The system is composed of two main modules. The first one is in charge of image processing, to determine the depth map in a pair of views, where each pair of successive views follows a sequence of phases: detection of points of interest, correspondence of points, and reconstruction of these points. In this last phase, the parameters that describe the movement (rotation matrix R and translation vector T) between the two views are determined. This sequence of steps is repeated for all successive pairs of views in the set.

Figure 18.
3D reconstruction of a human body, adapted from [40].
The second module is responsible for creating the 3D model, for which it must determine the complete map of generated 3D points. In each iteration of the previous module, the 3D mesh is generated by applying Delaunay's triangulation method. The results of the process are rendered in a virtual environment to obtain a more realistic visualization of the object [16].
The number of detected feature points is related to the number of reconstructed 3D points and the quality of that reconstruction (a higher number of details). Therefore, the higher the number of points on the map, the more detailed the areas obtained. In some cases this does not apply, due to the geometry of the object; for example, in a cube, more points can result in a distorted object.

7. Conclusion

The technological development of 3D photogrammetry makes it a real option among the various applications of 3D scanners. Among its benefits are faster raw data acquisition, simplicity, portability, and more economical equipment. Different studies have verified the accuracy and repeatability of 3D photogrammetry. These investigations have compared digital models of objects obtained from 2D digital photographs with those generated by a 3D surface scanner. In general, the meshes obtained with photogrammetric techniques and with scanners show a low degree of deviation from each other, and the surface fit of photogrammetric models is usually slightly better. For these reasons, photogrammetry is a technology with countless engineering applications.
In this chapter, the basic fundamentals, the characteristics of the acquisition, and the aspects to be taken into account to obtain a good virtual model from photogrammetry have been explained.

Acknowledgements

The authors would like to thank the call for Innovation and Teaching
Improvement Projects of the University of Cadiz and AIRBUS-UCA Innovation
Unit (UIC) for the Development of Advanced Manufacturing Technologies in the
Aeronautical Industry.


Conflict of interest

The authors declare no conflict of interest.

Author details

Ana Pilar Valerga Puerta1*, Rocio Aletheia Jimenez-Rodriguez2, Sergio Fernandez-Vidal2 and Severo Raul Fernandez-Vidal1

1 Department of Mechanical Engineering and Industrial Design, School of Engineering, University of Cadiz, Cadiz, Spain

2 Reanimalia. Rehabilitacion and Ortopedia Veterinaria, Cadiz, Spain

*Address all correspondence to: anapilar.valerga@uca.es

© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms
of the Creative Commons Attribution License (http://creativecommons.org/licenses/
by/3.0), which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited.


References

[1] Rolin R, Antaluca E, Batoz JL, Lamarque F, Lejeune M. From point cloud data to structural analysis through a geometrical hBIM-oriented model. Journal of Cultural Heritage. 2019;12:1-26. DOI: 10.1145/3242901

[2] Valerga AP, Batista M, Bienvenido R, Fernández-Vidal SR, Wendt C, Marcos M. Reverse engineering based methodology for modelling cutting tools. Procedia Engineering. 2015;132:1144-1151. DOI: 10.1016/j.proeng.2015.12.607

[3] Rabbani T, Dijkman S, van den Heuvel F, Vosselman G. An integrated approach for modelling and global registration of point clouds. ISPRS Journal of Photogrammetry and Remote Sensing. 2007;61:355-370. DOI: 10.1016/j.isprsjprs.2006.09.006

[4] Schenk T. Introduction to Photogrammetry. Department of Civil and Environmental Engineering and Geodetic Science. Athens, USA: The Ohio State University; 2005. pp. 79-95. Available from: http://gscphoto.ceegs.ohio-state.edu/courses/GeodSci410/docs/GS410_02.pdf

[5] Derenyi EE. Photogrammetry: The Concepts. Canada: Department of Geodesy and Geomatics Engineering, University of New Brunswick; 1996. DOI: 10.1017/9781108665537.002

[6] Ackermann F. Digital image correlation: Performance and potential application in photogrammetry. The Photogrammetric Record. 1984;11:429-439. DOI: 10.1111/j.1477-9730.1984.tb00505.x

[7] Sansoni G, Trebeschi M, Docchio F. State-of-the-art and applications of 3D imaging sensors in industry, cultural heritage, medicine, and criminal investigation. Sensors. 2009;9:568-601. DOI: 10.3390/s90100568

[8] Katz D, Friess M. Technical note: 3D from standard digital photography of human crania - A preliminary assessment. American Journal of Physical Anthropology. 2014;154:152-158. DOI: 10.1002/ajpa.22468

[9] Martorelli M, Lepore A, Lanzotti A. Quality analysis of 3D reconstruction in underwater photogrammetry by bootstrapping design of experiments. International Journal of Mechanical Sciences. 2016;10:39-45

[10] Guerra MG, Lavecchia F, Maggipinto G, Galantucci LM, Longo GA. Measuring techniques suitable for verification and repairing of industrial components: A comparison among optical systems. CIRP Journal of Manufacturing Science and Technology. 2019;27:114-123. DOI: 10.1016/j.cirpj.2019.09.003

[11] Givi M, Cournoyer L, Reain G, Eves BJ. Performance evaluation of a portable 3D imaging system. Precision Engineering. 2019;59:156-165. DOI: 10.1016/j.precisioneng.2019.06.002

[12] Aguilar R, Noel MF, Ramos LF. Integration of reverse engineering and non-linear numerical analysis for the seismic assessment of historical adobe buildings. Automation in Construction. 2019;98:1-15. DOI: 10.1016/j.autcon.2018.11.010

[13] Murphy M, McGovern E, Pavia S. Historic building information modelling - Adding intelligence to laser and image based surveys of European classical architecture. ISPRS Journal of Photogrammetry and Remote Sensing. 2013;76:89-102. DOI: 10.1016/j.isprsjprs.2012.11.006

[14] Dai F, Lu M. Assessing the accuracy of applying photogrammetry to take geometric measurements on building products. Journal of Construction Engineering and Management. 2010;136:242-250. DOI: 10.1061/(ASCE)CO.1943-7862.0000114

[15] Wrobel BP. The evolution of digital photogrammetry from analytical photogrammetry. The Photogrammetric Record. 1991;13:765-776. DOI: 10.1111/j.1477-9730.1991.tb00738.x

[16] Styliadis AD, Sechidis LA. Photography-based façade recovery & 3-d modeling: A CAD application in cultural heritage. Journal of Cultural Heritage. 2011;12:243-252. DOI: 10.1016/j.culher.2010.12.008

[17] Murtiyoso A, Grussenmeyer P, Börlin N. Reprocessing close range terrestrial and UAV photogrammetric projects with the DBAT toolbox for independent verification and quality control. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives. 2017;42:171-177. DOI: 10.5194/isprs-archives-XLII-2-W8-171-2017

[18] Remondino F, Fraser C. Digital camera calibration methods: Considerations and comparisons. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2006;36:266-272

[19] Bister D, Mordarai F, Aveling RM. Comparison of 10 digital SLR cameras for orthodontic photography. Journal of Orthodontics. 2006;33:223-230. DOI: 10.1179/146531205225021687

[20] Triggs B, McLauchlan PF, Hartley RI, Fitzgibbon AW. Bundle adjustment - A modern synthesis. In: Triggs B, editor. Vision Algorithms '99. LNCS 1883; 2000. pp. 298-372

[21] Veal CJ, Holmes G, Nunez M, Hoegh-Guldberg O, Osborn J. A comparative study of methods for surface area and three dimensional shape measurement of coral skeletons. Limnology and Oceanography: Methods. 2010;8:241-253. DOI: 10.4319/lom.2010.8.241

[22] Kaufman J, Clement M, Rennie AE. Reverse engineering using close range photogrammetry for additive manufactured reproduction of Egyptian artifacts and other Objets d'art (ESDA1014-20304). Journal of Computing and Information Science in Engineering. 2015;15:1-7. DOI: 10.1115/1.4028960

[23] Bernard A. Reverse engineering for rapid product development: A state of the art. Three-Dimensional Imaging, Optical Metrology, and Inspection. 1999;3835:50-63. DOI: 10.1117/12.370268

[24] Adeline KRM, Chen M, Briottet X, Pang SK, Paparoditis N. Shadow detection in very high spatial resolution aerial images: A comparative study. ISPRS Journal of Photogrammetry and Remote Sensing. 2013;80:21-38. DOI: 10.1016/j.isprsjprs.2013.02.003

[25] Yi XF, Long SC. Precision displacement measurement of single lens reflex digital camera. Applied Mechanics and Materials. 2011;103:82-86. DOI: 10.4028/www.scientific.net/AMM.103.82

[26] Webster C, Westoby M, Rutter N, Jonas T. Three-dimensional thermal characterization of forest canopies using UAV photogrammetry. Remote Sensing of Environment. 2018;209:835-847. DOI: 10.1016/j.rse.2017.09.033

[27] Galantucci LM, Lavecchia F, Percoco G, Raspatelli S. New method to calibrate and validate a high-resolution 3D scanner, based on photogrammetry. Precision Engineering. 2014;38:279-291. DOI: 10.1016/j.precisioneng.2013.10.002

[28] Menna F, Nocerino E, Remondino F. Optical aberrations in underwater photogrammetry with flat and hemispherical dome ports. In: Videometrics, Range Imaging, and Applications XIV. SPIE Optical Metrology. Munich, Germany: SPIE; 2017;1033205:1-14. DOI: 10.1117/12.2270765

[29] Nevalainen O, Honkavaara E, Tuominen S, Viljanen N, Hakala T, Yu X, et al. Individual tree detection and classification with UAV-based photogrammetric point clouds and hyperspectral imaging. Remote Sensing. 2017;9:1-34. DOI: 10.3390/rs9030185

[30] Yoo Y, Lee S, Choe W, Kim C. CMOS image sensor noise reduction method for image signal processor in digital cameras and camera phones. In: Proceedings of SPIE-IS&T Electronic Imaging, Digital Photography III. 2007. pp. 1-10. DOI: 10.1117/12.702758

[31] Fliegel K, Havlin J. Imaging photometer with a non-professional digital camera. In: SPIE 7443, Applications of Digital Image Processing XXXII. 2009. pp. 1-8. DOI: 10.1117/12.825977

[32] Gomez-Gil P. Shape-based hand recognition approach using the morphological pattern spectrum. Journal of Electronic Imaging. 2009;18:13012. DOI: 10.1117/1.3099712

[33] Ng R, Hanrahan PM. Digital correction of lens aberrations in light field photography. In: International Optical Design Conference. 2006. p. 6342. DOI: 10.1117/12.692290

[34] Jianping Z, John G. Image pipeline tuning for digital cameras. In: IEEE International Symposium on Consumer Electronics. Irving, TX; 2007. pp. 167-170

[35] Wong KYK, Mendonça PRS, Cipolla R. Reconstruction of surfaces of revolution from single uncalibrated views. Image and Vision Computing. 2004;22:829-836. DOI: 10.1016/j.imavis.2004.02.003

[36] Wang G, Tsui HT, Hu Z. Reconstruction of structured scenes from two uncalibrated images. Pattern Recognition Letters. 2005;26:207-220. DOI: 10.1016/j.patrec.2004.08.024

[37] Xue Y, Sun M, Ma A. On the reconstruction of three-dimensional complex geological objects using Delaunay triangulation. Future Generation Computer Systems. 2004;20:1227-1234. DOI: 10.1016/j.future.2003.11.012

[38] Yemez Y, Schmitt F. 3D reconstruction of real objects with high resolution shape and texture. Image and Vision Computing. 2004;22:1137-1153. DOI: 10.1016/j.imavis.2004.06.001

[39] Pollefeys M, Van Gool L, Vergauwen M, Cornelis K, Verbiest F, Tops J. 3D recording for archaeological fieldwork. IEEE Computer Graphics and Applications. 2003;May/June:20-27. DOI: 10.1109/MCG.2003.1198259

[40] Remondino F. 3-D reconstruction of static human body shape from image sequence. Computer Vision and Image Understanding. 2004;93:65-85. DOI: 10.1016/j.cviu.2003.08.006