Ziemer2020 Chapter WaveFieldSynthesis
Methods of sound field synthesis aim at physically recreating a natural or any desired
sound field in an extended listening area. As discussed in Sect. 5.1.2, sound field quan-
tities to synthesize are mainly sound pressure and particle velocity or sound pressure
gradients. If perfect control over these sound field quantities were achieved, virtual
sources could be placed at any angle and distance and radiate a chosen source sound
with any desired radiation pattern. This way, the shortcomings of conventional audio
systems could be overcome: Instead of a sweet spot, an extended listening area would
exist. Without the need for panning, lateral and elevated sources could be created and
depth in terms of source distance could be implemented. If the sound radiation characteristics of an instrument were captured and synthesized, virtual sources could sound as broad and vivid as their physical counterparts. This means a natural, truly immersive, three-dimensional sound experience, including the perception of source width and motion, listener envelopment, reverberance and the like. Unfortunately, already the theoretic core of most sound field synthesis approaches imposes several restrictions. Current sound field synthesis implementations offer limited acoustic control under certain circumstances, with several shortcomings. Still, due to elaborate adaptations of a sophisticated physical core, sound field synthesis applications are able to create a realism which is unreachable with conventional audio systems.
In this chapter a short historic overview of sound field synthesis is given. The most
prominent sound field synthesis approach includes the spatio-temporal synthesis of a
wavefront that propagates through space as desired. In this book this specific approach
is referred to as wave field synthesis (WFS) or wave front synthesis and is treated
in detail in this chapter. The term sound field synthesis is used as an umbrella term covering several methods which aim at controlling a sound field. These methods include wave front synthesis, ambisonics and the like. The theoretic core of wave field synthesis is derived from several mathematical theorems and physical considerations. The derivation is explained step by step in the following section.1 Several constraints make it applicable in practice, as discussed in Sect. 8.3. These constraints lead to synthesis errors which can be diminished by adaptations of the mathematical core. Many sound field synthesis approaches model sound sources as monopole sources or plane waves. Therefore, special treatment is given to the synthesis of the radiation characteristics of musical instruments in Sect. 8.4. Finally, some existing sound field synthesis installations for
research and for entertainment are presented.
1 Mainly based on Pierce (2007), Williams (1999), Morse and Ingard (1986), Rabenstein et al.
(2006), Ziemer (2018).
2 See Steinberg and Snow (1934a, b).
3 See Ahrens (2012), pp. 8f and Friesecke (2007), p. 147.
4 See e.g. Gerzon (1973, 1975, 1981).
8.1 Sound Field Synthesis History
setup an additional channel Z is encoded, containing the pressure gradient along the
third dimension. Encoding these channels is referred to as tetraphony or B-Format.
The sound pressure is a scalar and can be recorded by an omnidirectional pressure
receiver. The pressure gradients can be recorded by figure-of-eight microphones or approximated by the difference of two opposing omnidirectional microphones which are out of phase. The recreation of the sound field described by these channels by
means of a loudspeaker array is referred to as periphony or ambisonics (Fig. 8.3).
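The gradient approximation by paired omnidirectional capsules can be sketched numerically. The capsule spacing, frequency, and geometry below are illustrative assumptions, not values from the text:

```python
import numpy as np

# Sketch: approximating a figure-of-eight (pressure gradient) pickup by
# subtracting two closely spaced, out-of-phase omnidirectional capsules.
# Capsule spacing and frequency are hypothetical.
d = 0.02                                   # capsule spacing in metres
f, c = 500.0, 343.0                        # frequency and speed of sound
k = 2 * np.pi * f / c                      # wave number
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)

# For a plane wave arriving from angle theta, two capsules on the x axis
# pick up phases +/- k(d/2)cos(theta); their difference is the dipole response.
response = np.exp(1j * k * (d / 2) * np.cos(theta)) \
         - np.exp(-1j * k * (d / 2) * np.cos(theta))
dipole = np.abs(response) / np.abs(response).max()
# For k*d << 1 this magnitude pattern approaches |cos(theta)|,
# the figure-of-eight shape of the X and Y channels.
```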
Ψ = KA (8.1)
the loudspeaker signals A recreate the encoded sound field at the receiver position.
The components in Ψ describe the desired sound field: the sound pressure at a central listening position and the pressure gradients along the spatial dimensions, whose origin lies at this central position. The B-Format and
higher order sound field encoding by means of circular or spherical harmonics are
quasi-standardized. In contrast to conventional audio systems, as discussed through-
out Chap. 7, the encoded channels are not routed directly to loudspeakers. Only the
sound field information is stored. By solving Eq. 8.1, loudspeaker signals are obtained that approximate the desired physical sound field or the desired sound impression. The solver
is the ambisonics decoder. If the desired sound field contains as many values as
loudspeakers present, the propagation matrix K in the linear equation system, Eq.
8.1, is a square matrix. In this case it can be solved directly. This can be achieved
by means of an inverse matrix, or by numerical methods like Gaussian elimination.
With more loudspeakers than target values the problem is under-determined: we have more unknown loudspeaker signals than known target values. In this case a pseudo
inverse matrix can be used to approximate a solution. Unfortunately, this strategy
comes along with several issues. First of all this approximate solution does not con-
sider auditory perception. In the Moore-Penrose inverse the Euclidean norm, i.e., the
squared error, is minimized. This means that small errors in amplitude, phase, and
time occur. These may be audible when they lie above the just noticeable difference
of level or phase, or above just noticeable interaural level, phase or time difference.5
The perceptual result of audible errors is false source localization, especially
for listeners that are not located at the central listening position. Other perceptual
outcomes are audible coloration effects, spatial diffuseness, or an undesirably high
loudness. A psychoacoustical solution to balance these errors would be desirable.
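Eq. 8.1 and its two solution cases can be sketched with a minimal 2D first-order example. The loudspeaker angles and the encoded plane wave are hypothetical choices for illustration:

```python
import numpy as np

# Encode a plane wave from 45 degrees into W (pressure) and X, Y (gradient)
# channels of a hypothetical 2D first-order setup.
theta_src = np.radians(45)
psi = np.array([1.0, np.cos(theta_src), np.sin(theta_src)])  # desired field

def propagation_matrix(speaker_angles):
    """K of Eq. 8.1: one column per loudspeaker, one row per channel."""
    return np.vstack([np.ones_like(speaker_angles),
                      np.cos(speaker_angles),
                      np.sin(speaker_angles)])

# Square case: as many loudspeakers as target values -> direct solution.
angles3 = np.radians([90.0, 210.0, 330.0])
a3 = np.linalg.solve(propagation_matrix(angles3), psi)

# More loudspeakers than target values -> Moore-Penrose pseudo-inverse,
# which picks the minimum-Euclidean-norm signals among all exact solutions.
angles8 = np.radians(np.arange(0.0, 360.0, 45.0))
a8 = np.linalg.pinv(propagation_matrix(angles8)) @ psi
```

Both signal vectors reproduce Ψ at the central position exactly; they differ in how the energy is spread over the loudspeakers, which is where the perceptual issues discussed above enter.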
Several ambisonics decoders have been proposed.6 Psychoacoustic evaluation of
existing decoders has been carried out.7 However, psychoacoustic considerations
should ideally be carried out already in the development process of the decoder.
The radiation method suggested in this book is a physically motivated solution, and
8 The psychoacoustic sound field synthesis approach including the radiation method and the prece-
dence fade are introduced in Chap. 9.
9 More information on HOA and NFC-HOA can be found e.g. in Ahrens and Spors (2008a), Williams
(1999), pp. 267ff, Spors and Ahrens (2008), Daniel et al. (2003), Menzies and Al-Akaidi (2007),
Daniel (2003) and Elen (2001).
10 The complex point source model is described in Sect. 5.3.1, applied in Ziemer (2014) and discussed
throughout the 1990s.12 From 2001 to 2003 they were supported by a number of
universities, research institutions and industry partners in the CARROUSO research
project funded by the European Community.13 Achievements from this project were
market-ready wave front synthesis systems.
Since then, mainly adaptations, extensions, and refinements of methods or error compensation14 and additional features like moving sources and complicated radiation
patterns15 have been implemented. A lot of research is still carried out in the field of
wave field synthesis. For example interfaces and techniques for more accessible cre-
ation of content and control of wave field synthesis systems are being developed.16
Another topic is to reduce the number of necessary loudspeakers either by a priori-
tized sweet area within the extended listening area or by considering psychoacoustic
effects.17 Although sound field synthesis is originally a physically reasoned approach,
psychoacoustic considerations are not superfluous. It is the auditory perception that
makes sound field synthesis systems sound as desired even though physical synthesis errors are present and easily measurable. Elaborate perceptual evaluation of synthesized sound fields receives more and more attention in the literature.18
The general idea of sound field synthesis can be traced back to Huygens’ principle.
This principle can be described by means of the Kirchhoff–Helmholtz-Integral which
is explained in this section. Although often considered the mathematical core of wave field synthesis, this integral is hardly ever implemented directly.
Instead, the Kirchhoff–Helmholtz-Integral is reduced to the Rayleigh-Integral which
can be applied rather directly by means of an array with conventional loudspeakers.
The adaptation process from the mathematical idea to the actual implementation is
explained in the subsequent section for wave front synthesis applications.
12 See e.g. papers like Berkhout et al. (1993), de Vries et al. (1994), de Vries (1996), Berkhout
et al. (1997) and Boone et al. (1999) and dissertations like Vogel (1993), Start (1997) and Verheijen
(1997).
13 Publications are e.g. Corteel and Nicol (2003), Daniel et al. (2003), Spors et al. (2003), Vaananen
(2003) and many more. More information on CARROUSO can be found in Brix et al. (2001).
14 See e.g. Gauthier and Berry (2007), Menzies (2013), Spors (2007), Kim et al. (2009), Bleda et al.
(2005).
15 See e.g. Ahrens and Spors (2008b), Albrecht et al. (2005) and Corteel (2007).
16 See Melchior (2010), Fohl (2013) and Grani et al. (2016).
17 See e.g. Hahn et al. (2016) and Spors et al. (2011), Ziemer (2018) for more information on local
wave field synthesis, and Chap. 9 and Ziemer and Bader (2015b, 2015c), Ziemer (2016) for details
on psychoacoustic sound field synthesis.
18 See e.g. Start (1997), Wierstorf (2014), Ahrens (2016), Wierstorf et al. (2013) and Spors et al.
(2013).
8.2 Theoretical Fundamentals of Sound Field Synthesis
Every arbitrary radiation from a sound source can be described as an integral of point sources on its surface. In addition, each point on a wave front can be considered as the origin of an elementary wave. The superposition of the elementary waves' wavefronts creates the advanced wave front. This finding is called Huygens' principle and is the foundation on which wave field synthesis is based.
Figure 8.4 illustrates Huygens' principle. Figure 8.5 clarifies this illustration
by reducing it to two dimensions and splitting it into states at different points in time.
The black disk in Fig. 8.5a represents the source at t0 which creates a wavefront that
spreads out concentrically. This wavefront is illustrated in dark gray in Fig. 8.5b with
some points on it. Each point on this wave front can be considered the origin of an elementary source, each of which again creates a wave front, represented by the gray disks in Fig. 8.5c. Together, these wave fronts form the propagated wave front of the original
source at a later point in time illustrated in Fig. 8.5d. The distance between those
elementary waves has to be infinitesimally small. A monopole-shaped radiation of
these elementary waves would create a second wave front at time t2 . This second wave
front would be inside the earlier wave front, closer to the original breathing sphere
again. This can clearly be seen in both Figs. 8.4 and 8.5c: one half of each elementary wave is located inside the dark gray wave front. This is physically incorrect; the elementary waves must have a radiation characteristic which is 0 towards the source. This radiation characteristic is described by the Kirchhoff–Helmholtz
integral (K-H integral), discussed in the subsequent Sect. 8.2.2.
(a) t0: breathing sphere (black). (b) t1: elementary sources (black dots) on emanating wave front (gray). (c) t2: wave fronts from elementary sources. (d) t2: further emanated wave front from breathing sphere.
Fig. 8.5 Wave fronts of a breathing sphere at three points in time in 2D. The breathing sphere
at t0 (a) creates a wave front at t1 (b). Points on this wave front can be considered as elementary
sources which also create wave fronts at t2 (c). By superposition these wave fronts equal the further
emanated wave front of the breathing sphere (d). From Ziemer (2016), p. 55
Gauss' theorem19 states that the volume integral of the gradient of a function over a volume V equals the surface integral of the normal component of that function over the volume's surface S:

$$\int_V \nabla f \, \mathrm{d}V = \oint_S f\, \mathbf{n}\, \mathrm{d}S \qquad (8.2)$$
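As a quick plausibility check of Eq. 8.2, one special case can be verified numerically. The Monte-Carlo sketch below uses the scalar function f = z on the unit ball, for which both sides of the theorem have the z-component 4π/3; the sample count is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(1)

# Volume side of Eq. 8.2 with f = z on the unit ball: grad f = (0, 0, 1),
# so the z-component of the integral is simply the ball's volume, 4*pi/3.
volume_side = (4.0 / 3.0) * np.pi

# Surface side: z-component of the surface integral is the integral of
# z * n_z = z**2 over the unit sphere, estimated by the sphere area 4*pi
# times the mean of z**2 over uniformly distributed surface samples.
pts = rng.standard_normal((200_000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # uniform on unit sphere
surface_side = 4.0 * np.pi * np.mean(pts[:, 2] ** 2)
```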
From Green’s second theorem and the wave equations, Eqs. 5.4 and 5.16, the
Kirchhoff–Helmholtz integral can be derived, which links the wave field of a source-
free volume V with sources Y on its surface S:
$$-\frac{1}{4\pi}\oint_S \left[\frac{\partial P(\omega,\mathbf{Y})}{\partial n}\, G(\omega,\Delta r) - P(\omega,\mathbf{Y})\, \frac{\partial G(\omega,\Delta r)}{\partial n}\right] \mathrm{d}S = \begin{cases} P(\omega,\mathbf{X}), & \mathbf{r} \in V \\ \tfrac{1}{2}\, P(\omega,\mathbf{X}), & \mathbf{r} \in S \\ 0, & \mathbf{r} \notin V \end{cases} \qquad (8.4)$$
Note that Eqs. 8.3 and 8.4 are integral equations which include the sought-after function and its derivative. The K-H integral states that the spectrum and its gradient on the volume surface need to be known to describe the wave propagation direction.
Fig. 8.6 Two dimensional illustration of superposition. Monopole and dipole source form a cardioid-shaped radiation. After Ziemer (2018), p. 335. From Ziemer (2016), p. 57
The Kirchhoff–
Helmholtz integral can describe wave fronts of monopole sources or plane waves as
well as complex radiation patterns and diffuse sound fields with a random distribution
of amplitudes, phases and sound propagation directions. In the illustrated example the
elementary waves have different gray levels, indicating different complex amplitudes.
So the amplitude and phase differ in every direction, as naturally observed in
musical instruments, demonstrated, e.g., for the shakuhachi in Fig. 5.7, in Sect. 5.3.1.
The volume could also be any arbitrary other geometry. It could be the surface
of a physically existing or non-existing boundary. This boundary is the separation
surface between a source volume, which contains one or more sources, and a source-
free volume, which contains the listening area. Any arbitrary closed boundary is
conceivable as long as the premises of Gauss' theorem are met. Figure
8.8 illustrates three examples for a volume boundary, which will be regarded in
later chapters. Two types of setups exist: Surrounding the listener with secondary
sources—as in Fig. 8.8a and c—or surrounding the primary source(s), as illustrated
in Fig. 8.8b.22
The Kirchhoff–Helmholtz integral describes analytically how spectrum and radi-
ation on a volume surface are related to any arbitrary wave field inside a source-free
volume. It is therefore the core of wave field synthesis.23
Fig. 8.8 Three volumes V with possible source positions Q. After Ziemer (2016), p. 58
(2010), Reisinger (2002, 2003) and Spors (2007), circular array, see Spors (2007), Rabenstein et al.
(2006), Reisinger (2002, 2003) and Rabenstein and Spors (2008), and three to four lines surrounding
the listening area, see Spors et al. (2003), Reisinger (2002, 2003), Rabenstein et al. (2006).
For implementing such Wave Field Synthesis (WFS) systems, the K-H integral has to be adjusted to the restrictive circumstances, which leads to errors in the synthesis. A number of constraints simplify the K-H integral in a way which allows for a technical implementation of the theory by means of loudspeaker arrays:28
1. Reduction of the boundary surface to a separation plane between source-free
volume and source volume
2. Restriction to one type of radiator (monopole or dipole)
3. Reduction of three-dimensional synthesis to two dimensions
4. Discretization of the surface
5. Introduction of a spatial border
The particular steps will be successively accomplished in the following subsections.
8.3.2 Rayleigh-Integrals
28 These or similar simplifications are also proposed by Rabenstein et al. (2006), p. 529.
29 See e.g. Burns (1992).
8.3 Wave Field Synthesis
as a rigid surface,30 leading to the Rayleigh I integral for secondary monopole sources, as already introduced in Eq. 5.29 in Sect. 5.3.3:
$$P(\omega, \mathbf{X}) = -\frac{1}{2\pi} \oint_{S_1} \frac{\partial P(\omega, \mathbf{Y})}{\partial n}\, G_D(\omega, \Delta r)\, \mathrm{d}S. \qquad (8.7)$$
Assuming $\partial G_N(\omega, \Delta r)/\partial n$ to be 0 satisfies the homogeneous Neumann boundary condition31 and the first term of Eq. 8.5 vanishes. This is accomplished by choosing

$$G_N(\omega, \Delta r) = \frac{e^{-\imath k \Delta r}}{\Delta r} - \frac{e^{-\imath k \Delta r'}}{\Delta r'}, \qquad (8.9)$$

with $\Delta r'$ denoting the distance to the mirrored source position,
yielding the Rayleigh II integral for secondary dipole sources:
$$P(\omega, \mathbf{X}) = -\frac{1}{2\pi} \oint_{S_1} P(\omega, \mathbf{Y})\, \frac{\partial G(\omega, \Delta r)}{\partial n}\, \mathrm{d}S. \qquad (8.10)$$
In both cases the second simplification criterion from Sect. 8.3.1 is satisfied. But since
the destructive interference outside the source-free volume is missing, P (ω, X) for
X∈ / V is not 0. A mirrored sound field in the source volume is the consequence. In
case of monopoles the sound field created by the secondary sources is identical with
the one inside the source-free volume. This effect is similar to the earlier illustration
of Huygens’ principle, Figs. 8.4 and 8.5. In case of dipole sources the phase in the
source volume is the inverse of the phase inside the source-free volume. Additionally, the sound pressure or, respectively, the particle velocity doubles by adding the general solution of the Green's function. Both cases are illustrated in Fig. 8.9 for a
one-dimensional loudspeaker array.
Neither formulation applies to arbitrary volume surfaces, but to separation planes only.32 To ensure that any position around the listening area can be a source
position, the listening area has to be surrounded by several separation planes. If Eqs.
Fig. 8.9 Desired sound field above and mirrored sound field below a separation plane according
to the Rayleigh I integral for secondary monopole sources (a) and the Rayleigh II integral for
secondary dipole sources (b). After Ziemer (2018), pp. 337 and 338
(a) Three active line arrays. (b) Two active line arrays. (c) One active line array.
Fig. 8.10 Illustration of the spatial windowing effect: A circular wave front superimposes with
virtual reflections from two (a) or one (b) additional loudspeaker array(s). When muting those
loudspeakers whose normal direction deviates from the local wave front propagation direction by
more than 90◦ (c), the synthesized wave front is much clearer. Here, the remaining synthesis error
is a truncation error, resulting from the finite length of the loudspeaker array. After Ziemer (2018),
p. 338
8.7 and 8.10 are applied to other geometries, they still deliver approximate results.33
In any case, the source-free volume has to be convex so that no mirrored sound field
lies inside the source-free volume, i.e. volume (a) in Fig. 8.8 is inappropriate.34 Since
S1 is implicitly modelled as a rigid surface, several reflections occur when a listening
area is surrounded by several separation planes. These unwanted reflections emerge
from speakers whose positive contribution to the wave front synthesis lies outside
the listening area. The portion of sound that propagates into the listening area does
not coincide with the synthesized wave propagation direction. This artifact can be reduced by a spatial “windowing”35 technique applied to the Rayleigh I integral:
$$P(\omega, \mathbf{X}) = \oint_{S_1} d(\mathbf{Y})\, 2\, \frac{\partial P(\omega, \mathbf{Y})}{\partial n}\, G(\omega, \Delta r)\, \mathrm{d}S, \qquad d(\mathbf{Y}) = \begin{cases} 1, & \text{if } \langle \mathbf{Y} - \mathbf{Q},\, \mathbf{n}(\mathbf{Y}) \rangle > 0 \\ 0, & \text{otherwise} \end{cases} \qquad (8.11)$$
Here, d (Y) is the windowing function for spherical waves which is 1 if the local
propagation direction of the sound of the virtual source at the position of the secondary
source has a positive component in normal direction of the secondary source. If the
deviation is π2 or more, d (Y) becomes 0 and the speaker is muted. That means only
those loudspeakers whose normal component resembles the tangent of the wave front
of the virtual source are active. The term G (ω, Δr) describes the directivity function
of the secondary source, i.e. of each loudspeaker. The other terms are the sought-after
driving functions D of the loudspeakers36 :
$$D(\omega, \mathbf{Y}) = 2\, d(\mathbf{Y})\, \frac{\partial P(\omega, \mathbf{Y})}{\partial n} \qquad (8.12)$$
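A minimal sketch of the windowing function d(Y) of Eqs. 8.11 and 8.12 for a linear array; the array geometry and the virtual source position Q are hypothetical:

```python
import numpy as np

def window(y, q, n):
    """d(Y): 1 if the sound of the virtual source at Q propagates with a
    positive component along the secondary-source normal n at Y, else 0."""
    return 1.0 if np.dot(y - q, n) > 0 else 0.0

q = np.array([0.0, -1.0])                 # virtual source behind the array
n = np.array([0.0, 1.0])                  # normals point into listening area
speakers = [np.array([x, 0.0]) for x in np.linspace(-2.0, 2.0, 5)]

# The driving function of Eq. 8.12 keeps only speakers with d(Y) = 1;
# here the source lies behind the array, so all five remain active.
active = [window(y, q, n) for y in speakers]
```

A speaker whose normal points away from the local propagation direction, e.g. one on the opposite side of the listening area, yields d(Y) = 0 and is muted.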
An example for the unwanted virtual reflections due to applying the Rayleigh integral
although surrounding the listening area from three sides is given in Fig. 8.10. The
same wave front is synthesized according to the Rayleigh integral in three ways. In (a),
three linear loudspeaker arrays are active. Here, the desired wave front superimposes
with virtual reflections from the two additional arrays. In (b), one loudspeaker line
array is muted. The contribution of these loudspeakers to the wave front synthesis
would lie above the figure, i.e. outside the listening area. Muting them does not
decrease synthesis precision in the listening area. In (c), the second line array is muted.
Now one can clearly see the desired wave front curvature. No virtual reflections are
visible. The remaining sound field synthesis error is the so-called truncation error.
It will be discussed in detail in Sect. 8.3.3.
Although the volume is considered a source- and obstacle-free field, it is to a certain extent possible to recreate the wave field of a virtual source within the source-free volume.
This is achieved by assuming an inverse propagation and calculating a concave wave
front at the surface which focuses at the position of the virtual source and creates a
convex wave front from then on. These sources are called “focused sources”.37 Figure
8.10 already exemplifies a focused source. More examples will be given throughout
the chapter. Of course, focused sources will not work for listeners between the active
loudspeakers and the focus. For them, the wave front seems to arrive somewhere from
loudspeaker array and not from the focus. In contrast, listeners behind the focus do
not experience the concave wavefront. They simply hear the convex wave front which
seems to originate in the focus point. So focused sources reduce the extent of the
listening area.
For applications in which the audience is arranged more or less in a plane, it is sufficient to recreate the wave field correctly for that listening plane only, rather than in
the whole listening volume. Furthermore, the best source localization resolution of
the human auditory system is in the horizontal plane as discussed in Sect. 4.4. This
is the main reason why conventional audio systems mostly focused on horizontal
audio setups, as presented in Chap. 7. Luckily, when listening to music, listeners are
often organized roughly in plane, like in many concert halls, opera houses, cinemas,
theaters, in the car, on the couch in the living room, etc. Furthermore, one or several one-dimensional distributions of loudspeakers are easier to implement than covering a complete room surface with loudspeakers. Reducing the three-dimensional
wave field synthesis to two dimensions reduces the separation plane S1 to a separation line L1. In theory, one could simply reduce the surface integral to a line integral and the Rayleigh integrals would take the forms
$$P(\omega, \mathbf{X}) = -\frac{1}{2\pi} \int_{L_1} \frac{\partial P(\omega, \mathbf{Y})}{\partial n}\, G(\omega, \Delta r)\, \mathrm{d}S_1 \qquad (8.13)$$

and

$$P(\omega, \mathbf{X}) = -\frac{1}{2\pi} \int_{L_1} P(\omega, \mathbf{Y})\, \frac{\partial G(\omega, \Delta r)}{\partial n}\, \mathrm{d}S_1, \qquad (8.14)$$

with

$$\mathbf{X} = \begin{pmatrix} x \\ y \end{pmatrix}. \qquad (8.15)$$
This solution would be satisfactory if no third dimension existed, e.g. if the wave fronts of the secondary sources exhibited not a spherical but a circular or cylindrical propagation.38 Then, the propagation function $G(\omega, \Delta r)$ would be different, having an amplitude decay of $1/\sqrt{r}$ instead of $1/r$. This is owed to the fact that the surface S of a circle or cylinder doubles with a doubled circle radius $r_\mathrm{circle}$,

$$S = 2\pi r_\mathrm{circle}, \qquad (8.16)$$

in contrast to the spherical case, in which it quadruples with a doubled radius, as already indicated in Eq. 5.24 in Sect. 5.1.6. In this case
$$I \propto \frac{1}{r} \qquad (8.17)$$

and thus

$$p \propto \frac{1}{\sqrt{r}}. \qquad (8.18)$$
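In level terms, Eqs. 8.17 and 8.18 mean that each doubling of distance costs about 3 dB for cylindrical spreading instead of the 6 dB of spherical spreading. A quick arithmetic check:

```python
import math

def level_drop_per_doubling(exponent):
    """Level change in dB when r doubles, for p proportional to 1/r**exponent."""
    return 20 * math.log10(1.0 / 2.0 ** exponent)

spherical = level_drop_per_doubling(1.0)    # p ~ 1/r      -> about -6.02 dB
cylindrical = level_drop_per_doubling(0.5)  # p ~ 1/sqrt(r) -> about -3.01 dB
```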
38 See e.g. Spors et al. (2008) pp. 8f, Rabenstein et al. (2006), pp. 521ff.
So the practical benefit of Eqs. 8.13 and 8.14 is minor, since transducers with a cylindrical radiation in the far field are hardly available.39 An approximately cylindrical radiation
could be achieved with line arrays of loudspeakers.40 But replacing each individual
loudspeaker by a line array of speakers contradicts our goal to reduce the number
of loudspeakers. Simply replacing cylindrically radiating speakers by conventional
loudspeakers which have a spherical radiation function leads to errors in this wave
field synthesis formulation due to the deviant amplitude decay.
Huygens' principle states that a wave front can be considered as consisting of infinitesimally distanced elementary sources. An infinite planar arrangement of elementary point sources with a spherical radiation could (re-)construct a plane wave, since the amplitude decay which is owed to the 1/r-distance law is compensated by the contribution of the other sources. Imagining secondary line sources with a cylindrical radiation, a linear arrangement of sources would be sufficient to create a planar wave front. In a linear arrangement of elementary point sources, the contribution of the sources from the second dimension is missing, resulting in an amplitude decay.
Therefore, a “2.5D-operator” including a “far field approximation” which modifies
the free-field Green’s function to approximate a cylindrical propagation is used.41
This changes the driving function to
$$D_{2.5\mathrm{D}}(\omega, \mathbf{Y}) = \sqrt{\frac{2\pi\, |\mathbf{Y} - \mathbf{X}_\mathrm{ref}|}{\imath k}}\; D(\omega, \mathbf{Y}) \qquad (8.19)$$
with Xref being a reference point in the source-free volume. This yields the “2.5-
Dimensional” Rayleigh integral42 :
$$P(\omega, \mathbf{X}) = -\int_{-\infty}^{\infty} D_{2.5\mathrm{D}}(\omega, \mathbf{Y})\, G(\omega, \Delta r)\, \mathrm{d}Y \qquad (8.20)$$
Taking reference points Xref parallel to the loudspeaker array, the wave field can be
synthesized correctly along a reference line. Between the speakers and the reference
line, the sound pressures are too high, behind it they are too low.
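The 2.5D correction of Eq. 8.19 can be sketched as a frequency- and distance-dependent factor. The square-root form of the operator and the free-field wave number are assumptions of this sketch:

```python
import numpy as np

def correction_2_5d(y, x_ref, f, c=343.0):
    """Factor sqrt(2*pi*|Y - Xref| / (i*k)) applied to the 2D driving
    function (Eq. 8.19); y and x_ref are positions in metres, f in Hz."""
    k = 2 * np.pi * f / c
    dist = np.linalg.norm(np.asarray(y) - np.asarray(x_ref))
    return np.sqrt(2 * np.pi * dist / (1j * k))

# The magnitude grows with the distance to the reference line, compensating
# the excessive level decay of point sources used in place of line sources.
near = abs(correction_2_5d([0.0, 0.0], [0.0, 1.0], 1000.0))
far = abs(correction_2_5d([0.0, 0.0], [0.0, 2.0], 1000.0))
```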
Until now, free-field conditions have been assumed. However, if not installed in the free field, reflections may occur and superimpose with the intended wave field created
by the loudspeaker system. Under the term “listening room compensation” a variety
of methods are proposed to reduce the influence of reflections. The simplest form is
passive listening room compensation which means that the room is heavily damped.
This is a proven method, applied e.g. in cinemas. However, for some listening
rooms, for example living rooms, damping is impractical. Therefore, active solutions
are proposed, like adding a filtering function which eliminates the first reflections of
pp. 153–156. The derivation of the 2.5D-operator is given in Ahrens (2012), pp. 288f.
the room to the calculated loudspeaker signals.43 “Adaptive wave field synthesis”44
uses error sensors which measure errors occurring during WFS of a test stimulus
emerging e.g. from reflections. Then any WFS solution is modified by a regularization
factor which minimizes the squared error. This is of course a vicious circle since
compensation signals corrupt the synthesized wave field and are reflected, too, adding
further errors. This problem is related to the error compensation of head-related audio
systems. Due to an exponentially increasing reflection density it is hardly possible
to account for all higher order reflections. Thus, the approach is limited to first order
reflections.
8.3.2.2 Discretization
and
$$P(\omega, \mathbf{X}) = -\frac{1}{2\pi} \sum_{r_Y = -\infty}^{\infty} P(\omega, \mathbf{Y})\, \frac{\partial G(\omega, \Delta r)}{\partial n}\, \Delta r_Y \qquad (8.22)$$
43 See Horbach et al. (1999), Corteel and Nicol (2003), Spors et al. (2003, 2004, pp. 333–337,
2007b).
44 See Gauthier and Berry (2007).
45 See Spors (2008), p. 1. An adaption of WFS to the radiation characteristic of the loudspeakers is
is valid, where α is the angle between the normal direction of a loudspeaker and the wave when striking this loudspeaker. Equivalently, it can be considered as the angle between the separation line L1 and the tangent of the wave front when striking the speaker position. This leads to an adjustment of Eq. 8.23 to
$$f_\mathrm{max} = \frac{c}{2\, \Delta Y \sin\alpha}. \qquad (8.25)$$
The angle α may vary, depending on position and radiation of the source, in a range between π/2 and 3π/2. Two examples for α are illustrated in Fig. 8.11 to clarify the relationship. The black disk represents the source, the dark and light gray disks the
wave front at two different points in time, just as in Figs. 8.4 and 8.5 in Sect. 8.2.1.
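Eq. 8.25 can be evaluated directly. A small sketch for typical spacings, with the speed of sound assumed to be 343 m/s:

```python
import math

def aliasing_frequency(delta_y, alpha, c=343.0):
    """Upper frequency limit of correct synthesis (Eq. 8.25) for loudspeaker
    spacing delta_y in metres and incidence angle alpha in radians."""
    return c / (2.0 * delta_y * abs(math.sin(alpha)))

# Worst case alpha = 90 degrees: the wave front travels along the array.
f_max = {d: aliasing_frequency(d, math.pi / 2) for d in (0.1, 0.2, 0.3)}
# A 10 cm spacing keeps correct synthesis up to roughly 1.7 kHz.
```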
Undersampling creates erroneous wavefronts above f max . These erroneous wave-
fronts contain the frequencies above the critical frequency, cause perceivable changes
in sound color and disturb the localization of the virtual source.46 Two examples of
spatial aliasing are illustrated in Fig. 8.12. These illustrations contain an additional
error due to the finite number of loudspeakers. It is called truncation error and will be
discussed in detail in the subsequent subsection. Aliasing wave fronts create a spatial comb filter effect which colors stationary signals and smears transients. They can
be heard as high-frequency echoes following the desired wave front. In the case of
focused sources, they create high-frequency pre-echoes preceding the desired wave
front. As long as the condition
$$|\sin\alpha(\omega)| < \frac{c}{2\, \Delta Y f_\mathrm{max}} = \frac{\pi c}{\Delta Y\, \omega_\mathrm{max}} \qquad (8.26)$$
(a) Plane wave without aliasing. (b) Plane wave with aliasing. (c) Focused source without aliasing. (d) Focused source with aliasing.
Fig. 8.12 Virtual sources with (b and d) and without (a and c) aliasing. Erroneous wave fronts
superimpose with the desired wave fronts. All synthesized wave fronts exhibit a truncation error
which has to be compensated. After Ziemer (2016), p. 69
47 See Spors et al. (2008), p. 15, Wittek (2007), pp. 96–105, Reisinger (2002), pp. 42ff, Huber
(2002), pp. 20–54.
48 See López et al. (2005).
49 See Spors et al. (2008), p. 17.
50 See Wittek (2007), p. 88.
Fig. 8.13 Above the critical frequency, regular amplitude errors occur (a). By phase randomization
(b) the amplitude and phase distribution becomes irregular. After Ziemer (2018), pp. 340 and 341
with one group source position. Then, the lower frequency region—which offers
very precise localization cues due to the correct reconstruction of the wavefield—
is crucial for a distinct and correct localization and the wrong localization cues of
higher frequencies are neglected by the auditory system. All these psychoacoustic phenomena have already been illuminated in Chap. 4, especially Sects. 4.3 and 4.5.
Of course, these methods work best if the chosen distance between adjacent speak-
ers is so small that the aliasing frequency is as high as possible. Then it can even
be speculated that the influence of the frequencies above the critical frequency is
weak concerning sound coloration and localization. Spors et al. (2008) confirm this
assumption:
However, the human auditory system seems to be not too sensible to spatial aliasing if the
loudspeaker spacing is chosen in the range Δx = 10 . . . 30 cm.51
Quite a different method is to recreate the wave field not for the discrete loudspeaker
positions but for discrete listening positions sampling the listening area. The approach
is called “sound field reconstruction” or “sound field reproduction” applying least-
squares solution.52 Sampling positions are chosen under the assumption that if a
wave field is reproduced correctly on a grid satisfying the Nyquist–Shannon sampling
theorem, the wave field is correct everywhere inside the grid. This approach can be
combined with crosstalk cancellation—as discussed in Chap. 7—to create a realistic
binaural signal at discrete listening positions.53
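The least-squares reproduction at discrete listening positions can be sketched as an over-determined linear system. The transfer matrix below is a random placeholder standing in for actual loudspeaker-to-control-point Green's functions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_speakers = 12, 8              # more control points than speakers

# Placeholder complex transfer matrix and target pressures at the grid points.
G = rng.standard_normal((n_points, n_speakers)) \
    + 1j * rng.standard_normal((n_points, n_speakers))
p_target = rng.standard_normal(n_points) + 1j * rng.standard_normal(n_points)

# Over-determined system: minimize ||G a - p_target||^2 in the least-squares
# sense, analogous to the sound field reconstruction approach.
a, *_ = np.linalg.lstsq(G, p_target, rcond=None)
residual = np.linalg.norm(G @ a - p_target)
```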
51 Spors et al. (2008), p. 17. Note that Spors et al. (2008) name the speaker positions “x”, in this
book they are called Y.
52 Cf. Kolundzija et al. (2009b) and Kirkeby and Nelson (1993).
53 Proposed and implemented by Menzel et al. (2006).
Constraining the discrete Rayleigh integrals, Eqs. 8.21 and 8.22, to a finite number of speaker positions is the 5th simplification of the list in Sect. 8.3.1. This creates two
borders from which the created wave front curvatures fade to the wave front of the
speaker itself. This effect is called “truncation”.54 It appears like diffraction through a
gap and has the effect that the wave field cannot be synthesized in the area beyond the
border. Furthermore, a more or less spherical wave front propagates from the border, originating in the last speaker,55 since the compensatory effect of adjacent speakers
is missing. The truncation effect can be compensated by reducing the amplitudes of
the outermost speakers. This does, however, slightly reduce the listening area extent.
Figure 8.14 shows this artifact and its correction by applying a half cosine filter at the
left end of the loudspeaker array. This gradual amplitude attenuation is referred to as
tapering. It can be seen that, due to tapering, the amplitude of the virtual wavefront
decays towards the outer positions in the listening area. The truncation error is generally weaker in corners, where two line arrays meet. An example is illustrated in Fig. 8.15. Truncation can also be counteracted by compensation sources, i.e., speakers at the array ends driven with antiphased signals.
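A half-cosine taper as in Fig. 8.14 can be sketched as an amplitude window over the array; the ramp length of four speakers per side is an arbitrary assumption (in the figure, only the left array end is tapered):

```python
import numpy as np

# Half-cosine tapering sketch: attenuate the outermost loudspeakers
# of a linear array to suppress the spherical truncation wave.
def tapering_window(n_speakers, n_ramp=4):
    """Amplitude weights with a half-cosine fade at both array ends."""
    w = np.ones(n_speakers)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    w[:n_ramp] = ramp           # fade in at the left end
    w[-n_ramp:] = ramp[::-1]    # fade out at the right end
    return w

w = tapering_window(16)
print(np.round(w, 3))
```

Each loudspeaker's driving signal is simply multiplied by its weight, trading the truncation wave for a slightly smaller listening area.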
Until now, we have assumed a free field and added loudspeaker arrays as secondary sources. Thus, even in a highly damped room or outdoors, the assumption of a free field barely holds. This is especially true when actual listeners are present. Luckily, loudspeakers and listeners cause similar absorption, reflections and diffraction, no matter whether the impinging wave front is natural or synthesized. So the presence of loudspeakers and listeners does not seem to corrupt the wave front synthesis system. But
this might change if we consider focused sources. Listeners between the focus point
and the loudspeakers not only have trouble localizing the virtual focused source.
They also corrupt the concave wave front and, as a consequence, the synthesized
wave front is erroneous. This effect seems to be weak; to the author's knowledge, it is not addressed in the literature, probably because the human body barely affects low frequencies, which diffract almost perfectly around the listener. Higher frequencies, however, may not diffract around the listener completely and can create an undesired acoustic shadow. They are largely absorbed, but the wave fronts of high frequencies are erroneous anyway due to spatial aliasing. Furthermore, many
wave front synthesis systems are installed slightly above the audience. Consequently,
listeners are barely in the direct path between the loudspeakers and other listeners.
Much more critical is the presence of physical borders, i.e., room walls, floor and
ceiling. Reflections from these surfaces superimpose with the desired wavefront. If
54 See Start (1997), pp. 47ff, Verheijen (1997), pp. 50ff and Baalman (2008), pp. 37ff.
55 See Spors et al. (2008), p. 14.
8.3 Wave Field Synthesis 225
Fig. 8.14 Truncation effect of a virtual plane wave (a) and its compensation by applying a cosine
filter (b). The spherical truncation wave emanating from the left end of the loudspeaker array is
eliminated. The remaining error occurs from the untapered right end of the array. After Ziemer
(2016), p. 71
the wavefronts of the direct sound were synthesized correctly, the room acoustics would sound perfectly natural. But as discussed throughout this chapter, this is not the case. First of all, the sound field is typically synthesized in one plane only. And even in this plane, the amplitude decay is too strong, aliasing errors occur, and synthesis errors are produced at the ends of the loudspeaker array, be it due to truncation or due to tapering. Outside this plane, wave fronts are not controlled at all and deviate from natural wave fronts. It follows that reflections from floor and ceiling, in particular,
are unnatural. Synthesizing not only direct sound but additional room acoustics is
difficult. These would always superimpose with the reverberation of the listening
room. An example of the same synthesized wave front in a free field, in presence of
a highly reflective wall, and a highly absorbing wall is illustrated in Fig. 8.16.
Damping the listening room is probably the easiest way to avoid undesired reflec-
tions. The downside is that it makes wave field synthesis systems even less flexible.
The high number of loudspeakers already affects the interior of the room and so
the installation of additional absorbers may be difficult and undesired. Therefore,
technical solutions have been developed. Reducing the undesired reverberation of
the room by technical means is referred to as active listening room compensation.
Fig. 8.16 Wave field in a free field (a), in presence of a reflective wall (b) and highly absorbing
wall (c). After Ziemer (2018), p. 343
56 The reader can refer e.g. to Spors et al. (2003, 2007a, b), Corteel and Nicol (2003).
57 See Ahrens (2012), p. 13.
58 See e.g. Ahrens (2012), p. 13.
59 See e.g. Avizienis et al. (2006), Pollow and Behler (2009) and Kassakian and Wessel (2004),
Ziemer (2009).
60 See e.g. Warusfel and Misdariis (2004), p. 3.
61 See Zotter (2009), pp. 111–152.
8.4 Sound Field Synthesis and Radiation Characteristics 227
The sound radiation characteristics of musical instruments significantly affect the room response and lead to changes in the perceived naturalness and loudness.62
The high quantity and quality of research in the field of wave field synthesis led
to market-ready loudspeaker systems which are able to create impressively realistic
sounds with a distinct location of the source. But typically, virtual monopole sources
or plane waves are created, which have small perceived dimensions.63 There have
been many attempts already to recreate the sound radiation characteristics of musical
instruments via sound field synthesis. Menzel et al. (2006) proposed a WFS method
to create binaural signals for a single listening position.64 Baalman (2008) uses
several monopole sources on the body of the virtual sound source to recreate its
radiation patterns.65 This approach is promising but the application is a compromise:
A small number of monopole sources does not meet the complexity of many sound
sources. A high number of monopole sources on the other hand may lead to an
optimal recreation of the radiation characteristic but the computational costs are
enormous, as already mentioned in Sect. 5.3.3 about equivalent source methods in microphone array measurements. However, in more than 70% of the cases, subjects
of listening tests reported a higher “naturalness” for sources with complex radiation
62 As already mentioned in Sect. 6.1. See also Martín et al. (2007), p. 395, Otondo and Rindel
(2004), p. 1183.
63 See e.g. Ahrens (2012), p. 198ff.
64 See Menzel et al. (2006).
65 See Baalman (2008), p. 97ff.
source extent is needed to enable us to create a sound field that evokes the desired spatial impression psychoacoustically, even if the physical wave field differs from a natural wave field emitted by a musical instrument.70
As discussed thoroughly in Chap. 5, actual musical instruments may radiate their
sound from several vibrating surfaces and through multiple openings. This way wave
fronts interfere and create the complicated patterns that make the sound broad and
vivid. The radiation characteristics result from the extent of the body. Therefore, it may seem paradoxical to simplify musical instruments as point sources, especially if the sound radiation characteristics are to be measured, analyzed and synthesized. Simplifying a musical instrument as a point is not physically correct, but it is mathematically convenient. Such a point source has a singularity at its origin. From there on the
sound wave propagates as a monopole. However, it is possible to define a direction-
dependent function that describes a modification of amplitude and phase for each
direction. Then, this wave front travels spherically, like a monopole. But this wave
front is not necessarily an isobar. Amplitude and phase may vary over the spherical
wavefront. Simplifying sound sources and propagation this way is referred to as
complex point source model.71 It could be shown that propagating a source sound of
musical instruments by means of Eq. 9.1 yields a plausible sound field. The interaural
phase and level differences at a virtual listener's ears decrease as the distance to the source increases. When applying the complex point source model, the actual source extent of musical instruments could be predicted fairly well from propagated sound field quantities. Based on the complex point source model, sound radiation characteristics
could be measured and synthesized for discrete listening points in space. This can
give listeners the impression that the sound radiation characteristics are kept in the
loudspeaker playback. The approach has been implemented in an octahedron-shaped
loudspeaker array,72 shown in Fig. 8.19.
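A minimal numerical sketch of the complex point source model: a monopole multiplied by a direction-dependent complex factor D(φ), so that the wave front stays spherical but is no isobar. The example directivity below is purely illustrative, not a measured instrument pattern:

```python
import numpy as np

# Complex point source sketch: spherical spreading from a singularity,
# with amplitude and phase modified per direction by D(phi).
f, c = 440.0, 343.0
k = 2 * np.pi * f / c          # wavenumber

def directivity(phi):
    """Hypothetical complex radiation pattern over azimuth phi."""
    return (1 + 0.8 * np.cos(2 * phi)) * np.exp(1j * 0.3 * np.sin(phi))

def pressure(r, phi):
    """Complex point source: direction factor times monopole decay."""
    return directivity(phi) * np.exp(-1j * k * r) / r

# sample the wave front at a fixed radius: spherical, but not an isobar
phi = np.linspace(0, 2 * np.pi, 8, endpoint=False)
p = pressure(2.0, phi)
print(np.round(np.abs(p), 3))   # amplitude varies along the wave front
```

At every radius the phase front is spherical, yet amplitude and phase vary with direction, which is exactly the property the model exploits.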
Sound field synthesis systems are still at a stage of research and development. Sys-
tems are installed in universities and in research and development departments of
companies. But in addition to that, several systems are already in practical use. They
serve for immersive audio in the entertainment sector, like cinemas, theaters, clubs,
73 Most of all the wave field synthesis system of the Technical University Berlin in cooperation
with Deutsche Telekom Laboratories, or IOSONO systems, the wave field synthesis system of
Fraunhofer IDMT. Further information on installed wave field synthesis systems can be found e.g.
in Baalman (2008), pp. 47ff, Montag (2011), Chaps. 5 and 6, Slavik and Weinzierl (2008), pp. 656f
and 664ff and IOSONO GmbH (2008).
74 See e.g. Gauthier and Berry (2008), p. 1994, Spors et al. (2003), Ahrens et al. (2010), p. 3,
Reisinger (2002), pp. 37–39, Reisinger (2003), pp. 40–44, Vogel (1993), pp. 139f, Baalman (2008),
p. 48 and Verheijen (1997), p. 103.
8.5 Existing Sound Field Synthesis Installations 231
Fig. 8.20 Circular wave field synthesis setup for research. Reproduced from Gauthier and Berry
(2008, p. 1994) with the permission of the Acoustical Society of America
Fig. 8.21 Wave field synthesis setup for research and development at Fraunhofer IDMT
Fig. 8.22 Psychoacoustic Sound Field Synthesis System at the University of Hamburg. From
Ziemer (2016), p. 157
A psychoacoustic sound field synthesis system for music has been developed and
tested at the Institute of Systematic Musicology of the University of Hamburg75 and
will be introduced in detail in the subsequent chapter, Chap. 9. It consists of 15
loudspeakers synthesizing a desired sound field in a listening area of around 1 m².
Just as in many ambisonics systems, the spacing of 0.65 m between the loudspeakers
is rather large. In contrast to wave front synthesis and some ambisonics approaches,
every loudspeaker is active for every virtual source position. Perceptual mechanisms
of the auditory system are considered in the sound field synthesis approach so that
a precise localization and a natural and spatial sound impression are created despite
physical synthesis errors. Even beyond the listening area, the localization is rather
precise. Figure 8.22 is a photo of the installed system.
A full-duplex wave field synthesis system for communication is being developed
at the Nippon Telegraph and Telephone (NTT) lab in Tokyo.76 An individual combi-
nation of a loudspeaker- and a microphone array is installed in two separate rooms.
In a conference phone call, several subjects can talk on both sides and even move,
while all listeners on the other side can localize the speakers well. Of course, the
proximity of microphones to loudspeakers on both sides of the line can cause serious
problems. So this system focuses on echo-cancellation to suppress feedback loops.
Another important topic is real-time implementation on a single PC, including the
signal processing for the microphone array, the wave field synthesis rendering, and
75 Its developmental progress can be followed by referring to Ziemer (2009, 2011a, b, c, d, 2014, 2015a, b, 2016, 2017a, b, c, 2018), Ziemer and Bader (2015a, b, c, d, 2017).
76 Details can be found e.g. in Emura and Kurihara (2015).
Fig. 8.23 Full duplex wave field synthesis system for communication. From Emura and Kurihara
(2015), with the permission of the Audio Engineering Society
the echo-cancellation. This is achieved by fast rendering on two GPUs. The two
systems are shown in Fig. 8.23.
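The echo-cancellation principle can be sketched with a normalized LMS (NLMS) adaptive filter that estimates the loudspeaker-to-microphone path and subtracts the predicted echo from the microphone signal. The impulse response and step size below are illustrative assumptions, not the NTT implementation:

```python
import numpy as np

# NLMS adaptive echo canceller sketch: identify the unknown echo path
# from the far-end signal x and the microphone signal d, then remove
# the predicted echo so only the residual e remains on the line.
rng = np.random.default_rng(0)
echo_path = np.array([0.0, 0.5, -0.3, 0.2, -0.1])   # unknown room response
n_taps, mu, eps = len(echo_path), 0.5, 1e-8

x = rng.standard_normal(20000)                      # far-end (speaker) signal
d = np.convolve(x, echo_path)[:len(x)]              # echo picked up by the mic

w = np.zeros(n_taps)                                # adaptive path estimate
err = np.zeros(len(x))
for n in range(n_taps, len(x)):
    xv = x[n - n_taps + 1:n + 1][::-1]              # recent input, newest first
    e = d[n] - w @ xv                               # residual echo
    w += mu * e * xv / (xv @ xv + eps)              # NLMS update
    err[n] = e

print("estimated path:", np.round(w, 3))            # approaches echo_path
```

In a full-duplex system such a filter runs per loudspeaker-microphone pair, which is why the GPU-based real-time rendering mentioned above matters.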
The wave front synthesis system at the University of Applied Sciences Hamburg
is coupled to a motion capture system.77 This way, focused sources can be created in
such a way that they are always between one tracked individual and the loudspeaker
array. On the one hand, this brings back the sweet-spot limitation of conventional
spatial audio. But on the other hand, a tracked individual can now walk around a
virtual source or be surrounded completely by a moving focused source. This offers
a new degree of user interaction, which is beneficial, e.g. for virtual reality applica-
tions. Furthermore, listeners can control virtual source locations by trackers in their
hands. Figure 8.24 is a photo of this system. The wave field synthesis system is linked
to a head-mounted display for graphical, three-dimensional virtual reality. With this
powerful combination, the WFS system of the University of Applied Sciences Ham-
burg is used to investigate the potential and limits of redirected walking.78 Here, the
translation and/or rotation of a subject can be under- or overemphasized in the virtual
auditory and visual scene. This creates the illusion that subjects walk paths that
exceed the actual physical room.
A wave field synthesis system for research and public events can be found in the
auditorium of the Berlin University of Technology,79 illustrated in Fig. 8.25.
The wave field synthesis system developed in Berlin is also installed at the University of Music and Theater Hamburg. It is in use for both concerts and research,
especially in the field of network music performance. The installed system can be
seen in Fig. 8.26. The mobile system is transported to event venues like Kampnagel
center for performing arts for demonstrations and concerts.
One WFS system containing 832 loudspeakers delivers an immersive sound expe-
rience at the Seebühne Bregenz,80 illustrated in Fig. 8.27. In contrast to conventional
PA systems with delay lines, a wave front synthesis system does not create echoes
from the rear. These can be annoying to the audience, reduce speech intelligibil-
ity and may create conflicting source localization cues. Furthermore, the amplitude
77 See e.g. Fohl and Nogalski (2013), Fohl (2013), Fohl and Wilk (2015) for details.
78 See e.g. Nogalski and Fohl (2015, 2016, 2017), Meyer et al. (2016) for details on the approaches.
79 Some details can be found in Baalman (2008), Chap. 3 and Slavik and Weinzierl (2008), p. 670.
80 See Slavik and Weinzierl (2008), p. 656.
Fig. 8.24 Wave Field Synthesis System at the University of Applied Sciences Hamburg coupled to
motion capture technology. Original photo by Wolfgang Fohl, provided under Creative Commons
License. The photo is converted to grayscale
Fig. 8.25 Panoramic picture of the WFS loudspeaker system in an auditorium of Berlin University
of Technology containing 832 channels and more than 2700 loudspeakers. Pressestelle TU Berlin,
with friendly permission by Stefan Weinzierl
decay of line arrays decreases with increasing array length. PA systems tend to be too loud
in proximity to the loudspeakers. This is necessary to ensure that the sound pres-
sure level is still high enough at the rear seats despite the large amplitude decay
over distance. Wave front synthesis systems can create a lower amplitude decay and
therefore have a high potential as a PA-alternative for stages with a large audience.
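The different decay laws can be made concrete: an ideal point source loses about 6 dB of sound pressure level per doubling of distance (pressure ∝ 1/r), while an ideal, infinitely long line source loses only about 3 dB (pressure ∝ 1/√r). A short sketch of both:

```python
import numpy as np

# Level decay over distance for spherical vs. cylindrical spreading,
# referenced to the level at 1 m. This is why synthesized plane or
# cylindrical wave fronts cover deep audience areas more evenly.
r = np.array([1.0, 2.0, 4.0, 8.0, 16.0])            # distances in m

L_point = 20 * np.log10(1.0 / r)                    # point source: -6 dB/doubling
L_line = 20 * np.log10(1.0 / np.sqrt(r))            # line source: -3 dB/doubling

for ri, lp, ll in zip(r, L_point, L_line):
    print(f"r = {ri:5.1f} m: point {lp:6.1f} dB, line {ll:6.1f} dB")
```

Over the 16 m span above, the point source drops about 24 dB but the line source only about 12 dB, halving the front-to-back level difference.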
Wave field synthesis systems have another advantage for theater. Most large theaters
Fig. 8.26 Wave field synthesis system for music installations and networked music performance
at the University of Music and Theater Hamburg
Fig. 8.27 Photo of the WFS loudspeaker system at the Seebühne Bregenz. The speakers are arranged
beside and behind the audience. From Slavik and Weinzierl (2008), p. 656
and open air locations for drama create an irritating ear/eye-conflict: The actor is
speaking somewhere on the stage but his or her voice will be localized at one of the
few PA loudspeaker towers. This can be very confusing in a scene with many actors.
A frontal, horizontal WFS line array can give additional localization cues so that the
auditory event better fits the visual scene.
In early 2009, a 189-channel wave field synthesis system was installed at
the Casa del Suono in Parma, Italy. This museum is dedicated to the history of
audio technology. The loudspeakers are installed behind curtains in a room that is
acoustically treated. Visitors can experience the sound without seeing the actual
loudspeaker system.81 Another WFS system was installed in the Tresor club in Berlin, a famous techno club. Here, the conditions for a wave field synthesis
setup are challenging: Standing waves emerge between the solid concrete floor and
ceiling. Furthermore, the ceiling is so low above the loudspeaker array that very early
reflections superimpose with the desired wave field. Moreover, the techno music
has to be produced specifically for the system so that single tracks can receive their
individual virtual source locations or paths. On the other hand, wave field synthesis
offers new possibilities for spatial mixing. At the moment, conventional stereophonic
dance music makes only little use of hard panning.82 The reason for that is
simply that the constellation of loudspeakers and listeners in night clubs is typically
far away from the ideal stereo triangle discussed in Sect. 7.2.2. Loudspeakers may
be positioned far apart, so a sound hard-panned to either speaker might be inaudible
over a large area of the dancefloor. In many night clubs the two stereo channels are
mixed together to get rid of hard panning and incoherent loudspeaker signals. Hence,
producers of electronic dance music have an eye on mono compatibility. Wave field
synthesis systems in night clubs would give music producers and disc jockeys the
opportunity to use space as a creative and dramaturgical tool instead of trying to stay
mono-compatible.
Automobiles are a real challenge for both conventional stereophonic audio and wave front synthesis systems. With five seats, listeners are distributed through the cabin, so several sweet spots are desired at the very least. The exact locations of the driver's and passengers' heads may be unknown and may neither lie in one plane nor be static. So sweet spots may even be insufficient, as they do not account for head and torso movement. There is limited space inside a car, so it is not easy to install line arrays of
loudspeakers or to surround the interior completely with loudspeakers at one height.
Curved loudspeaker arrays on the other hand are challenging in terms of compu-
tational effort and synthesis error compensation. Standing waves occur due to the
rather small dimensions of cars compared to audible wavelengths. Seats are obstacles
that create absorption, deflection and reflections. Their positions are readjusted to the
individual needs of the driver and the passengers, so it is challenging to include them
in a sound field synthesis calculation. Despite these issues, Audi decided to install
Fraunhofer IDMT's wave front synthesis system in the Q7. The system is depicted in
Fig. 8.28.
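The standing-wave problem can be quantified with the axial room-mode formula f_n = n·c/(2L); the cabin length of 2.5 m below is an illustrative assumption, not a measured car dimension:

```python
# Axial room-mode (standing wave) frequencies f_n = n * c / (2 * L):
# for small enclosures like a car cabin, the lowest modes fall right
# into the musically relevant bass range.
c = 343.0    # speed of sound, m/s
L = 2.5      # assumed cabin length, m

modes = [n * c / (2 * L) for n in range(1, 4)]
print([round(f, 1) for f in modes])   # lowest axial modes in Hz
```

With such low mode frequencies spaced far apart, the bass response varies strongly between seats, which any in-car synthesis system has to cope with.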
Another commercial system for TV-soundbars is developed and distributed by
Sonic Emotion. It promises an enlargement of the sweet spot to a sweet area and an
extension of the loudspeaker base by synthesizing plane waves. Figure 8.29 shows a
sound bar including multiple speakers that can be used to synthesize wavefronts.83
Fig. 8.28 Wave front synthesis installation in a car. Photo from Audi Technology Portal (2011), © Audi
Fig. 8.29 Synthesizing plane waves with multiple loudspeakers in a sound bar enlarges the sweet
spot for stereo source signals
Most of these sound field systems aim at reconstructing the spatio-temporal prop-
erties of sound waves in terms of wave front synthesis. Psychoacoustic sound field
synthesis as installed at the University of Hamburg is an alternative approach which
takes auditory perception into account in its derivation. This approach is discussed
in Chap. 9.
References
Adriaensen F (2010) The WFS system at La Casa del Suono, Parma. In: Linux audio conference,
Utrecht, pp 39–45
Ahrens J (2012) Analytic methods of sound field synthesis. Springer, Berlin. https://doi.org/10.
1007/978-3-642-25743-8
Ahrens J (2016) On the generation of virtual early reflections in wave field synthesis. In: Fortschritte
der Akustik—DAGA 2016, Aachen
Ahrens J, Spors S (2008a) Analytical driving functions for higher order ambisonics. In: 2008 IEEE
international conference on acoustics, speech and signal processing, Las Vegas, NV, pp 373–376.
https://doi.org/10.1109/ICASSP.2008.4517624
Ahrens J, Spors S (2008b) Reproduction of moving virtual sound sources with special attention to
the doppler effect. In: Audio engineering society convention 124
Ahrens J, Spors S (2009) Spatial encoding and decoding of focused virtual sound sources. In:
Ambisonics symposium, Graz
Ahrens J, Geier M, Spors S (2010) Perceptual assessment of delay accuracy and loudspeaker
misplacement in wave field synthesis. In: Audio engineering society convention 128
Albrecht B, de Vries D, Jacques R, Melchior F (2005) An approach for multichannel recording and
reproduction of sound source directivity. In: Audio engineering society convention 119
Audi Technology Portal (2019) Sound systems. https://www.audi-technology-portal.de/en/
electrics-electronics/multimedia_en/sound-systems. Accessed 5 Feb 2019
Avizienis R, Freed A, Kassakian P, Wessel D (2006) A compact 120 independent element spherical
loudspeaker array with programmable radiation patterns. In: Audio engineering society convention
120. http://www.aes.org/e-lib/browse.cfm?elib=13587
Baalman M (2008) On wave field synthesis and electro-acoustic music, with a particular focus on
the reproduction of arbitrarily shaped sound sources. VDM, Saarbrücken
Bai MR, Chung C, Wu P-C, Chiang Y-H, Yang C-M (2017) Solution strategies for linear
inverse problems in spatial audio signal processing. Appl Sci 7(6):582. https://doi.org/10.3390/
app7060582
Berkhout AJ (1988) A holographic approach to acoustic control. J Audio Eng Soc 36(12):977–995.
http://www.aes.org/e-lib/browse.cfm?elib=5117
Berkhout AJ, de Vries D, Vogel P (1992) Wave front synthesis: a new direction in electroacoustics.
In: Audio engineering society convention 93, vol 10. https://doi.org/10.1121/1.404755
Berkhout AJ, de Vries D, Vogel P (1993) Acoustic control by wave field synthesis. J Acoust Soc
Am 93(5):2764–2778. https://doi.org/10.1121/1.405852
Berkhout AJ, de Vries D, Sonke JJ (1997) Array technology for acoustic wave field analysis in
enclosures. J Acoust Soc Am 105(5):2757–2770. https://doi.org/10.1121/1.420330
Böhlke L (2016) Sound radiation of the violin in a virtual acoustic environment
Böhlke L, Ziemer T (2017a) Perception of a virtual violin radiation in a wave field synthesis system.
J Acoust Soc Am 141(5):3875. https://doi.org/10.1121/1.4988669
Böhlke L, Ziemer T (2017b) Perceptual evaluation of violin radiation characteristics in a wave field
synthesis system. In: Proceedings of meetings on acoustics, vol 30, no 1, p 035001. https://doi.
org/10.1121/2.0000524
Bleda S, Escolano J, López JJ, Pueo B (2005) An approach to discrete-time modelling auralization
for wave field synthesis applications. In: Audio engineering society convention 118. http://www.
aes.org/e-lib/browse.cfm?elib=13141
Boone MM, Horbach U, de Bruijn WPJ (1999) Virtual surround speakers with wave field synthesis. In: Audio engineering society convention 106, Munich. http://www.aes.org/e-lib/browse.
cfm?elib=8252
Brix S, Sporer T, Plogsties J (2001) CARROUSO–a European approach to 3D audio (abstract). In:
Audio engineering society convention 110, p 528
Burns TH (1992) Sound radiation analysis of loudspeaker systems using the nearfield acoustic
holography (NAH) and the application visualization system (AVS). In: Audio engineering society
convention 93
Cho W-H, Ih J-G, Boone MM (2010) Holographic design of a source array achieving a desired
sound field. J Audio Eng Soc 58(4):282–298. http://www.aes.org/e-lib/browse.cfm?elib=14607
Corteel E (2007) Synthesis of directional sources using wave field synthesis, possibilities, and
limitations. EURASIP J Adv Signal Process Article ID 90509. https://doi.org/10.1155/2007/
90509
Corteel E, Nicol R (2003) Listening room compensation for wave field synthesis. What can be
done? In: Audio engineering society conference: 23rd international conference: signal processing
in audio recording and reproduction, Copenhagen
Daniel J (2003) Spatial sound encoding including near field effect: introducing distance coding filters
and a viable, new ambisonic format. In: Audio engineering society conference: 23rd international
conference: signal processing in audio recording and reproduction, Copenhagen
Daniel J, Nicol R, Moreau S (2003) Further investigations of high order ambisonics and wavefield
synthesis for holophonic sound imaging. In: Audio engineering society convention 114
de Vries D (1996) Sound reinforcement by wavefield synthesis: adaption of the synthesis operator
to the loudspeaker directivity characteristics. J Audio Eng Soc 44(12):1120–1131. http://www.
aes.org/e-lib/browse.cfm?elib=7872
de Vries D, Start EW, Valster VG (1994) The wave field synthesis concept applied to sound rein-
forcement restrictions and solutions. In: Audio engineering society convention 96, Amsterdam
Elen R (2001) Ambisonics: the surround alternative. http://www.ambisonic.net/pdf/ambidvd2001.
pdf. Accessed 22 Nov 2010
Emura S, Kurihara S (2015) Echo canceler for real-time audio communication with wave field
reconstruction. In: Audio engineering society convention 139, New York, NY. http://www.aes.
org/e-lib/browse.cfm?elib=17984
Fohl W (2013) The wave field synthesis lab at the HAW Hamburg. In: Bader R (ed) Sound-
Perception-Performance. Springer, pp 243–255. https://doi.org/10.1007/978-3-319-00107-4_10
Fohl W, Nogalski M (2013) A gesture control interface for a wave field synthesis system. In:
Proceedings of international conference on new interfaces for musical expression, Daejeon +
Seoul/Republic of Korea, pp 341–346
Fohl W, Wilk E (2015) Enhancements to a wave field synthesis system to create an interactive
immersive audio environment. In: Proceedings of international conference on spatial audio, VDT
Friedrich HJ (2008) Tontechnik für Mediengestalter. Töne hören—Technik verstehen—Medien
gestalten. Springer, Berlin
Friesecke A (2007) Die Audio-Enzyklopädie. Ein Nachschlagewerk für Tontechniker. K.G. Saur,
Munich
Gauthier P-A, Berry A (2007) Adaptive wave field synthesis for sound field reproduction: theory,
experiments, and future perspectives. In: Audio engineering society convention 123
Gauthier P-A, Berry A (2008) Adaptive wave field synthesis for active sound field reproduction:
experimental results. J Acoust Soc Am 123(4):1991–2002. https://doi.org/10.1121/1.2875844
Geier M, Wierstorf H, Ahrens J, Wechsung I, Raake A, Spors S (2010) Perceptual evaluation of
focused sources in wave field synthesis. In: Audio engineering society convention 128
Gerzon M (1981) Sound reproduction systems. Patent GB 8100018
Gerzon MA (1973) Periphony: with-height sound reproduction. J Audio Eng Soc 21(1):2–10. http://
www.aes.org/e-lib/browse.cfm?elib=2012
Gerzon MA (1975) The design of precisely coincident microphone arrays for stereo and surround
sound. In: Audio engineering society convention 50, London
Goertz A (2008) Lautsprecher. In: Weinzierl S (ed) Handbuch der Audiotechnik. Springer, Berlin, pp 421–490. https://doi.org/10.1007/978-3-540-34301-1_8. (Chap. 8)
Grani F, Di Carlo D, Portillo JM, Girardi M, Paisa R, Banas JS, Vogiatzoglou I, Overholt D, Serafin
S (2016) Gestural control of wave field synthesis. In: Proceedings of 13th sound and music
computing conference, Hamburg
Hahn N, Winter F, Spors S (2016) Local wave field synthesis by spatial band-limitation in the
circular/spherical harmonics domain. In: Audio engineering society convention 140. http://www.
aes.org/e-lib/browse.cfm?elib=18294
Heller AJ (2008) Is my decoder ambisonic? In: Audio engineering society convention 125, San
Francisco, CA
Horbach U, Karamustafaoglu A, Rabenstein R, Runze G, Steffen P (1999) Numerical simulation of
wave fields created by loudspeaker arrays. In: Audio engineering society convention 107. http://
www.aes.org/e-lib/browse.cfm?elib=8159
Huber T (2002) Zur Lokalisation akustischer Objekte bei Wellenfeldsynthese. Diploma thesis. http://
www.hauptmikrofon.de/diplom/DA_Huber.pdf
IOSONO GmbH (2008) IOSONO—The future of spatial audio. http://www.iosono-sound.com/.
Accessed 23 Jan 2011
Kassakian P, Wessel D (2004) Characterization of spherical loudspeaker arrays. In: Audio engi-
neering society convention 117, San Francisco
Kim Y, Ko S, Choi J-W, Kim J (2009) Optimal filtering for focused sound field reproductions using
a loudspeaker array. In: Audio engineering society convention 126
Kirkeby O, Nelson PA (1993) Reproduction of plane wave sound fields. J Acoust Soc Am 94:2992–
3000. https://doi.org/10.1121/1.407330
Kolundzija M, Faller C, Vetterli M (2009a) Designing practical filters for sound field reconstruction.
In: Audio engineering society convention 127
Kolundzija M, Faller C, Vetterli M (2009b) Sound field reconstruction: an improved approach for
wave field synthesis. In: Audio engineering society convention 126
López JJ, Bleda S, Pueo B, Escolano J (2005) A sub-band approach to wave-field synthesis ren-
dering. In: Audio engineering society convention 118, Barcelona. https://www.ingentaconnect.
com/content/dav/aaua/2006/00000092/00000004/art00013
Martín RS, Witew IB, Arana M, Vorländer M (2007) Influence of the source orientation on
the measurement of acoustic parameters. Acta Acust United Acust. 93:387–397. https://www.
ingentaconnect.com/contentone/dav/aaua/2007/00000093/00000003/art00007
Melchior F (2010) Wave field synthesis and object-based mixing for motion picture sound. SMPTE
Motion Imaging J 3:53–57. https://doi.org/10.5594/j11399
Menzel D, Wittek H, Fastl H, Theile G (2006) Binaurale Raumsynthese mittels Wellenfeldsynthese—
Realisierung und Evaluierung. In: Fortschritte der Akustik—DAGA 2006, Braunschweig, pp
255–256
Menzies D (2013) Quasi wave field synthesis: efficient driving functions for improved 2.5D
sound field reproduction. In: Audio engineering society conference: 52nd international con-
ference: sound field control-engineering and perception. http://www.aes.org/e-lib/browse.cfm?
elib=16930
Menzies D, Al-Akaidi M (2007) Nearfield binaural synthesis and ambisonics. J Acoust Soc Am
121(3):1559–1563. https://doi.org/10.1121/1.2434761
Merziger G, Wirth T (2006) Repetitorium der höheren Mathematik, 5th edn. Binomi, Springe
Meyer F, Nogalski M, Fohl W (2016) Detection thresholds in audio-visual redirected walking. In:
Proceedings of 13th sound and music computing conference, SMC
Montag MN (2011) Wave field synthesis in three dimensions by multiple line arrays. Mas-
ter’s thesis. http://www.mattmontag.com/projects/wfs/Montag%20Thesis%202011%20-
%20Wave%20Field%20Synthesis%20in%20Three%20Dimensions%20by%20Multiple
%20Line%20Arrays.pdf
Morse PM, Ingard KU (1986) Theoretical acoustics. Princeton University Press, Princeton. https://
doi.org/10.1063/1.3035602
Nogalski M, Fohl W (2015) Acoustically guided redirected walking in a WFS system: design of an
experiment to identify detection thresholds. In: Proceedings of 12th sound and music computing
conference, SMC
Nogalski M, Fohl W (2016) Acoustic redirected walking with auditory cues by means of wave field
synthesis. In: Proceedings of 23rd IEEE conference on virtual reality. IEEE
Nogalski M, Fohl W (2017) Curvature gains in redirected walking: a closer look. In: Proceedings
of 24th IEEE conference on virtual reality. IEEE
Oellers H (2010) Die virtuelle Kopie des räumlichen Schallfeldes. http://www.syntheticwave.de/.
Accessed 27 Sept 2010
Otondo F, Rindel JH (2004) The influence of the directivity of musical instruments in a room.
Acta Acust United Acust 90:1178–1184. https://www.ingentaconnect.com/content/dav/aaua/
2004/00000090/00000006/art00017
Owsinski B (2014) The mixing engineer's handbook, 3rd edn. Course Technology PTR, Boston, MA
Pierce AD (2007) Basic linear acoustics. In: Rossing TD (ed) Springer handbook of acoustics.
Springer, New York, pp 25–111. https://doi.org/10.1007/978-0-387-30425-0_3. (Chap. 3)
Pollow M, Behler GK (2009) Variable directivity for platonic sound sources based on spherical harmonics optimization. Acta Acust United Acust 95:1082–1092. https://doi.org/10.3813/aaa.
918240
Rabenstein R, Spors S (2008) Sound field reproduction. In: Benesty J, Sondhi MM, Huang Y (eds)
Springer handbook of speech processing. Springer, Berlin, pp 1095–1114. https://doi.org/10.
1007/978-3-540-49127-9_53. (Chap. 53)
Rabenstein R, Spors S, Steffen P (2006) Wave field synthesis techniques for spatial sound reproduction. In: Hänsler E, Schmidt G (eds) Topics in acoustic echo and noise control. Selected
methods for the cancellation of acoustical echoes, the reduction of background noise, and speech
processing. Signals and communication technology. Springer, Berlin, pp 517–545. (Chap. 13)
Reisinger G (2003) Einsatz von stereophonen Aufnahmetechniken für die räumliche Übertragung
ausgedehnter Schallquellen mit Hilfe der Wellenfeldsynthese. Diploma thesis, University of
Applied Sciences Düsseldorf, Düsseldorf
Reisinger M (2002) Neue Konzepte der Tondarstellung bei Wiedergabe mittels Wellenfeldsynthese.
Diploma thesis, University of Applied Sciences Düsseldorf, Düsseldorf
Slavik KM, Weinzierl S (2008) Wiedergabeverfahren. In: Weinzierl S (ed) Handbuch der Audiotechnik. Springer, Berlin, pp 609–686. https://doi.org/10.1007/978-3-540-34301-1_11. (Chap. 11)
Sonic Emotion (2012) Sonic emotion absolute 3D sound in a nutshell/stereo VS WFS. https://www.
youtube.com/user/sonicemotion3D/videos
Sonic Emotion (2017) Sonic emotion absolute 3D. https://www.youtube.com/user/
sonicemotion3D/videos
Spors S, Kuntz A, Rabenstein R (2003) An approach to listening room compensation with wave field
synthesis. In: Audio engineering society conference: 24th international conference: multichannel
audio, the new reality
Spors S, Helwani K, Ahrens J (2011) Local sound field synthesis by virtual acoustic scattering and
time reversal. In: Audio engineering society convention 131
Spors S (2007) Extension of an analytic secondary source selection criterion for wave field synthesis.
In: Audio engineering society convention 123
Spors S (2008) Investigation of spatial aliasing artifacts of wave field synthesis in the temporal
domain. In: Fortschritte der Akustik—DAGA 2008, Dresden
Spors S, Ahrens J (2008) A comparison of wave field synthesis and higher-order ambisonics with
respect to physical properties and spatial sampling. In: Audio engineering society convention 125
Spors S, Teutsch H, Kuntz A, Rabenstein R (2004) Sound field synthesis. In: Huang Y, Benesty J
(eds) Audio signal processing. For next-generation multimedia communication systems. Springer,
New York, pp 323–344. https://doi.org/10.1007/1-4020-7769-6_12. (Chap. 12)
Spors S, Buchner H, Rabenstein R, Herbordt W (2007a) Active listening room compensation for
massive multichannel sound reproduction systems using wave-domain adaptive filtering. J Acoust
Soc Am 122(1):354–369. https://doi.org/10.1121/1.2737669
Spors S, Rabenstein R, Ahrens J (2008) The theory of wave field synthesis revisited. In: Audio
engineering society convention 124
Spors S, Wierstorf H, Raake A, Melchior F, Frank M, Zotter F (2013) Spatial sound with loudspeakers and its perception: a review of the current state. Proc IEEE 101(9):1920–1938. https://
doi.org/10.1109/JPROC.2013.2264784
Start EW (1997) Direct sound enhancement by wave field synthesis. PhD thesis, Delft University
of Technology, Delft
Steinberg JC, Snow WB (1934a) Symposium on wire transmission of symphonic music and its
reproduction in auditory perspective. Physical factors. Bell Syst Tech J XIII
Steinberg JC, Snow WB (1934b) Auditory perspective–physical factors. Electr Eng 12–17
Stirnat C, Ziemer T (2017) Spaciousness in music: the tonmeister's intention and the listener's perception. In: Proceedings of the klingt gut! symposium, Hamburg
Väänänen R (2003) User interaction and authoring of 3D sound scenes in the Carrouso EU project.
In: Audio engineering society convention 114. http://www.aes.org/e-lib/browse.cfm?elib=12483
Verheijen E (1997) Sound reproduction by wave field synthesis. PhD thesis, Delft University of
Technology, Delft
Vogel P (1993) Applications of wave field synthesis in room acoustics. PhD thesis, Delft University
of Technology, Delft
Warusfel O, Misdariis N (2004) Sound source radiation syntheses: from performance to domestic
rendering. In: Audio engineering society convention 116
Wierstorf H (2014) Perceptual assessment of sound field synthesis. PhD thesis, University of Technology Berlin, Berlin
Wierstorf H, Raake A, Geier M, Spors S (2013) Perception of focused sources in wave field synthesis.
J Audio Eng Soc 61(1/2):5–16. http://www.aes.org/e-lib/browse.cfm?elib=16663
Williams EG (1999) Fourier acoustics. Sound radiation and nearfield acoustical holography. Academic Press, Cambridge
Wittek H (2007) Perceptual differences between wavefield synthesis and stereophony. PhD thesis,
University of Surrey, Guildford
Ziemer T (2009) Wave field synthesis by an octupole speaker system. In: Naveda L (ed) Proceedings
of the second international conference of students of systematic musicology (SysMus09), pp 89–
93. http://biblio.ugent.be/publication/823807/file/6824513.pdf#page=90
Ziemer T (2011a) Wave field synthesis. Theory and application. Magister thesis, University of
Hamburg
Ziemer T (2011b) A psychoacoustic approach to wave field synthesis. In: Audio engineering society
conference: 42nd international conference: semantic audio, Ilmenau, pp 191–197. http://www.
aes.org/e-lib/browse.cfm?elib=15942
Ziemer T (2011c) Psychoacoustic effects in wave field synthesis applications. In: Schneider A,
von Ruschkowski A (eds) Systematic musicology. Empirical and theoretical studies. Peter Lang,
Frankfurt am Main, pp 153–162. https://doi.org/10.3726/978-3-653-01290-3
Ziemer T (2011d) A psychoacoustic approach to wave field synthesis. J Audio Eng Soc 59(5):356.
https://www.aes.org/conferences/42/abstracts.cfm#TimZiemer
Ziemer T (2014) Sound radiation characteristics of a shakuhachi with different playing techniques.
In: Proceedings of the international symposium on musical acoustics (ISMA-14), Le Mans, pp
549–555. http://www.conforg.fr/isma2014/cdrom/data/articles/000121.pdf
Ziemer T (2015a) Exploring physical parameters explaining the apparent source width of
direct sound of musical instruments. In: Jahrestagung der Deutschen Gesellschaft für
Musikpsychologie, Oldenburg, pp 40–41. http://www.researchgate.net/publication/304496623_
Exploring_Physical_Parameters_Explaining_the_Apparent_Source_Width_of_Direct_Sound_
of_Musical_Instruments
Ziemer T (2015b) Spatial sound impression and precise localization by psychoacoustic sound field
synthesis. In: Deutsche Gesellschaft für Akustik e.V., Mores R (eds) Seminar des Fachausschusses
Musikalische Akustik (FAMA): “Musikalische Akustik zwischen Empirie und Theorie”, Hamburg. Deutsche Gesellsch. f. Akustik, pp 17–22. https://www.dega-akustik.de/fachausschuesse/
ma/dokumente/tagungsband-seminar-fama-2015/