Lecture 5 covers virtual reality audio and tracking technologies, highlighting the importance of sound in enhancing realism in VR experiences. It details various audio display types, sound properties, and tracking methods, including mechanical, magnetic, inertial, acoustic, and optical tracking technologies. The lecture emphasizes the need for effective tracking to ensure immersion by mapping the user's physical movements to a virtual environment.
Introduces the lecture on VR audio and tracking, recaps last week's coverage of display types, and introduces audio displays.
Discusses the importance of audio for realism in VR, sound capture techniques, types of audio recordings, and properties of audio displays.
Explains audio channels, masking effects, audio display formats such as binaural, and the distinction between spatialization and localization.
Focuses on audio localization methods using HRTFs, constructions of HRTFs, and individual variations in user response to sound.
Analyzes acoustic effects in environments, sound reverberation, processing challenges, and the need for efficient sound systems.
Highlights audio display hardware advancements, including custom HRTFs and modern consumer sound cards for better audio experiences.
Explores tools for designing spatial audio in VR, including spatial audio design projects and demos in VR environments. Introduces tracking as a key element in enhancing immersion in VR, defining tracking frames and degrees of freedom.
Elaborates on key metrics for tracking performance including static and dynamic accuracy, latency, jitter, and technologies used.
Details traditional tracking methods including mechanical, magnetic, inertial, and their pros/cons, and industry examples.
Explains inertial tracker capabilities and acoustic tracking methods, including challenges faced and examples of implementations.
Discusses optical tracking technologies, hybrid systems that combine methods, and specific examples like Vive tracking.
Wraps up the presentation with essential links for additional resources and contact information.
LECTURE 5: VR AUDIO
AND TRACKING
COMP 4010 – Virtual Reality
Semester 5 – 2016
Bruce Thomas, Mark Billinghurst
University of South Australia
August 23rd 2016
Audio Displays
Definition: Computer interfaces that provide
synthetic sound feedback to users interacting with
the virtual world.
The sound can be monaural (both ears hear the
same sound), or binaural (each ear hears a
different sound)
Burdea, Coiffet (2003)
Motivation
• Most of the focus in Virtual Reality is on the visuals
• GPUs continue to drive the field
• Users want more
• More realism, More complexity, More speed
• However sound can significantly enhance realism
• Example: Mood music in horror games
• Sound can provide valuable user interface feedback
• Example: Alert in training simulation
Creating/Capturing Sounds
• Sounds can be captured from nature (sampled) or
synthesized computationally
• High-quality recorded sounds are
• Cheap to play
• Easy to create realism
• Expensive to store and load
• Difficult to manipulate for expressiveness
• Synthetic sounds are
• Cheap to store and load
• Easy to manipulate
• Expensive to compute before playing
• Difficult to create realism
Types of Audio Recordings
• Monaural: Recording with one microphone – no positioning
• Stereo Sound: Recording with two microphones placed several
feet apart. Perceived sound position as recorded by
microphones.
• Binaural: Recording microphones embedded in a dummy
head. Audio filtered by head shape.
• 3D Sound: Using tiny microphones in the ears of a real person.
Generate HRTF based on ear shape and audio response.
Synthetic Sounds
• Complex sounds can be built from simple waveforms
(e.g., sawtooth, sine) and combined using operators
• Waveform parameters (frequency, amplitude) could be
taken from motion data, such as object velocity
• Can combine waveforms in various ways
• This is what classic synthesizers do
• Works well for many non-speech sounds
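As a rough illustration of the synthesizer idea above (my own Python sketch, not code from the lecture), the snippet below mixes a sine and a sawtooth wave and scales the result by a hypothetical object velocity:

```python
import numpy as np

def synth_impact(duration_s=0.5, sample_rate=44100, velocity=2.0):
    """Toy synthetic sound: mix a sine and a sawtooth, scale by motion data.

    'velocity' stands in for a motion parameter (e.g. object speed in m/s);
    the mapping to amplitude is purely illustrative.
    """
    t = np.linspace(0.0, duration_s, int(duration_s * sample_rate), endpoint=False)
    freq = 220.0                                        # base frequency in Hz
    sine = np.sin(2 * np.pi * freq * t)
    saw = 2.0 * (t * freq - np.floor(0.5 + t * freq))   # sawtooth in [-1, 1]
    envelope = np.exp(-5.0 * t)                         # simple exponential decay
    amplitude = min(1.0, velocity / 10.0)               # faster objects sound louder
    return amplitude * envelope * (0.7 * sine + 0.3 * saw)

samples = synth_impact(velocity=4.0)
print(samples.shape, float(samples.min()), float(samples.max()))
```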
Channels and Masking
• Number of channels
• Stereo vs. mono vs. quadraphonic
• 2.1, 5.1, 7.1
• Two kinds of masking
• Louder sounds mask softer ones
• We have too many things vying for our audio attention these days!
• Physical objects mask sound signals
• Happens with speakers, but not with headphones
Spatialization vs. Localization
• Spatialization is the processing of sound signals
to make them emanate from a point in space
• This is a technical topic
• Localization is the ability of people to identify the
source position of a sound
• This is a human topic, i.e., some people are
better at it than others.
Stereo Sound
• Seems to come from inside the user's head
• Follows head motion as user moves head
3D Spatial Sound
• Seems to be external to the head
• Fixed in space when user moves head
• Has reflected sound properties
Spatialized Audio Effects
• Naïve approach
• Simple left/right shift for lateral position
• Amplitude adjustment for distance
• Easy to produce using consumer hardware/software
• Does not give us "true" realism in sound
• No up/down or front/back cues
• We can use multiple speakers for this
• Surround the user with speakers
• Send different sound signals to each one
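A minimal sketch of the naïve approach (my own Python illustration, not from the slides): constant-power left/right panning from a lateral angle plus 1/distance amplitude attenuation. As the slide notes, this produces no up/down or front/back cues.

```python
import numpy as np

def naive_spatialize(mono, azimuth_deg, distance_m):
    """Naive spatialization: left/right shift plus distance attenuation."""
    pan = (azimuth_deg + 90.0) / 180.0          # 0 = hard left, 1 = hard right
    left_gain = np.cos(pan * np.pi / 2)         # constant-power panning
    right_gain = np.sin(pan * np.pi / 2)
    attenuation = 1.0 / max(distance_m, 1.0)    # simple 1/d falloff
    return np.stack([attenuation * left_gain * mono,
                     attenuation * right_gain * mono], axis=1)   # (n, 2) stereo

mono = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 44100))
stereo = naive_spatialize(mono, azimuth_deg=45.0, distance_m=2.0)
print(stereo.shape)
```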
Example: The BoomRoom
• Use surround speakers to create spatial audio effects
• Gesture based interaction
• https://www.youtube.com/watch?time_continue=54&v=6RQMOyQ3lyg
Audio Localization
• Main cues used by humans to localize sound:
1. Interaural time differences: Time difference for
sound wave to travel between ears
2. Interaural level differences: For high frequency
sounds (> 1.5 kHz), volume difference between
ears used to determine source direction
3. Spectral filtering done by outer ears: Ear shape
changes frequency heard
Interaural Time Difference
• Sound takes a fixed time to travel between the ears
• Can use time difference to determine sound location
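To make this concrete, the sketch below uses the Woodworth spherical-head approximation (a standard textbook formula, not one given in the lecture; the head radius is an assumed average) to compute the ITD for a source azimuth and then invert it:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s at roughly 20 degrees C
HEAD_RADIUS = 0.0875     # m, assumed average adult head radius

def itd_seconds(azimuth_rad):
    """Woodworth approximation of the interaural time difference (ITD)."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))

def azimuth_from_itd(itd, steps=10000):
    """Invert the ITD model with a coarse search over 0-90 degrees."""
    best = min(range(steps + 1),
               key=lambda i: abs(itd_seconds(math.radians(90 * i / steps)) - itd))
    return 90.0 * best / steps

# A source 45 degrees to one side gives a delay of roughly 0.38 ms.
delay = itd_seconds(math.radians(45))
print(f"ITD = {delay * 1e6:.0f} us, recovered azimuth = {azimuth_from_itd(delay):.1f} deg")
```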
Spectral Filtering
Ear shape filters sound depending on the direction it is coming from.
This spectral change helps determine the elevation of the sound source.
Natural Hearing vs. Headphones
• Due to ear shape, natural hearing provides a different audio
response depending on sound location
Head-Related Transfer Functions (HRTFs)
• A set of functions that model how sound from a
source at a known location reaches the eardrum
More About HRTFs
• Functions take into account:
• Individual ear shape
• Slope of shoulders
• Head shape
• So, each person has his/her own HRTF!
• Need to have parameterizable HRTFs
• Some sound cards/APIs allow specifying an HRTF
Constructing HRTFs
• Small microphones placed into ear canals
• Subject sits in an anechoic chamber
• Can use a mannequin's head instead
• Sounds played from a large number of known
locations around the chamber
• HRTFs are constructed from this data
• Sound signal is filtered through inverse functions
to place the sound at the desired source
Constructing HRTFs
• Putting microphones in mannequin or human ears
• Playing sound from fixed positions
• Record response
How HRTFs are Used
• The HRTF is the Fourier transform of the
in-ear microphone audio response (the
head-related impulse response, HRIR)
• From HRTF we can calculate pairs
of finite impulse response (FIR)
filters for specific sound positions
• One filter per ear
• To place virtual sound at a position,
apply set of FIR filters for that
position to the incoming sound
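In code this amounts to one convolution per ear. A minimal sketch (assuming you already have an HRIR filter pair for the desired direction; the 3-tap arrays below are made up for illustration):

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Place a mono source at the direction encoded by an HRIR pair.

    hrir_left / hrir_right: FIR filter taps measured (or interpolated)
    for the desired source position, one filter per ear.
    """
    left = np.convolve(mono, hrir_left)      # filter the signal for the left eardrum
    right = np.convolve(mono, hrir_right)    # filter the signal for the right eardrum
    return np.stack([left, right], axis=1)   # binaural (n, 2) output

mono = np.random.randn(1000)
out = render_binaural(mono, np.array([0.6, 0.3, 0.1]), np.array([0.2, 0.5, 0.3]))
print(out.shape)   # real HRIRs have hundreds of taps, not three
```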
Environmental Effects
• Sound is also changed by objects in the environment
• Can reverberate off of reflective objects
• Can be absorbed by objects
• Can be occluded by objects
• Doppler shift
• Moving sound sources
• Need to simulate environmental audio properties
• Takes significant processing power
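For the Doppler-shift item above, the standard physics formula (not specific to this lecture) relates the heard frequency to listener and source velocities along the line between them:

```python
SPEED_OF_SOUND = 343.0  # m/s

def doppler_frequency(f_source, v_listener, v_source):
    """Observed frequency for motion along the source-listener line.

    v_listener > 0 when the listener moves toward the source,
    v_source  > 0 when the source moves toward the listener.
    """
    return f_source * (SPEED_OF_SOUND + v_listener) / (SPEED_OF_SOUND - v_source)

# A 440 Hz source approaching at 20 m/s is heard at roughly 467 Hz.
print(round(doppler_frequency(440.0, 0.0, 20.0), 1))
```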
Sound Reverberation
• Need to consider first- and second-order reflections
• Need to model material properties, objects in the room, etc.
The Tough Part
• All of this takes a lot of processing
• Need to keep track of
• Multiple (possibly moving) sound sources
• Path of sounds through a dynamic environment
• Position and orientation of listener(s)
• Most sound cards only support a limited number of
spatialized sound channels
• Increasingly complex geometry increases load on
audio system as well as visuals
• That's why we fake it ;-)
• GPUs might change this too!
Sound Display Hardware
• Designed to reduce CPU load
• Early Hardware
• Custom HRTF
• Crystal River Engineering Convolvotron (1988)
• Real time 3D audio localizer, 4 sound sources
• Lake Technology (2002)
• Huron 20, custom DSP hardware, $40,000
• Modern Consumer Hardware
• Uses generic HRTF
• SoundBlaster Audigy/EAX
• Aureal A3D/Vortex card
GPU-Based Audio Acceleration
• Using GPU for audio physics calculations
• AMD TrueAudio Next
• https://www.youtube.com/watch?v=Z6nwYLHG8PU
Audio Software SDKs
• Modern CPUs are fast enough that spatial audio can be
generated without dedicated hardware
• Several 3D audio SDKs exist
• OpenAL
• www.openal.org
• Open source, cross platform
• Renders multichannel three-dimensional positional audio
• Google VR SDK
• Android, iOS, Unity
• https://developers.google.com/vr/concepts/spatial-audio
• Oculus
• https://developer3.oculus.com/documentation/audiosdk/latest/
• Microsoft DirectX, Unity, etc
Google VR Spatial Audio Demo
• https://www.youtube.com/watch?v=I9zf4hCjRg0&feature=youtu.be
OSSIC 3D Audio Headphones
• 3D audio headphones
• Calibrates to user – calculates HRTF
• Integrated head tracking
• Multi-driver array providing sound to correct part of ear
• Raised $2.7 million on Kickstarter
• https://www.ossic.com/3d-audio/
Designing Spatial Audio
• There are several tools available for designing 3D audio
• E.g. Facebook Spatial Workstation
• Audio tools for cinematic VR and 360 video
• https://facebook360.fb.com/spatial-workstation/
• Spatial Audio Designer
• Mixing of surround sound and 3D audio
• http://www.newaudiotechnology.com/en/products/spatial-audio-designer/
Immersion and Tracking
• Motivation: For immersion, when the user changes
position in reality the VR view also needs to change
• Requires tracking of the user’s pose (position/orientation)
in the real world and mapping to the Virtual World
Definitions
• Tracking: measuring the
position and orientation of an
object relative to a known
frame of reference
• VR Tracker: technology used
in VR to measure the real
time change in a 3D object
position and orientation
Ivan Sutherland's mechanical tracker (1968)
Frames of Reference
• Real World Coordinate System (Wcs)
• Head Coordinate System (Hcs)
• Eye Coordinate System (Ecs)
• Need to create a mapping between frames
• E.g. transformation from Wcs to Hcs to Ecs
• Movement in the real world maps to movement in the Ecs frame
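A minimal sketch of the mapping (my own illustration with hypothetical 4x4 transforms): a point expressed in world coordinates (Wcs) is brought into eye coordinates (Ecs) by composing the world-to-head and head-to-eye transforms.

```python
import numpy as np

def translation(x, y, z):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

# Assumed poses: head origin 1.7 m above the world origin,
# eye frame 0.08 m in front of the head origin.
world_to_head = np.linalg.inv(translation(0.0, 1.7, 0.0))   # Wcs -> Hcs
head_to_eye = np.linalg.inv(translation(0.0, 0.0, 0.08))    # Hcs -> Ecs
world_to_eye = head_to_eye @ world_to_head                   # composed Wcs -> Ecs

point_world = np.array([0.0, 1.7, 1.0, 1.0])   # a point at head height, 1 m away
print(world_to_eye @ point_world)               # the same point in eye coordinates
```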
Example Frames of Reference
Assuming Head Tracker
mounted on HMD
Assuming tracking relative to
fixed table object
Tracking Degrees of Freedom
• Typically 6 Degrees of Freedom (DOF)
• Rotation or Translation about an Axis
1. Moving up and down
2. Moving left and right
3. Moving forward and backward
4. Tilting forward and backward (pitching);
5. Turning left and right (yawing);
6. Tilting side to side (rolling).
Key Tracking Performance Criteria
• Static Accuracy
• Dynamic Accuracy
• Latency
• Update Rate
• Tracking Jitter
• Signal to Noise Ratio
• Tracking Drift
Static vs. Dynamic Accuracy
• Static Accuracy
• Ability of tracker to determine
coordinates of a position in space
• Depends on sensor sensitivity, errors
(algorithm, operator), environment
• Dynamic Accuracy
• System accuracy as sensor moves
• Depends on static accuracy
• Resolution
• Minimum change sensor can detect
• Repeatability
• Same input giving same output
Tracker Latency, Update Rate
• Latency: Time between change
in object pose and time sensor
detects the change
• Large latency (> 10 ms) can cause
simulator sickness
• Larger latency (> 50 ms) can
reduce VR immersion
• Update Rate: Number of
measurements per second
• Typically > 30 Hz
Tracker Jitter, Signal to Noise Ratio
• Jitter: Change in tracker output
when tracked object is stationary
• Range of change is sensor noise
• Tracker with no jitter reports constant
value if tracked object stationary
• Jitter makes tracker data change
randomly about an average value
• Signal to Noise Ratio: Signal in
data relative to noise
• Found by calculating the mean of
samples taken at known positions
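A rough sketch (my own, not from the slides) of how jitter and a simple signal-to-noise figure could be estimated from repeated samples of a stationary target at a known position:

```python
import numpy as np

def jitter_and_snr(samples, true_position):
    """Estimate jitter (RMS spread) and a simple SNR from stationary samples.

    samples: (n, 3) array of reported positions
    true_position: (3,) known position of the stationary target
    """
    mean_pos = samples.mean(axis=0)
    jitter_rms = np.sqrt(((samples - mean_pos) ** 2).sum(axis=1).mean())
    signal = np.linalg.norm(true_position)                 # magnitude of the true signal
    snr_db = 20 * np.log10(signal / max(jitter_rms, 1e-12))
    return jitter_rms, snr_db

rng = np.random.default_rng(0)
true = np.array([1.0, 0.5, 2.0])
readings = true + 0.001 * rng.standard_normal((500, 3))    # ~1 mm of sensor noise
print(jitter_and_snr(readings, true))
```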
Tracker Drift
• Drift: steady increase in tracker error over time
• Accumulative (additive) error over time
• Related to dynamic sensitivity over time
• Controlled by periodic recalibration (zeroing)
Example: Fakespace BOOM
• BOOM (Binocular Omni-Orientation Monitor)
• Counterbalanced arm with a 100° FOV HMD mounted on it
• 6 DOF, 4mm position accuracy, 300Hz sampling, < 5 ms latency
Demo: Fakespace Telepresence
• Using Boom with HMD to control robot view
• https://www.youtube.com/watch?v=QpTQTu7A6SI
Magnetic Tracker
• Idea: Measure difference in current between a
magnetic transmitter and a receiver
• ++: 6DOF, robust, accurate, no line of sight needed
• -- : limited range, sensitive to metal, noisy, expensive
Flock of Birds (Ascension)
Example: Polhemus Fastrak
• Degrees-of-Freedom: 6DOF
• Number of Sensors: 1-4
• Latency: 4ms
• Update Rate: 120 Hz/(num sensors)
• Static Accuracy Position: 0.03in RMS
• Static Accuracy Orientation: 0.15° RMS
• Range from Standard Source: Up to 5 feet or 1.52 meters
• Extended Range Source: Up to 15 feet or 4.6 meters
• Interface RS-232 or USB (both included)
• Host OS compatibility (GUI/API toolkit): 2000/XP
• http://polhemus.com/motion-tracking/all-trackers/fastrak
Example: Razer Hydra
• Developed by Sixense
• Magnetic source + 2 wired controllers
• Short range (< 1 m), precision of 1 mm and 1°
• 62 Hz sampling rate, < 50 ms latency
• $600 USD
Types of Inertial Trackers
• Gyroscopes
• The rate of change in object orientation or angular
velocity is measured.
• Accelerometers
• Measure acceleration.
• Can be used to determine object position, if the starting
point is known.
• Inclinometer
• Measures inclination, i.e., the "level" position.
• Like a carpenter's level, but giving an electrical signal.
Example: MEMS Sensor
• Uses spring-supported load
• Reacts to gravity and inertia
• Changes its electrical parameters
• < 5 ms latency, 0.01° accuracy
• up to 1000Hz sampling
• Problems
• Rapidly accumulating errors.
• Error in position increases with the square of time.
• Cheap units can get position drift of 4 cm in 2 seconds.
• Expensive units have same error in 200 seconds.
• Not good for measuring location
• Need to periodically reset the output
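The "error grows with the square of time" point follows from double-integrating a constant accelerometer bias; a small numeric sketch (the bias values are assumed, chosen to match the drift figures quoted above):

```python
def position_error(bias_mps2, t_seconds):
    """Position error from double-integrating a constant accelerometer bias.

    error = 0.5 * bias * t^2, so it grows with the square of time.
    """
    return 0.5 * bias_mps2 * t_seconds ** 2

# An assumed 0.02 m/s^2 bias reproduces "4 cm of drift in 2 seconds";
# reaching the same 4 cm only after 200 s needs a bias ~10,000x smaller.
print(position_error(0.02, 2.0))     # 0.04 m  (cheap unit)
print(position_error(2e-6, 200.0))   # 0.04 m  (expensive unit)
```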
MEMS Gyro Bias Drift
• Zero reading of MEMS Gyro drifts over time due to noise
Example: iPhone Sensors
• Three-axis accelerometer
• Gives direction of acceleration -
affected by gravity and movement
• Three-axis gyroscope
• Measures rotation rate (angular
velocity) – affected by movement
• Three axis magnetometer
• Gives (approximate) direction of
magnetic north
• GPS
• Gives geolocation – multiple
samples over time can be used to
detect direction and speed
iPhone Sensor Monitor app
Acoustic - Ultrasonic Tracker
• Idea: Time of Flight or Phase Coherence of Sound Waves
• ++: Small, Cheap
• -- : 3DOF, Line of Sight, Low resolution, Affected by
Environment (pressure, temperature), Low sampling rate
Ultrasonic
Logitech IS600
Acoustic Tracking Methods
• Two approaches:
• Time difference,
• Phase difference
• Time-of-flight (TOF):
• All current commercial systems
• Time that the sound pulse travels is proportional to the distance from the receiver.
• Problem: differentiating the pulse from noise.
• Each transmitter works sequentially – increased latency.
• Phase coherent approach (Sutherland 1968):
• No pulse, but continuous signal (~50 kHz)
• Many transmitters on different frequencies
• Phase differences between the sent and received signals continuously give the change
in distance, with no latency
• Only relative distance, cumulative & multi-path errors possible.
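A minimal sketch of the two measurements (my own illustration): time-of-flight gives absolute distance, while the phase difference of a continuous ~50 kHz tone gives only the change in distance, modulo one wavelength:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s

def tof_distance(time_of_flight_s):
    """Time-of-flight: distance is proportional to pulse travel time."""
    return SPEED_OF_SOUND * time_of_flight_s

def phase_distance_change(phase_delta_rad, frequency_hz=50_000):
    """Phase-coherent: relative distance change from the phase shift."""
    wavelength = SPEED_OF_SOUND / frequency_hz             # ~6.9 mm at 50 kHz
    return (phase_delta_rad / (2 * math.pi)) * wavelength

print(tof_distance(3e-3))                    # 3 ms of flight  -> about 1.03 m
print(phase_distance_change(math.pi / 2))    # quarter cycle   -> about 1.7 mm change
```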
Acoustic Tracking Principles
• Measurements are based on triangulation
• Minimum separations between the transmitters and between the receivers are required.
• Can be a problem if trying to make the receiver very small.
• Each speaker is activated in turn and the 3 distances from it
to the 3 microphones are calculated, 9 distances in total (see the sketch after this list).
• Tracking performance can degrade when operating in a
noisy environment.
• Update rate about 50 datasets/s
• Time multiplexing is possible
• With 4 receivers, update rate drops to 12 datasets/s
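Solving for a speaker position from its three measured microphone distances is a small trilateration problem. A least-squares sketch (microphone coordinates and the simulated speaker position are made up for illustration):

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed microphone positions on the receiver frame, in metres.
MICS = np.array([[0.0, 0.0, 0.0],
                 [0.3, 0.0, 0.0],
                 [0.0, 0.3, 0.0]])

def locate_speaker(distances, initial_guess=(0.0, 0.0, 1.0)):
    """Estimate a speaker position from its distances to the 3 microphones.

    With coplanar microphones there is a mirror solution behind the plane;
    the initial guess selects the solution in front of the receiver.
    """
    def residuals(p):
        return np.linalg.norm(MICS - p, axis=1) - distances
    return least_squares(residuals, initial_guess).x

true_speaker = np.array([0.1, 0.2, 1.5])
measured = np.linalg.norm(MICS - true_speaker, axis=1)
print(locate_speaker(measured))   # recovers approximately (0.1, 0.2, 1.5)
```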
Example: Logitech Head Tracker
• Transmitter is a set of three ultrasonic
speakers - 30cm from each other
• Rigid and fixed triangular frame
• 50 Hz update, 30 ms latency
• Receiver is a set of three microphones
placed at the top of the HMD
• May be part of 3D mice, stereo glasses, or
other interface devices
• Range typically about 1.5 m
• Direct line of sight required
• Accuracy 0.1° orientation, 2% distance
Optical Tracker
• Idea: Image Processing and Computer Vision
• Specialized
• Infrared, Retro-Reflective, Stereoscopic
• ++: Long range, cheap, immune to metal
• -- : Line of Sight, Visual Targets, Low Sampling rate
ART Hi-Ball
Example: HiBall Tracking System (3rdTech)
• Inside-Out Tracker
• $50K USD
• Scalable over large area
• Fast update (2000Hz)
• Latency Less than 1 ms.
• Accurate
• Position 0.4mm RMS
• Orientation 0.02° RMS
Example: Microsoft Kinect
• Outside-in tracking
• Components:
• RGB camera
• Range camera
• IR light source
• Multi-array microphone
• Specifications
• Range 1-6m
• Update rate 30Hz
• Latency 100ms
• Tracking resolution < 5mm
• Range Camera extracts depth information
and combines it with a video signal
Hybrid Tracking
• Idea: Multiple technologies overcome limitations of each one
• A system that utilizes two or more position/orientation
measurement technologies (e.g. inertial + vision)
• ++: Robust, reduce latency, increase accuracy
• -- : More complex, expensive
Intersense IS-900
Ascension Laser Bird
Example: Intersense IS-900
• Inertial Ultrasonic Hybrid tracking
• Use ultrasonic strips for position sensing
• Inertial sensing for orientation
• Sensor fusion to combine the two (see the sketch below)
• Specifications
• Latency 4ms
• Update 180 Hz
• Resolution 0.75 mm, 0.05°
• Accuracy 3 mm, 0.25°
• Up to 140 m² tracking area
• http://www.intersense.com/pages/20/14
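A very simplified sketch of the fusion idea (my own toy example, not Intersense's actual algorithm): fast inertial dead reckoning is corrected toward slower, drift-free ultrasonic fixes with a complementary filter.

```python
def fuse(position_estimate, inertial_delta, ultrasonic_fix=None, blend=0.05):
    """One axis, one step of a toy complementary filter.

    inertial_delta: position change predicted by the inertial sensor this step
    ultrasonic_fix: absolute ultrasonic measurement, if one arrived this step
    blend:          how strongly to pull the estimate toward the fix (0..1)
    """
    predicted = position_estimate + inertial_delta     # fast, but drifts
    if ultrasonic_fix is not None:                      # slow, but drift-free
        predicted = (1.0 - blend) * predicted + blend * ultrasonic_fix
    return predicted

estimate, truth = 0.0, 0.0
for step in range(100):
    truth += 0.01                            # object moves 1 cm per step
    biased_delta = 0.01 + 0.001              # inertial step with a small bias
    fix = truth if step % 10 == 0 else None  # ultrasonic fix every 10th step
    estimate = fuse(estimate, biased_delta, fix)
print(round(estimate, 3), "vs true", round(truth, 3))
```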
Example: Vive Lighthouse Tracking
• Outside-in hybrid tracking system
• 2 base stations
• Each with 2 laser scanners, LED array
• Headworn/handheld sensors
• 37 photo-sensors in HMD, 17 in hand
• Additional IMU sensors (500 Hz)
• Performance
• Tracking server fuses sensor samples
• Sampling rate 250 Hz, 4 ms latency
• 2mm RMS tracking accuracy
• Large area - 5 x 5m range
• See http://doc-ok.org/?p=1478
Lighthouse Components
Base station:
- IR LED array
- 2 x scanned lasers
Head Mounted Display:
- 37 photo sensors
- 9-axis IMU
How Lighthouse Tracking Works
• Position tracking using IMU
• 500 Hz sampling
• But drifts over time
• Drift correction using optical tracking
• IR synchronization pulse (60 Hz)
• Laser sweep between pulses
• Photo-sensors recognize sync pulse, measure time to laser
• Know when sensor hit and which sensor hit
• Calculate position of sensor relative to base station
• Use 2 base stations to calculate pose
• Use IMU sensor data between pulses (500Hz)
• See http://xinreality.com/wiki/Lighthouse
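A small sketch of the sweep geometry (my own simplification of the description above, using the 60 Hz figure as the rotation rate): the time between the sync pulse and a photo-sensor hit maps linearly to an angle, and the horizontal and vertical sweep angles together define a ray from the base station to that sensor.

```python
import math

SWEEP_PERIOD = 1.0 / 60.0   # assumed seconds per full laser rotation

def sweep_angle(time_since_sync_s):
    """Angle of the rotating laser at the moment it hit the photo-sensor.

    Assumes the laser points at 0 radians at the sync pulse and rotates at a
    constant rate - a simplification of the real base-station timing.
    """
    return 2 * math.pi * (time_since_sync_s / SWEEP_PERIOD)

def sensor_direction(t_horizontal_s, t_vertical_s):
    """Two sweep times -> two angles defining a ray from the base station."""
    return sweep_angle(t_horizontal_s), sweep_angle(t_vertical_s)

# A hit 4.17 ms after sync corresponds to a quarter turn (about 90 degrees).
az, el = sensor_direction(0.004167, 0.002083)
print(round(math.degrees(az), 1), round(math.degrees(el), 1))
```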
Lighthouse Tracking
Base station scanning
https://www.youtube.com/watch?v=avBt_P0wg_Y
https://www.youtube.com/watch?v=oqPaaMR4kY4
Room tracking