

LiDAR Mapping Systems 


Post Processing - Workflow with Inertial Explorer and TerraSolid 
Revision Date: 20190107 
Phoenix LiDAR Systems 
10131 National Blvd. 
Los Angeles, CA 90034 
 
www.phoenixlidar.com 
+1.323.577.3366 
support@phoenixlidar.com 

Legal Notices
Disclaimer
1. Introduction
2. File Setup and Data Preparation
    2.1 Data Collection and Consolidation
        2.1.1 Download Data from Rover Nav Box
        2.1.2 Download GNSS observations from GNSS Reference Station
        2.1.3 Download Images from Camera SD Card (if applicable)
    2.2 Data Conversion and Refinement
        2.2.1 Convert RAW images to JPG
        2.2.2 Convert RXP to SDCX with Spatial Explorer
        2.2.3 Extract Panoramic images from Ladybug5
3. Inertial Explorer Trajectory Processing
    3.1 Prepare Reference Station Data
        3.1.1 Convert Raw to RINEX
        3.1.2 Refining Reference Station Coordinates
    3.2 Project Wizard
        3.2.1 Rover Data
        3.2.2 Base Station Data
    3.3 Process GNSS
    3.4 Process Coupled Trajectory
4. SpatialExplorer (version 4.0.X)
    4.1 Opening Data
        4.1.1 Phoenix Project File
        4.1.2 Trajectory Data
    4.2 Produce LAS/LAZ File
        4.2.1 Processing Intervals
        4.2.2 Finalize Sensor Parameters
        4.2.3 Export
    4.3 Produce Image Timing List
5. Microstation and TerraSolid
    5.1 TerraScan/Match Setup
        5.1.1 Define Project
        5.1.2 Importing Points into Project
        5.1.3 Manage Trajectories
        5.1.4 Prepare points for calibration
    5.2 Calibration through TerraScan and TerraMatch
        5.2.1 Global Corrections
            5.2.1.1 Multi-Laser Corrections
        5.2.2 Flightline Optimization
            5.2.2.1 Flightlines
            5.2.2.2 Fluctuating Corrections
    5.3 TerraPhoto: Orthomosaic Creation and RGB Extraction
        5.3.1 TerraPhoto Setup
        5.3.2 Tie Point Adjustments
        5.3.3 Color Points
            5.3.3.1 Color Balance
            5.3.3.2 Seamlines
            5.3.3.3 Seamlines for Mobile Applications and RGB extraction
        5.3.4 Orthomosaic Output
        5.3.5 RGB Extraction
    5.4 Ground Control and Assessing Absolute Accuracies
6. Mobile Data Processing Tips
    6.0.1 Create Mask for Panoramic Image RGB Extraction
7. Frequently Asked Questions

Legal Notices 
All the features, functionality, and other product specifications are subject to change without prior notice or obligation. 
Information contained herein is subject to change without notice. 
 
Please read carefully and visit our website, www.phoenixlidar.com, for further information and support.
 
NOTE: This manual describes the base post-processing workflow for trajectory and LiDAR mapping calibration. The product you purchased may not support certain functions dedicated to specific models, upgrades or customizations.

Disclaimer  
Information in this document is provided in connection with Phoenix LiDAR Systems products. No license, expressed or implied, by estoppel or otherwise, to any intellectual property rights is granted by this document. Except as provided in the terms and conditions of sale for such products, Phoenix LiDAR Systems assumes no liability whatsoever, and disclaims any express or implied warranty relating to sale and/or use of products, including liability or warranties relating to fitness for a particular purpose, merchantability, or infringement of any patent, copyright or other intellectual property right.

Phoenix LiDAR Systems products are not intended for use in medical, life saving, life sustaining, critical control or safety systems, or in nuclear facility applications. In no event shall Phoenix LiDAR Systems' liability exceed the price paid for the product for direct, indirect, special, incidental, or consequential damages resulting from the use of the product, its accompanying software, or its documentation. Phoenix LiDAR Systems makes no warranty or representation, expressed, implied, or statutory, with respect to its products or the contents or use of this documentation and all accompanying software, and specifically disclaims their quality, performance, merchantability, or fitness for any particular purpose. Phoenix LiDAR Systems reserves the right to revise or update its products, software, or documentation without obligation to notify any individual or entity. Back up collected data periodically to avoid potential data loss; Phoenix LiDAR Systems disclaims any responsibility for data loss or recovery of any sort.
 

1. Introduction 
This document covers the essential steps for processing LiDAR and camera data collected with a Phoenix LiDAR Systems
mapping system. It is a supplemental outline designed to help users follow along and perform the data post-processing
steps in the correct order. This document is not intended to replace customer training or software-specific user manuals.
Instead, it should serve as introductory material for new users and a reference tool for experienced personnel.
 
The majority of the workflow outlined in this document has been generalized; however, a few mapping-system-specific key
points have been highlighted with distinct colors where relevant:
 
Key Point  Color 

Multi-Laser System Specifics  Green 

Riegl System Specifics  Orange 

Mobile System Specifics  Blue 

   

2. File Setup and Data Preparation 


Before we post-process the data, we need to gather and prepare a few essential components. 
● Raw data from all sensors (LiDAR, camera, etc.) 
● Field notes and other supplemental information that can be used to understand the project and data quality. 

2.1 Data Collection and Consolidation 

2.1.1 Download Data from Rover Nav Box 


To download the folders containing the mission data, you will need to access the rover's \logs directory. Connect an
ethernet cable from the rover to your computer. If you set a static IP address on your computer, you can immediately connect to
\\192.168.200.10\logs with the username "phoenix" (no quotes) and password "aeriallidar" (no quotes). Alternatively, if
you have configured the hosts file, you can connect to \\rover-wire\logs with the same username and password.
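
For reference, a minimal hosts file entry that makes the \\rover-wire\logs path resolve could look like the following. This assumes the rover's default static address shown above; on Windows the file typically lives at C:\Windows\System32\drivers\etc\hosts.

```
192.168.200.10    rover-wire
```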

2.1.2 Download GNSS observations from GNSS Reference Station 


The method of downloading the GNSS observation logs may vary from one GNSS reference station to another. The most common way to
download the observation logs is through a USB connection established between a laptop and the GNSS reference station. Just
make sure you have the correct GNSS observation logs corresponding to the time and date of your mission.

2.1.3 Download Images from Camera SD Card (if applicable) 


When using RAW images taken with a digital camera, you must manually download them from the digital camera’s SD card 
onto your computer. Raw images from Basler cameras are automatically decoded and placed in the corresponding folder 
within the mission data folder. 

2.2 Data Conversion and Refinement 

2.2.1 Convert RAW images to JPG 


When using RAW images taken with a digital camera, you must convert them to JPG. Raw images from Basler cameras are
automatically decoded. We recommend using Sony's Image Data Converter to convert RAW images to JPG. You must place
the converted JPG images in the camX/ folder of the mission data folder. This will ensure that Spatial Explorer is able to
properly load the images.

2.2.2 Convert RXP to SDCX with Spatial Explorer 


In Spatial Explorer, load your .plp file to reference the raw LiDAR data (.rxp). Go to 'Tools' -> 'LiDAR' -> 'Convert Riegl .rxp to
.sdcx' and use the default settings (MTA zone of 0, reflection: -127 dB) to create the .sdcx LiDAR file. This speeds up visualization
and processing in Spatial Explorer.

2.2.3 Extract Panoramic images from Ladybug5 


If you have captured image data with the Ladybug5 camera, you will need to extract the panoramic images at the highest 
resolution available and with the most rigorous stitching method using LadybugCapPro. 

3. Inertial Explorer Trajectory Processing 


Refining the position and orientation of the system through differential corrections and coupling of GNSS and IMU data will
increase the accuracy of the data from LiDAR and other sensors.

3.1 Prepare Reference Station Data 

3.1.1 Convert Raw to RINEX 


● Use CHC RINEX Converter to convert ".HCN" files to RINEX files. Simply drag and drop an ".HCN" file into the RINEX
Converter and press the green play button. This will generate a "RINEX" folder in the same location as the ".HCN"
file, and populate the folder with the various types of RINEX files (observation, navigation, etc.).
 

Figure 3.1 CHC RINEX Converter 

3.1.2 Refining Reference Station Coordinates 


● There are 3 viable options for refining a reference station coordinate.
These options are ranked from 1 (best) to 3 (worst):
○ Option 1: Set up the reference station over a previously surveyed or known coordinate
■ Using an established project monument is the easiest way to tie into your project datum
○ Option 2: Use OPUS, or a similar alternative, to post-process your reference station coordinate
■ For North America, the National Geodetic Survey (NGS) provides the Online Positioning User Service
(OPUS) as an industry standard for post-processing static GNSS observation data:
https://www.ngs.noaa.gov/OPUS/
○ Option 3: Utilize Precise Point Positioning (PPP) in IE to derive the reference station coordinate

3.2 Project Wizard


Start Inertial Explorer and use the Project Wizard to input your data into the software.

3.2.1 Rover Data 


● Input the ".nav" file as the Remote (Rover) Data "GNSS Data File"
○ The IMU data file path will automatically populate, since the ".nav" file contains both GNSS and IMU
information.
● Specify your rover antenna model(s) and height
○ For Phoenix LiDAR Systems' custom GNSS antenna, select a "Generic" profile with a measured height of
0.00 m, measured to the L1 Phase Centre.
 

Figure 3.2 PLS custom rover antenna settings 


 
This custom antenna does not have an antenna profile loaded in Inertial Explorer. Therefore, to accurately
process the rover trajectory we need all baselines from the base reference station measured to the rover
GNSS phase center. Note that Antenna Reference Point (ARP) to phase center measurements vary by
antenna model.

The ARP to phase center offset for the Phoenix LiDAR Systems custom GNSS antenna is 0.043 m, and has
already been accounted for by the GNSS lever arms in SpatialExplorer during acquisition. See section 7
for more information on the difference between GNSS ARPs and phase centers.
 

 
Figure 3.3 Phase center offset (PCO) for Phoenix LiDAR Antenna from ARP to L1 phase center 
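
For illustration, using a hypothetical lever arm value: if the Z lever arm from the IMU to the antenna ARP were 0.200 m, the corresponding lever arm to the L1 phase center would be 0.200 m + 0.043 m = 0.243 m. Because SpatialExplorer already folds the 0.043 m PCO into the recorded GNSS lever arms, the antenna height in Inertial Explorer is simply entered as 0.00 m.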
 

3.2.2 Base Station Data 


● Add the raw base station file(s)
○ Edit the base station details (very important!)
■ Specify the antenna model, the measured height, and whether that height references the ARP or the L1 Phase
Center.

■ Input latitude, longitude, ellipsoidal height, datum and epoch based on the refined coordinates
that were determined in section 3.1.2
 
NOTE: Inertial Explorer will perform a 7-parameter coordinate transformation from your input
datum to your processing datum, but will not account for a time-dependent transformation
between varying epochs. Therefore, you must input your initial reference station coordinate using
the appropriate time-dependent coordinate that corresponds to the desired processing epoch.
 
RESOURCES:
1) NOAA's tool for transforming positional coordinates across time and between spatial
reference frames: https://www.ngs.noaa.gov/cgi-bin/HTDP/htdp.prl
2) NOAA's Horizontal Time-Dependent Positioning Toolbox:
https://www.ngs.noaa.gov/TOOLS/Htdp/Htdp.shtml
■ OPTION: Calculate the reference station coordinates using PPP at this stage only if there is not a
better alternative.
 
● OPTION: Add base stations from download
○ Publicly available reference stations can be utilized for projects without static data from a dedicated
base

Figure 3.4 Download publicly available reference station data from within IE
 
If you are using CORS data, keep in mind that all coordinates from CORS sites are in NAD83 (2011) epoch
2010.000. If you are processing data using a CORS station, and your desired processing epoch is
anything other than epoch 2010.000, you will need to transform the initial positional coordinate for the
input reference station.
 
NOTE: As your baseline increases, accuracy decreases. Due to the nature of RTK/PPK, accuracy degrades
linearly at a rate of roughly 1 mm for each km between the base and rover (for example, a 20 km baseline
contributes about 2 cm of additional error). For this reason, the best trajectory results are produced when
using a base station located on site.
 
○ If PPP is needed, first input the reference station data.

3.3 Process GNSS


● Once finished with the Project Wizard, run 'Process GNSS'
○ Select the processing/output datum for your project.
○ Specify the platform type

3.4 Process Coupled Trajectory


● Process Loosely Coupled (LC) and/or Tightly Coupled (TC) to refine the trajectory solution to within +/- 2 cm in the
Combined Position separation graph and +/- 2 arcmin in the Attitude separation graph for the time period of data acquisition
● Optional: If your solution is < +/- 4 arcmin Attitude separation, you can use the 'solve lever arm' function in LC to
refine/estimate the IMU-to-GNSS-antenna lever arms. Calculate iteratively to refine these values. Once they have
converged, process your LC/TC again to include the new lever arm values during trajectory processing.
Note: Use with caution; this option is best suited for verification and is not a replacement for precise
measurement of the integrated hardware.
● Output SBET and SMRMSG files with names specifying the exact mission, processing datum, and GPS week
○ The SBET will be based on either LC or TC, whichever results are active in IE during output
*When outputting an SBET/SMRMSG, the default input file will always be the TC solution (if you
processed both LC and TC); therefore, if you want to output an LC solution, be sure to verify your input file
is correct (".cls" = LC and ".cts" = TC).
● Output final graphs by outputting an HTML report
*Note: Every time you output an HTML report it writes to an "HTML" folder. If you want to output a separate report
for your LC and TC solutions, be sure to change the name of the "HTML" folder before outputting the second
solution.
● Optional: Output a KMZ of your trajectory to reference with Google Earth or to send to your client 
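
For a quick sanity check of an exported SBET outside of Inertial Explorer, the records can be read directly. The sketch below assumes the commonly used 17-field SBET record layout (all fields little-endian doubles); the file name is a placeholder following the naming advice above:

```python
import numpy as np

# Standard 17-field SBET record: every field is a little-endian double.
SBET_DTYPE = np.dtype([
    ("time", "<f8"),                                        # GPS seconds of week
    ("lat", "<f8"), ("lon", "<f8"),                         # radians
    ("alt", "<f8"),                                         # metres, ellipsoidal
    ("x_vel", "<f8"), ("y_vel", "<f8"), ("z_vel", "<f8"),
    ("roll", "<f8"), ("pitch", "<f8"), ("heading", "<f8"),  # radians
    ("wander", "<f8"),
    ("x_accel", "<f8"), ("y_accel", "<f8"), ("z_accel", "<f8"),
    ("x_ang_rate", "<f8"), ("y_ang_rate", "<f8"), ("z_ang_rate", "<f8"),
])

def read_sbet(path: str) -> np.ndarray:
    """Load an SBET file into a structured NumPy array."""
    return np.fromfile(path, dtype=SBET_DTYPE)

# Placeholder file name encoding mission, processing datum, and GPS week.
traj = read_sbet("mission01_NAD83-2011_week2035.sbet")
print(f"{traj.size} records covering "
      f"{traj['time'][0]:.1f} s to {traj['time'][-1]:.1f} s of the GPS week")
```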

   

4. SpatialExplorer (version 4.0.X) 


In Offline mode, SpatialExplorer is used to export raw range data to common point cloud formats, fuse camera data, apply a
variety of filters, and project using customizable spatial reference system definitions.

4.1 Opening Data 

4.1.1 Phoenix Project File 


● Open the *.plp file to load raw data, acquisition settings and sensor configurations into SpatialExplorer.

4.1.2 Trajectory Data 


SpatialExplorer allows you to load a variety of trajectory files in order to process data. SpatialExplorer supports the following
trajectory file formats:
 
.nav: Unprocessed, low-rate, real-time trajectory data computed during scanning. This trajectory is
automatically loaded with the .plp file.

.POF/.POQ: RIEGL's post-processed, high-rate trajectory data that contains absolute time. Manually exported from a
NovAtel Inertial Explorer loosely-coupled or tightly-coupled trajectory.

.OUT: SBET (Smoothed Best Estimate of a Trajectory) high-rate trajectory data. Manually exported from a
loosely-coupled or tightly-coupled trajectory processed by NovAtel Inertial Explorer. Only contains
time-of-week information; therefore, the correct GPS week and time system have to be specified when
opening the file.

.cls/.cts: Combined Loosely-coupled Smoothed trajectory / Combined Tightly-coupled Smoothed trajectory.
Native high-rate post-processed trajectory created by NovAtel Inertial Explorer starting with version 8.70.
Contains absolute time and does not lead to ambiguities as to which solution (loosely-coupled or
tightly-coupled) the file was exported from.
 
● Open the post-processed trajectory that was refined and exported from Inertial Explorer.
○ File -> Open File -> navigate to either:
■ the SBET and associated SMRMSG files, and specify the GPS week of acquisition;
■ or the .cls/.cts, using caution to select the intended solution.
○ Multiple trajectory files may be open simultaneously. Click on a trajectory name in the "Navigation"
section of the layers widget, then use the green check mark to select it as the source for use during
processing.
 

Figure 4.1 Selecting a navigation source 
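
Because the .OUT SBET stores only time-of-week, it can help to convert a GPS week and seconds-of-week into an absolute timestamp when verifying that you have specified the correct week. A minimal sketch:

```python
from datetime import datetime, timedelta

GPS_EPOCH = datetime(1980, 1, 6)  # start of GPS time (week 0)

def gps_week_tow_to_datetime(gps_week: int, tow_seconds: float) -> datetime:
    """Convert a GPS week number plus time-of-week to a GPS-time datetime.

    GPS time runs ahead of UTC by the accumulated leap seconds
    (18 s since 2017); subtract that offset if UTC is required.
    """
    return GPS_EPOCH + timedelta(weeks=gps_week, seconds=tow_seconds)

# Example: week 2035, 205200 s into the week (a Tuesday morning in early 2019).
print(gps_week_tow_to_datetime(2035, 205200.0))
```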



4.2 Produce LAS/LAZ File 

4.2.1 Processing Intervals 


Time intervals can be set to filter out data from turns, take-off/landing, and other off-flightline segments of the trajectory
during export. There are 2 methods for creating processing intervals in SpatialExplorer:
● With the desired Navigation source highlighted in the Project layers widget, click the Split Trajectory tool (Scissors 
icon). 
○ Utilize the available parameters to automatically search for straight flightlines to use as processing 
intervals 
 

Figure 4.2 
 
● Or, use the Positions Measurement tool to mark start and stop locations for each interval you would like to create
along the trajectory.
○ Navigate to the Intervals layer of the Project widget and click the Eyedropper icon to sample the
measurements you have just placed.
○ Assign a name to your intervals.

4.2.2 Finalize Sensor Parameters 


At this point, you can choose to refine the final processing parameters for each of your sensors (LiDAR, camera, etc.).
● The configuration for each sensor can be accessed from the Project window by clicking on the wrench icon 
 

Figure 4.3 
 
● You can configure the numerous processing or calibration parameters associated with the sensors or cameras from 
this interface. 

4.2.3 Export
Navigate to the file menu and open the Export to LAS dialog
● Specify an output directory and give a name to the file


○ It is recommended to include the coordinate system and datum in the name of your export 
● Select the flightline processing intervals from the dropdown list 
● Specify the desired output coordinate system 
○ SpatialExplorer defaults to the project's local UTM zone
○ Use the Radio button to choose a custom coordinate system 
■ Keep in mind that at this point your trajectory is based on the Processing Datum that was set in IE.
SpatialExplorer assumes that trajectories are referenced to WGS84, and by default the coordinate
system library does not transform between WGS84 and NAD83. Thus, setting your desired datum in IE is
important. Other datums may require special attention.
■ User-defined/edited proj4 strings (e.g., +proj=utm +zone=11 +datum=NAD83 +units=m) can be saved and
utilized to implement custom definitions. For more information on creating custom definitions see:
www.proj4.org
● Specify the time format.
○ If you are going to be post-processing the LAS/LAZ file in TerraSolid, you must ensure the Time Format
selected is Time of Week (seconds). Seconds of the week makes it easy to match the point cloud records
with the times in the SBET when calibrating in TerraSolid.
● Set the Point Source ID
○ If using a multi-laser LiDAR sensor, make sure the Point Source ID is set to "Index of Laser Head
(Multilaser)." The Point Source ID option allows you to associate each point in a cloud with its
corresponding laser of origin. Much like the X, Y, and Z coordinate values or the R/G/B values, the
Point Source ID is a property of every point in a LAS/LAZ file. A quick way to verify both settings is
shown in the sketch after this list.
● Click OK to begin exporting your point cloud
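
After the export finishes, a quick scripted check can confirm that the time format and Point Source ID came through as intended. This sketch uses the open-source laspy package; the file name is a placeholder:

```python
import laspy   # pip install laspy
import numpy as np

las = laspy.read("mission01_NAD83-2011_UTM11N.las")  # placeholder export name

# With "Time of Week (seconds)" selected, gps_time should stay within
# a single GPS week (0 to 604800 s).
print("time range:", las.gps_time.min(), "-", las.gps_time.max())

# With "Index of Laser Head (Multilaser)" selected, point_source_id
# holds the originating laser head for every point.
heads, counts = np.unique(las.point_source_id, return_counts=True)
for head, count in zip(heads, counts):
    print(f"laser head {head}: {count} points")
```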

4.3 Produce Image Timing List 


In order to use the captured images in TerraPhoto, you will need to produce an image timing list.
● Navigate to the configuration dialog(s) for your project camera(s).
● Select the Calibration tab and locate the "Tools" button near the bottom.
● Click the "Tools" button and select the "Export TerraPhoto image timing" option.

   

5. Microstation and TerraSolid 


To load the TerraSolid suite, you will need to launch Microstation and load the TerraSolid applications through the MDL
Applications tool.

5.1 TerraScan/Match Setup 

5.1.1 Define Project 


To create a new project, you need to open “Define Project” from the main toolbar. 

Figure 5.1: Creating a Project 


 
● Ensure the "color" checkbox is selected in the "Attributes" dialog.
● Utilize the ".fbi" format instead of .las/.laz as the intermediate storage format in mobile projects
○ Enable the storage of "Normal vector" and "Image numbers"
● You will need to create paths to a "Separate directory" for your project points and for your trajectories.
● Make sure to define the block size based on the point density of your project (typically 100 m to 300 m).
● Once you are done, save your project.

5.1.2 Importing Points into Project 


● Before reading any points, ensure that the TerraScan "Define Coordinate Setup" origin and resolution are set
appropriately for the coordinates of your data.
● In the TerraScan project window: File -> 'Import Points into Project'
○ Leave everything at its default except:
■ Multi-laser system (Velodyne): Scanner: from line number

5.1.3 Manage Trajectories 


Open "Manage Trajectories" from the TerraScan main toolbar.
● Click on "Import files" and find the SBET you created in Inertial Explorer.
● Ensure the same coordinate system is applied to the trajectory as you applied to the .las/.laz in SpatialExplorer.
● 'Import accuracy file' and find your SMRMSG file to apply to the SBET
● Isolate individual flightlines
○ Use the "Split at laser gaps" function from the Tools dropdown to automatically detect the processing
intervals that were set in SpatialExplorer.
○ Alternatively, interactively split flightlines if processing intervals were not set in SpatialExplorer
■ Tools -> 'Draw into Design' to show the trajectory in view 1
■ Use the 'Split' tool to cut the turns out of the trajectory and isolate individual flightlines.
■ Iteratively delete the turns from both the display and the trajectory dialog until you are left with
only straight, isolated flightlines.
 

Figure 5.2: Individual, Straight Flightlines 

5.1.4 Prepare points for calibration 


With points imported and flightlines split, you will need to prepare the project for calibration
● Run a macro to "Deduce line numbers" for the project points to ensure that they match the trajectory numbers,
and "Delete line" 0 to remove turn data if trajectories were manually split.
○ Write over the original
● In the project window -> Boundaries -> 'Draw boundaries'
○ Label: Full File Name (optional)
● Run a grounding macro and write the new project points to a 'Grounded' folder
○ Process lines separately
○ Process scanners separately
○ Compute normals and classify buildings/walls for mobile data calibration
○ Use at least 5 m neighbors (more with large buildings/trees)

● Edit the project information to point the directory to the ‘Grounded’ folder 
○ Save the project 

5.2 Calibration through TerraScan and TerraMatch 


Here we will use the functions in TerraScan and TerraMatch to create tie lines and solve for global, flightline, and fluctuating
corrections to maximize the relative accuracy.
● In the TerraMatch toolbar, run Measure Match to assess relative accuracy
○ Save the report as a baseline
● Visually spot-check the data

5.2.1 Global Corrections 


● In the TerraMatch toolbar: 'Define Tie Lines'
○ Set the max error of xy and z based on your initial offsets
■ This controls how far apart flightlines can be in order to make observations between them
○ Multi-laser systems: check 'Separate scanners'
○ Mobile projects: specify the Wall/Building class for vertical tie line observations
 

Figure 5.3: Define Tie Lines Settings Dialog Example 


 
● In the next window: File -> Search Tie Lines
○ Edit the parameters of the next dialog based on the type of dataset
○ Tip: iteratively try different settings to optimize the number of tie lines found
 

Figure 5.4: Search Tie Lines Dialog (Example from Mobile Mission) 
 
● Save the tie lines found 
● In the TerraMatch toolbar -> Find Tie Line Match
○ Solve for the whole dataset, and solve for Heading, Roll, and Pitch only

Figure 5.5: Tie Line Match Dialog (Mobile Applications use “Generic” System) 
○ Save the corrections found as a .tms and save the report as a .txt
● Run 'Apply Corrections' from the TerraMatch toolbar
○ Use the corrections from above and save to a new folder
● In the 'Define Project' window, set the .las directory to your new folder of corrected .las
○ Save the project
● Run Measure Match with the same settings as previously
○ Save the report

5.2.1.1 Multi-Laser Corrections 

● Go through the Define Tie Lines dialog and search tie lines as you did previously
● Find Tie Line Match the same as before, but with only 'mirror scale' checked and a solution per scanner
○ Save corrections and report and ‘apply corrections’ to a new folder 
● Change the directory in your ‘define project’ to this new folder of corrections 
○ Save the project 
● Run Measure Match with the same settings as before, save report 
● Use Define Tie Lines again and search and save a new set of tie lines
● Find Tie Line Match the same as before, but with Roll, Pitch, and Heading checked and a solution per scanner
○ Save corrections and report and ‘apply corrections’ to a new folder 
● Change the directory in your ‘define project’ to this new folder of corrections 
○ Save the project 
● Run Measure Match with the same settings as before, save report 

5.2.2 Flightline Optimization 

5.2.2.1 Flightlines 

● Use Define Tie Lines again and search and save a new set of tie lines 
● Find Tie Line Match and use ‘Individual Lines’ instead of ‘Whole Data Set’ to find individual flightline corrections 
○ Save the corrections and report and use ‘Apply Corrections’ to create a new set of corrected .las 
● Change the directory in your ‘define project’ to this new folder of corrections 
○ Save the project 
● Run Measure Match with the same settings as before 

5.2.2.2 Fluctuating Corrections 

● Use Define Tie Lines again and search and save a new set of tie lines 
● Use the “Find Tie Line Fluctuations” tool to find fluctuation corrections for the dataset 
○ Only solve for Z with a smooth curve and 2*trajectory accuracy 

Figure 5.6 Find Tie Line Fluctuations 


● Save the corrections found and the report 

● Use 'Apply Corrections' to apply the fluctuation corrections and write a new set of .las
● Edit the project information to point to this new folder and save your project
● Run Measure Match with the same settings as previously and save your report
Now your data has been calibrated and is ready for further classification or creation of derivative products.

5.3 TerraPhoto: Orthomosaic Creation and RGB Extraction 


Make sure TerraPhoto is loaded through the MDL applications in Microstation. 

5.3.1 TerraPhoto Setup 


● In the TerraScan main window, read in all of the ground points for your project area. This will provide TerraPhoto
a ground surface with which to properly rectify and mosaic the imagery
● In the TerraPhoto toolbar, find 'Manage Camera Trajectories' and point the directory to the same folder containing
the trimmed trajectories you used with TerraScan and TerraMatch.
● In TerraPhoto -> Create a New Mission 
○ Create directories for temp, ortho, and mosaic outputs 
○ Set up a mission camera 
■ Use the most up-to-date .cal file possible for your specific camera
■ Positions=normal, naming=eagle eye, format=.jpg
■ Specify the image directory of JPGs 
● In TerraPhoto -> Points -> Load from TerraScan, specify the ground class *Not necessary for Mobile applications*
● In TerraPhoto -> Images -> Compute list. Use the image timing list .txt created from SpatialExplorer 
● In TerraPhoto -> Utilities -> Create Thumbnails, Shadow maps, and Depth maps 
● Optionally, with the TerraMatch "Apply Corrections" tool, apply the flightline and fluctuation corrections used on
the point cloud to the image timing list to ensure the two datasets coincide
● Spot-check the initial image accuracy
○ For aerial data, in TerraPhoto -> Rectify -> Color Points. This creates an initial orthomosaic to view
■ Assess seamlines and radiometric consistency
● Check for image misalignment caused by an image-timing mismatch
● Decide if the camera needs additional calibration
○ For Mobile Ladybug Systems: in the TerraPhoto main window -> View -> Camera View
■ In view 7 (or whichever view was specified in 'Camera View'), look at a particular image with the
LiDAR points loaded and displayed in the same area to assess how well the imagery coincides
with the LiDAR data. If you see slight offsets between the two datasets, go through the camera
calibration refinement below. If you see large differences, check the image timing list and your
raw photos for obvious skipped events and reconcile them in your image timing list

5.3.2 Tie Point Adjustments 


● For aerial missions, in TerraPhoto -> Image -> Define Tie Points
○ Cycle through the image list that is created and find an image with well-defined, identifiable features on
the ground surface (not above-ground objects).
○ Using one image, use 'Point' -> 'Add Ground' to select a feature in view 1, then the same refined location
in view 3.
○ In view 4, a series of other images that contain the same feature are shown. Select the same location on
the feature as you did in view 2.
○ Quality is paramount over quantity, although you need at least 5 points to provide a decent initial camera
calibration.
○ If you can find an image with minimal above-ground features, you can try automated tie points using File
-> Search Tie Points. With this method, be diligent about deleting points that are on above-ground features.

Figure 5.8: Example of automatic image tie points 


● If using a mobile system with a Ladybug camera, use these steps for image tie points:
○ Cycle through the image list created in this dialog and find an image with well-defined, identifiable
features on vertical surfaces near the drive lines that can be seen from multiple angles.
○ Using one image, use 'Point' -> 'Add Air Point' to select a feature in view 1, then the same refined location
in view 2.
○ In view 3, a series of other images that contain the same feature are shown. Select the same location as
in view 2.
○ Quality is paramount over quantity, although you'll need at least 5 points to provide a decent camera
calibration.
● Save these tie points and go to File -> Output Report
○ Note the starting and final misalignment values at the bottom of the report and ensure that they are
improving.
○ Within this dialog: Tools -> Apply Misalignment. Save this report as well.
○ This will change the extrinsic values in the camera calibration file (.cal).
● Open Camera Definition from the TerraPhoto toolbar
○ The new extrinsic values now appear here.
○ Tools -> Solve Parameters (re-calculates the intrinsic values based on the tie points, using the new extrinsics).
○ Solve for all three intrinsics and save the camera definition.
○ Click 'Apply' to apply these new values to the calculations that created the initial image timing list and
refine the image positions and orientations.
● Rectify -> Define Color Points to check your calibration by assessing the on-the-fly orthomosaic.
○ For Mobile Ladybug Systems, assess this calibration quality with the 'Create View' tool under 'View'
● Repeat the tie point and intrinsic/extrinsic calibration process until the orthomosaic created through 'Define Color
Points' has acceptable image alignment.
○ For Mobile Ladybug Systems, assess this through the 'Create View' tool under 'View'.

5.3.3 Color Points 


● In the TerraPhoto toolbar, find the color points toolbox and open it separately. 

5.3.3.1 Color Balance 

● Start with an automatic search for color points using the Color Points main window -> File -> Search Points.
● Then use the Color Points toolbox to place or delete color points and overcome any existing radiometric issues
with the orthomosaic. Save these points when done.

5.3.3.2 Seamlines 

● Start with an automatic search for selection shapes using the Color Points main window -> Image -> Search
Seamlines. Search between all images.
● Then use the seamline editing tools on the right side of the Color Points toolbox (either selection shape or brush) to
edit the seamlines and minimize any parallax issues.

5.3.3.3 Seamlines for Mobile Applications and RGB extraction 


● For seamline edits to be used in RGB extraction or in mobile applications, the LiDAR file format needs to be
changed to Fast Binary (.fbi), either in 'Define Project' or in the loaded points in TerraScan.
● Compute normal vectors for the entire project/file (towards trajectory)
● Use Tools -> Extract Color from Images and ensure image numbers are computed and stored
● After extraction: use Rectify -> Define Color Points to start the seamline editing process (mode: Point cloud)
● Use the selection shapes tool(s) as described in the aerial post-processing workflow
○ After each selection shape is placed, manually choose the best image to provide the RGB information
● After finishing selection shapes, use File -> Recompute All (in the Color Points ribbon)
○ Uncheck 'image assignments' and check 'Point Colors'.
● Complete any radiometric alteration using 'Color Points' in the same process as with aerial projects.
● Now the image number prioritization applied through the selection shapes process is written into each point
○ Use 'Extract Color from Images', ensure the image numbers are set to 'Use Stored', and load the
color points to keep the selection shape seamline and radiometric edits in the RGB extraction process.

5.3.4 Orthomosaic Output 


To output an orthomosaic: *Not applicable to Mobile Ladybug Systems*
● Create an AOI for your mosaic to be clipped to, or use the bin boundaries from your LiDAR project to tile the
mosaic output.
● Rectify -> Rectify Mosaic
○ Specify your desired output resolution (pixel size) and use 3*8 bit color
○ Specify your coordinate system to ensure a proper georeference, and generate a .tfw/.ecw file.
○ Use the color points and selection shapes from the seamline and radiometric editing.

5.3.5 RGB Extraction 


To extract RGB information into the point cloud:
● In your TerraScan Project window -> Tools -> Extract Color from Images
○ Use raw images and specify the color points you saved from earlier steps.
○ Use Closest in 3D for the geometry of colorization, although feel free to try other methods; project and
platform type can make other methods preferable.
○ *Use Closest in Time for Mobile Ladybug Systems to extract RGB information to the point cloud*
○ Use depth maps and a minimal footprint (~1-2 cm).
 

5.4 Ground Control and Assessing Absolute Accuracies 


If you do not have ground control for a specific project or area, this section should be skipped. Use the relative accuracy
from the last Measure Match report as the reportable accuracy, but clearly communicate that this is the relative accuracy of
line-to-line ground surfaces, not the absolute accuracy of the point cloud relative to a real-world surface.
● Open your GCP file and format it into a tab-delimited .txt file without headers, ordered Northing, Easting, Z, and
without any descriptor columns (a scripted sketch follows this list). Ensure the coordinate system used by the GCPs
is the exact system that your point cloud is in; otherwise, re-project the points. You should also have a subset of
withheld GCPs for a final accuracy report.
● In the TerraScan Project window -> Tools -> Output control report (and save) 

Figure 5.7 Sample Control Report 


● Note the average Dz. This will be your vertical shift, applied to the point cloud to move the data up or down to the
elevation of the GCPs. This will correct any error introduced by the base station height or other raw inputs
with incorrect elevations.
● Create a macro to 'Transform points' by the Dz noted above, then run this macro on the project.
● Change the project directory to reflect this new set of .las/.laz and re-run the control report against these new
points with the GCP file of withheld points.
● Save this report as the final accuracy report. The RMS value is your reportable absolute accuracy, with the
standard deviation as a good reference for the 'tightness' of the accuracy.
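
Both the GCP reformatting and the headline statistics can be scripted. The sketch below assumes a CSV with named columns; the column names and file names are placeholders, and the Dz residuals are illustrative values, not real results:

```python
import csv
import math
import random

# Reformat GCPs into the tab-delimited "Northing<TAB>Easting<TAB>Z" layout
# with no header and no descriptor columns, withholding roughly one in five
# points for the final accuracy report.
with open("gcps_raw.csv", newline="") as f:
    rows = list(csv.DictReader(f))   # assumes northing/easting/elevation columns

random.seed(0)
withheld = set(random.sample(range(len(rows)), k=max(1, len(rows) // 5)))

with open("gcps_control.txt", "w") as ctl, open("gcps_withheld.txt", "w") as chk:
    for i, r in enumerate(rows):
        line = f"{r['northing']}\t{r['easting']}\t{r['elevation']}\n"
        (chk if i in withheld else ctl).write(line)

# Given Dz residuals from a control report, compute the vertical shift to
# apply via the 'Transform points' macro and the reportable accuracy figures.
dz = [-0.031, -0.024, -0.035, -0.029]                           # example values
mean_dz = sum(dz) / len(dz)                                     # vertical shift
rms = math.sqrt(sum(d * d for d in dz) / len(dz))               # absolute accuracy
std = math.sqrt(sum((d - mean_dz) ** 2 for d in dz) / len(dz))  # "tightness"
print(f"mean Dz {mean_dz:+.3f} m, RMS {rms:.3f} m, std {std:.3f} m")
```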

6. Mobile Data Processing Tips 


When processing Mobile datasets, a few considerations should be kept in mind. The processing is largely the same, with
some nuanced differences:
● Convert .pgr streams to panoramic images with the rigorous processing setting and an appropriate stitching
sphere size.
● Ensure all of the image events were converted to panoramics, to verify that the .pgr streams are not corrupted.
● Walk through all images to check for color issues or potential dropouts or duplicates.
● Trim the image timing list as you see fit in relation to the above issues.
● Try using the TerraPhoto tool "Delete Closeby Images" in the 'Images' dropdown to remove duplicate images from
stops.
● Creating depth maps: set the depth resolution to 1 mm if processing time allows, otherwise 1-2 cm.
● Compute normal vectors: this aids the RGB extraction process for mobile applications, although it is not strictly necessary.

6.0.1 Create Mask for Panoramic Image RGB Extraction 


● To create a mask to exclude the car and antenna from the RGB extraction process:
1. Open a .dgn that has Easting and Northing origins set to zero in 'Photo Define Coordinate Setup'.
2. Start 'Manage Raster References' from the TerraPhoto toolbar.
3. Attach one raw image (for example, ladybug_panoramic_000001.jpg).
4. Select this image in the reference list and start the 'Edit / Modify attachment' menu command.
5. Click OK. The image gets positioned with its lower-left corner at 0,0 and a pixel size of 1.0 master units.
6. Start 'Display / Fit / All' and click in view 1.
7. Draw a polygon over the bad image area.
8. Start 'Define Camera'.
9. Use 'File / Open' to open ladybug.cal.
10. Select the polygon drawn in step 7.
11. Start 'Tools / Assign bad polygons'.
12. Save the camera calibration.

   

7. Frequently Asked Questions 


1. How can I contact Phoenix LiDAR Systems for support?
If you need help, we offer a number of options for customer support. The preferred method of communication for
support is through our ticketing system. Send an email to support@phoenixlidar.com with the subject line as the
title of your question, and include your question in the body of the email.
 
2. What is the difference between an ellipsoid and a geoid?
Ellipsoid comes from the word "ellipse," which is simply a generalization of a circle. Ellipsoids are generalizations
of spheres. The Earth is not a true sphere; it is an ellipsoid, as Earth is slightly wider than it is tall. Although other
models exist, the ellipsoid is the best fit to Earth's true shape.

Like the ellipsoid, the geoid is a model of the Earth's surface. According to the University of Oklahoma, "the geoid
is a representation of the surface of the earth that it would assume, if the sea covered the earth." This
representation is also called the "surface of equal gravitational potential," and essentially represents "mean
sea level." The geoid model is not an exact representation of the sea level surface: dynamic effects, such as waves
and tides, are excluded from the geoid model.

Unlike the geoid, the ellipsoid assumes that Earth's surface is smooth. Additionally, it assumes that the planet is
completely homogeneous. If this were true, Earth could have no mountains or trenches, and the mean sea
level would coincide with the ellipsoid surface. This is not true, however. Vertical distance exists between the
geoid and the ellipsoid as a result of the geoid accounting for mountains and trenches in its Earth model.
This difference is known as the "geoid height." The differences between the ellipsoid and geoid can be significant,
as the ellipsoid is merely a baseline for measuring topographic elevation. It assumes that the Earth's surface is
smooth, where the geoid does not.
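
The relationship between the two reference surfaces is commonly written as h = H + N, where h is the ellipsoidal height, H is the orthometric (geoid-referenced) height, and N is the geoid height. As a worked example, at a location where the geoid height N is -30 m, a point with an ellipsoidal height h of 70 m has an orthometric height H of 70 - (-30) = 100 m.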
3. What is the difference between Antenna Reference Point (ARP) and Antenna Phase Center? 
All GNSS measurements are referred to the antenna phase center, where the antenna phase center is defined as the 
apparent source of radiation. The position of the antenna phase center is not necessarily the geometric center of 
the antenna. The actual antenna phase centers for L1 and L2 frequencies are points in space, and cannot be 
measured physically.  
 
Since the Antenna Phase Center cannot be measured physically, the Antenna Reference Point (ARP) is used as a 
physical measurement point, and the known Phase Center Offset (PCO) is applied to the ARP measurement to yield 
the true Phase Center measurement. The ARP on a GNSS antenna is typically the point on the centerline of the 
antenna at the mounting surface. 
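
Expressed as a simple relation: phase center position = ARP position + Phase Center Offset (PCO). Using the Phoenix LiDAR Systems custom antenna from section 3.2.1 as a worked example, a height measured to the ARP would have the 0.043 m L1 PCO added to it to obtain the height of the L1 phase center.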

This content is subject to change. 


Download the latest version from www.phoenixlidar.com.
 
If you have any questions about this document, please contact Phoenix LiDAR Systems by sending a message to
support@phoenixlidar.com.
 
Copyright © 2018 Phoenix LiDAR Systems  
All Rights Reserved. 
