Note
We plan to release a TensorRT-accelerated implementation and adapt more matching networks for MAC-VO. If you are interested, please star ⭐ this repo to stay tuned.
Note
We provide documentation for extending MAC-VO or using this repository as a boilerplate for your own learning-based Visual Odometry.
- [Jun 2025] We released the MAC-VO Fast Mode: with faster pose graph optimization and mixed-precision inference, we achieve a 2x speedup over the previous version, reaching 12.5 fps on 480x640 images. See Config/Experiment/MACVO/MACVO_Fast.yaml for details. The original example is also boosted from 5 fps to 7 fps, and its config file is moved to MACVO_Performant.yaml.
- [Apr 2025] Our work was nominated as an ICRA 2025 Best Paper Award Finalist (top 1%)! Keep an eye on our presentation on May 20, 16:35-16:40, Room 302. We also plan to provide a real-world demo at the conference.
- [Mar 2025] We boosted the performance of MAC-VO with a new backend optimizer; MAC-VO now also supports dense mapping without any additional computation.
- [Jan 2025] Our work was accepted by the IEEE International Conference on Robotics and Automation (ICRA) 2025. We will present our work at ICRA 2025 in Atlanta, Georgia, USA.
- [Nov 2024] We released the ROS-2 integration at https://github.com/MAC-VO/MAC-VO-ROS2 along with the documentation at https://mac-vo.github.io/wiki/ROS/
Clone the repository using the following command to include all submodules automatically.
$ git clone https://github.com/MAC-VO/MAC-VO.git --recursive
| Component | Minimum Version | Notes |
|---|---|---|
| CUDA Runtime | ≥ 12.4 | The Dockerfile installs the correct version |
| Python | ≥ 3.10 | |
| VRAM | ≥ 6 GB | For 640×480 input; fast mode (mixed precision) needs 2.7 GB |
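As a quick sanity check of these requirements, a short Python sketch (the threshold numbers simply mirror the table above):

```python
import sys
import torch  # requires a PyTorch build matching your CUDA runtime

# Sanity-check the requirements from the table above.
assert sys.version_info >= (3, 10), "MAC-VO requires Python 3.10+"
assert torch.cuda.is_available(), "No CUDA device visible to PyTorch"

vram_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
print(f"CUDA {torch.version.cuda}, VRAM {vram_gb:.1f} GB "
      f"(>= 6 GB recommended; ~2.7 GB for fast mode)")
```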
- Docker Image

  $ docker build --network=host -t macvo:latest -f Docker/Dockerfile .

- Virtual Environment

  You can set up the dependencies in your native system. The MAC-VO codebase requires Python 3.10+. See requirements.txt for the environment requirements.

  How to adapt the MAC-VO codebase to Python < 3.10? The Python version requirement mostly comes from the match syntax and the type annotations we use. The match syntax can easily be replaced with if ... elif ... else (as sketched below), while the type annotations can simply be removed, as they do not affect runtime behavior.
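For illustration, a minimal sketch (not taken from the MAC-VO codebase) of this back-porting transformation:

```python
# Python 3.10+ style, as used in the codebase:
def dispatch(kind: str) -> str:
    match kind:
        case "stereo":
            return "stereo pipeline"
        case "mono":
            return "mono pipeline"
        case _:
            return "unknown"

# Equivalent Python < 3.10 version: drop the annotations, use if/elif/else.
def dispatch_legacy(kind):
    if kind == "stereo":
        return "stereo pipeline"
    elif kind == "mono":
        return "mono pipeline"
    else:
        return "unknown"
```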
All pretrained models for MAC-VO, stereo TartanVO, and DPVO are available on our release page. Please create a new folder Model in the root directory and put the pretrained models in it.
$ mkdir Model
$ wget -O Model/MACVO_FrontendCov.pth https://github.com/MAC-VO/MAC-VO/releases/download/model/MACVO_FrontendCov.pth
$ wget -O Model/MACVO_posenet.pkl https://github.com/MAC-VO/MAC-VO/releases/download/model/MACVO_posenet.pkl
Test MAC-VO immediately using the provided demo sequence. The demo sequence is selected from the TartanAir v2 dataset.
- Download a demo sequence through Google Drive.
- Download pre-trained model for frontend model and posenet.
To run the Docker container:
$ docker run --gpus all -it --rm -v [DATA_PATH]:/data -v [CODE_PATH]:/home/macvo/workspace macvo:latest
To run the Docker container with visualization:
$ xhost +local:docker; docker run --gpus all -it --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v [DATA_PATH]:/data -v [CODE_PATH]:/home/macvo/workspace macvo:latest
We will use Config/Experiment/MACVO/MACVO_example.yaml as the configuration file for MAC-VO.
- Change the root in the data config file Config/Sequence/TartanAir_example.yaml to reflect the actual path to the downloaded demo sequence.
- Run one of the following commands:

  Performant Mode - best performance at moderate speed (7.5 fps on 480x640 images)

  $ cd workspace
  $ python3 MACVO.py --odom Config/Experiment/MACVO/MACVO_Performant.yaml --data Config/Sequence/TartanAir_example.yaml

  Fast Mode - slightly degraded performance (<5% increase in RTE and ROE) at the highest speed (12.5 fps on 480x640 images)

  $ cd workspace
  $ python3 MACVO.py --odom Config/Experiment/MACVO/MACVO_Fast.yaml --data Config/Sequence/TartanAir_example.yaml
Note
See python MACVO.py --help
for more flags and configurations.
The demo sequence is RGB-only. If your dataset includes depth.npy and/or flow.npy, set both flags to true.
Every run produces a Sandbox (or Space). A Sandbox is a storage unit that contains all the results and meta-information of an experiment. The evaluation and plotting scripts usually require the path(s) of one or more sandboxes.
Calculate the absolute translation error (ATE, m), relative translation error (RTE, m/frame), relative orientation error (ROE, deg/frame), and relative pose error (per frame, on se(3)).
$ python -m Evaluation.EvalSeq --spaces SPACE_0, [SPACE, ...]
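For reference, the standard per-frame definitions behind these metrics (our summary, not copied from the evaluation code): with ground-truth poses $T_i$ and estimates $\hat{T}_i$ in $SE(3)$,

$$E_i = \left(T_i^{-1} T_{i+1}\right)^{-1} \left(\hat{T}_i^{-1} \hat{T}_{i+1}\right), \quad \mathrm{RTE} = \frac{1}{N-1}\sum_i \lVert \mathrm{trans}(E_i) \rVert_2, \quad \mathrm{ROE} = \frac{1}{N-1}\sum_i \left|\angle\,\mathrm{rot}(E_i)\right|,$$

and ATE is the RMSE of $\lVert \mathrm{trans}(\hat{T}_i) - \mathrm{trans}(T_i) \rVert_2$ after trajectory alignment.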
Plot sequences, translation, translation error, rotation and rotation error.
$ python -m Evaluation.PlotSeq --spaces SPACE_0, [SPACE, ...]
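Since both scripts accept multiple sandboxes, a small batch-driver sketch (the sandbox paths below are hypothetical):

```python
import subprocess

# Hypothetical sandbox paths produced by earlier runs.
sandboxes = ["Results/MACVO_run0", "Results/MACVO_run1"]

# Evaluate, then plot, every sandbox in one go.
for module in ("Evaluation.EvalSeq", "Evaluation.PlotSeq"):
    subprocess.run(["python", "-m", module, "--spaces", *sandboxes], check=True)
```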
- Run MAC-VO (Our Method) on a Single Sequence

  $ python MACVO.py --odom ./Config/Experiment/MACVO/MACVO.yaml --data ./Config/Sequence/TartanAir_abandonfac_001.yaml

- Run MAC-VO for Ablation Studies

  $ python MACVO.py --odom ./Config/Experiment/MACVO/Ablation_Study/[CHOOSE_ONE_CFG].yaml --data ./Config/Sequence/TartanAir_abandonfac_001.yaml

- Run MAC-VO on the Test Dataset

  $ python -m Scripts.Experiment.Experiment_MACVO --odom [PATH_TO_ODOM_CONFIG]

- Run MAC-VO in Mapping Mode

  $ python MACVO.py --odom ./Config/Experiment/MACVO/MACVO_MappingMode.yaml --data ./Config/Sequence/TartanAir_abandonfac_001.yaml
We use the Rerun visualizer to visualize the 3D space, including camera poses, the point cloud, and the trajectory.
- On a Machine with GUI

  - Run MACVO.py with the following command line:

    $ python MACVO.py --useRR --odom [ODOM_CONFIG] --data [DATA_CONFIG]

    A Rerun visualizer should pop up with the trajectory and the per-frame point cloud & tracked features visualized.

  - To accumulate the point cloud for dense mapping visualization, please follow the instructions here: #4 (comment)

- On a Headless Machine

  - Install the rerun_sdk Python package on both your local machine (with GUI) and the remote headless environment. Also set up port forwarding from remote port 9877 to local port 9877.
  - Start a Rerun server by running rerun --serve & on the headless machine.
  - On your local machine (with GUI), run rerun ws://localhost:9877 to connect to the remote visualization server. You should see "2 sources connected" in the top-right corner of the visualizer if everything works smoothly.
  - On the headless machine, run:

    $ python MACVO.py --useRR --odom [ODOM_CONFIG] --data [DATA_CONFIG]

  - To accumulate the point cloud for dense mapping visualization, please follow the instructions here: #4 (comment)
We also integrated two baseline methods (DPVO, TartanVO Stereo) into the codebase for evaluation, visualization and comparison.
- Run DPVO on Test Dataset

  $ python -m Scripts.Experiment.Experiment_DPVO --odom ./Config/Experiment/Baseline/DPVO/DPVO.yaml

- Run TartanVO (Stereo) on Test Dataset

  $ python -m Scripts.Experiment.Experiment_TartanVO --odom ./Config/Experiment/Baseline/TartanVO/TartanVOStereo.yaml
PyTorch Tensor Data - All images are stored in BxCxHxW format following the PyTorch convention; the batch dimension is always the first dimension of the tensor.
Pixels on Camera Plane - All pixel coordinates are stored in uv format following the OpenCV convention, where the uv directions are "east-down". Note that this requires indexing PyTorch tensors as data[..., v, u].
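A small sketch of these two conventions together (tensor shapes hypothetical):

```python
import torch

B, C, H, W = 2, 3, 480, 640
images = torch.zeros(B, C, H, W)   # BxCxHxW: batch dimension first

u, v = 320, 240                    # OpenCV-style uv, "east-down" directions
pixel = images[..., v, u]          # index as [..., v, u], NOT [..., u, v]
assert pixel.shape == (B, C)
```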
World Coordinate - NED convention: +x -> North, +y -> East, +z -> Down, with the first frame being the world origin with an identity SE(3) pose.
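And a sketch of the world-frame convention (the point coordinates are hypothetical):

```python
import torch

T_world_cam0 = torch.eye(4)                # frame 0: identity SE(3) pose

# A point 5 m north, 2 m east, and 1.5 m ABOVE the origin:
p_world = torch.tensor([5.0, 2.0, -1.5])   # +x North, +y East, +z Down
```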
This codebase is designed with modularization in mind, so it is easy to modify, replace, and re-configure the modules of MAC-VO. One can easily use or replace the provided modules (flow estimator, depth estimator, keypoint selector, etc.) to create a new visual odometry system.
We welcome everyone to extend and redevelop MAC-VO. For documentation, please visit the Documentation Site.
To test MAC-VO on your custom data format, you can use the GeneralStereo dataloader class in DataLoader/Dataset/GeneralStereo.py as a starting point.
This dataloader class corresponds to the Config/Sequence/Example_GeneralStereo.yaml configuration file, where you can manually set the camera intrinsics, stereo baseline, etc.
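The intrinsics and baseline matter because stereo depth follows depth = fx · baseline / disparity; a minimal sketch with hypothetical values:

```python
import torch

fx = 320.0        # focal length in pixels (hypothetical)
baseline = 0.25   # stereo baseline in meters (hypothetical)

disparity = torch.tensor([4.0, 8.0, 16.0])  # pixels
depth = fx * baseline / disparity           # -> tensor([20., 10., 5.]) meters
```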
@inproceedings{qiu2025mac,
title={MAC-VO: Metrics-Aware Covariance for Learning-Based Stereo Visual Odometry},
author={Qiu, Yuheng and Chen, Yutian and Zhang, Zihao and Wang, Wenshan and Scherer, Sebastian},
booktitle={2025 IEEE International Conference on Robotics and Automation (ICRA)},
pages={3803--3814},
year={2025},
organization={IEEE}
}