SDK Installation
This guide covers installing the Holoscan SDK development stack for NVIDIA Developer Kits (arm64) and x86_64 Linux platforms.
For production deployments on NVIDIA Developer Kits like IGX Orin, consider the deployment stack based on OpenEmbedded/Yocto. It provides a minimal runtime Board Support Package (BSP) that can be optimized for memory usage, speed, security, and power to run your Holoscan application.
Set up your developer kit:
Developer Kit | User Guide | OS | GPU Mode |
---|---|---|---|
NVIDIA Jetson AGX Thor | Guide | JetPack 7.0 | dGPU |
NVIDIA IGX Orin | Guide | IGX Software 1.1.1 Production Release | iGPU or* dGPU |
NVIDIA Jetson AGX Orin and Orin Nano | Guide | JetPack 6.2.1 | iGPU |
NVIDIA Clara AGX (NGC container only) | Guide | HoloPack 1.2 (upgrade to 535+ drivers required) | dGPU |
* iGPU and dGPU can be used concurrently on a single developer kit in dGPU mode. See details here.
This version of the Holoscan SDK has been tested on the following Superchips:
SuperChip | Tested OS | Display Support |
---|---|---|
DGX Spark (GB10) | NVIDIA DGX OS (Ubuntu 24.04) | Yes |
Grace-Hopper (GH200) | Ubuntu Server 22.04¹ | No² (headless only) |
¹ Ubuntu installation guide for Grace systems
² SBSA/SuperChips don’t support display output. Use HoloViz for headless rendering.
Supported x86_64 distributions:
OS | NGC Container | Debian/RPM package | Python wheel | Conda package | Build from source |
---|---|---|---|---|---|
Ubuntu 22.04 | Yes | Yes | Yes | Yes | Yes |
Ubuntu 24.04 | Yes | Yes | Yes | Yes | Yes |
RHEL 9.x | Yes | No | No | No | No¹ |
Other Linux distros | No² | No | No³ | No | No¹ |
¹ Not formally tested or supported, but expected to work if building bare metal with the adequate dependencies.
² Not formally tested or supported, but expected to work if supported by the NVIDIA container-toolkit.
³ Not formally tested or supported, but expected to work if the glibc version of the distribution is 2.35 or above.
NVIDIA discrete GPU (dGPU) Requirements:
- GPU Architecture: Ampere or newer (recommended for best performance)
- GPUDirect RDMA: Requires Quadro/NVIDIA RTX series
  - Tested with NVIDIA RTX A6000 and NVIDIA RTX 6000 Ada
- Drivers: NVIDIA dGPU drivers 535 or newer
  - x86 workstations: Tested with OpenRM drivers R550+
- CUDA Green Contexts: Requires drivers 560+ (optional feature)
Additional Prerequisites:
- RDMA Support: See Enabling RDMA guide
- Software Dependencies: Vary by installation method (see below)
- Additional Setup: See Additional Setup and Third-Party Hardware Setup
We provide multiple ways to install and run the Holoscan SDK:
Installation Methods
Install via NGC container:
CUDA 13 (x86_64, Jetson Thor, DGX Spark)
docker pull nvcr.io/nvidia/clara-holoscan/holoscan:v3.7.0-cuda13
CUDA 12 dGPU (x86_64, IGX Orin dGPU, Clara AGX dGPU, GH200)
docker pull nvcr.io/nvidia/clara-holoscan/holoscan:v3.7.0-cuda12-dgpu
CUDA 12 iGPU (Jetson Orin, IGX Orin iGPU, Clara AGX iGPU)
docker pull nvcr.io/nvidia/clara-holoscan/holoscan:v3.7.0-cuda12-igpu
See details and usage instructions on NGC.
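A minimal usage sketch follows (it assumes the NVIDIA Container Toolkit is configured and uses the CUDA 13 tag pulled above; the full set of recommended flags, including those for RDMA and display forwarding, is on the NGC page):
# Allow local containers to use the X server (only needed to render the examples)
xhost +local:docker
# Run the container with GPU access and host networking
docker run -it --rm --net host --gpus all \
  -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
  nvcr.io/nvidia/clara-holoscan/holoscan:v3.7.0-cuda13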
Install via APT package manager:
sudo apt update
CUDA 13
x86_64, GB200, DGX Spark
sudo apt install holoscan-cuda-13
Jetson Thor
sudo apt install holoscan
CUDA 12
x86_64, GH200
sudo apt install holoscan-cuda-12
IGX Orin, Jetson Orin
sudo apt install holoscan
Torch and ONNXRuntime backends require manual installation. Add the --install-suggests flag to install transitive dependencies, then see the support matrix below for installation links.
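For example, on a CUDA 12 x86_64 workstation (the package name comes from the list above; --install-suggests is a standard apt flag):
sudo apt update
sudo apt install --install-suggests holoscan-cuda-12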
Troubleshooting
Package not found: E: Unable to locate package holoscan
Platform-specific solutions:
IGX Orin:
Verify compute stack installation (configures L4T repository)
If still failing, use arm64-sbsa from the CUDA repository.
Jetson:
Verify JetPack installation (configures L4T repository)
If still failing, use aarch64-jetson from the CUDA repository.
GH200: Use arm64-sbsa from the CUDA repository.
x86_64: Use x86_64 from the CUDA repository.
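As an illustration of configuring the CUDA repository, the steps below enable the network repository on an x86_64 Ubuntu 22.04 host with the cuda-keyring package; adjust the distribution and architecture segments of the URL for arm64-sbsa or aarch64-jetson platforms, and check the CUDA download page for the current keyring version (the one shown here is an assumption):
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt install holoscan-cuda-12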
Missing CUDA libraries at runtime:
ImportError: libcudart.so.12: cannot open shared object file: No such file or directory
This occurs when multiple CUDA Toolkit versions are installed. To fix:
Find the library:
find /usr/local/cuda* -name libcudart.so.12
Select the correct version:
sudo update-alternatives --config cuda
If the library is not found, reinstall the CUDA Toolkit:
sudo apt update && sudo apt install -y cuda-toolkit-12-6
Missing CUDA headers at compile time:
the link interface contains: CUDA::nppidei but the target was not found. [...] fatal error: npp.h: No such file or directory
Same root cause as above (mixed CUDA versions). To fix:
Find the header:
find /usr/local/cuda-* -name npp.h
Follow the same update-alternatives steps above.
Missing TensorRT libraries at runtime:
Error: libnvinfer.so.8: cannot open shared object file: No such file or directory
Wrong TensorRT major version installed. Reinstall TensorRT 8:
sudo apt update && sudo apt install -y libnvinfer-bin="8.6.*"
Cannot import holoscan Python module:
ModuleNotFoundError: No module named 'holoscan'
Python support was removed from the Debian package in v3.0.0. Install the Python wheel instead.
Install via pip:
CUDA 13 (x86_64, Jetson Thor, DGX Spark)
pip install holoscan-cu13
CUDA 12 (x86_64, IGX Orin, Clara AGX, GH200, Jetson Orin)
pip install holoscan-cu12
See PyPI for details and troubleshooting.
x86_64 users: Ensure CUDA Toolkit is installed first.
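After installation, a quick import check confirms that the wheel and its CUDA dependencies resolve (a minimal smoke test; if __version__ is not exposed by your release, the bare import is still informative):
python3 -c "import holoscan; print(holoscan.__version__)"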
Install via conda:
conda install holoscan cuda-version=12.6 -c conda-forge
CUDA 12.x only - CUDA 13 support not yet available.
See conda-forge for details and troubleshooting.
Not sure what to choose?
The Holoscan container image on NGC is the safest way to ensure all the dependencies are present with the expected versions (including Torch and ONNX Runtime), and should work on most Linux distributions. It is the simplest way to run the embedded examples, while still allowing you to create your own C++ and Python Holoscan applications on top of it. These benefits come at a cost:
- a large image size due to the numerous (some of them optional) dependencies. If you need a lean runtime image, see the section below.
- the standard inconveniences that exist when using Docker, such as more complex run instructions for proper configuration.
If you are confident in your ability to manage dependencies in your host environment, the Holoscan Debian package should provide all the capabilities needed to use the Holoscan SDK, assuming you are on Ubuntu 22.04 or Ubuntu 24.04.
If you are not interested in the C++ API but just need to work in Python, you can use the Holoscan Python wheels on PyPI. While they are the easiest way to install the SDK, they might require the most work to set up your environment with extra dependencies based on your needs. Finally, they are only formally supported on Ubuntu 22.04 and Ubuntu 24.04, though they should support other Linux distributions with glibc 2.35 or above.
If you are developing in both C++ and Python, the Holoscan Conda package should provide all capabilities needed to use the Holoscan SDK.
 | NGC dev Container | Debian Package | Python Wheels |
---|---|---|---|
Runtime libraries | Included | Included | Included |
Python module | 3.12 | N/A | 3.10 to 3.13 |
C++ headers and CMake config | Included | Included | N/A |
Examples (+ source) | Included | Included | retrieve from GitHub |
Sample datasets | Included | retrieve from NGC | retrieve from NGC |
CUDA runtime¹ | Included | automatically installed² | require manual installation |
NPP support³ | Included | automatically installed² | require manual installation |
TensorRT support⁴ | Included | automatically installed² | require manual installation |
Vulkan support⁵ | Included | automatically installed² | require manual installation |
V4L2 support⁶ | Included | automatically installed² | require manual installation |
Torch support⁷ | Included | require manual installation⁸ | require manual installation⁸ |
ONNX Runtime support⁹ | Included | require manual installation¹⁰ | require manual installation¹⁰ |
ConnectX support¹¹ | User space included (install kernel drivers on the host) | require manual installation | require manual installation |
CLI support | require manual installation | require manual installation | require manual installation |
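For Python wheel users, the rows marked "require manual installation" above mean those libraries must already be present on the host. On Ubuntu 22.04/24.04, the display-related runtime pieces (Vulkan loader, EGL, V4L2, libjpeg) can typically be installed with apt; the package names below are the usual Ubuntu ones and are listed as an assumption, not an exhaustive dependency set:
sudo apt update
sudo apt install -y libvulkan1 libegl1 libv4l-0 libjpeg-turbo8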
Need more control over the SDK?
The Holoscan SDK source repository is open-source and provides reference implementations, as well as infrastructure, for building the SDK yourself.
We only recommend building the SDK from source if you need to build it with debug symbols or other options not used as part of the published packages. If you want to write your own operator or application, you can use the SDK as a dependency (and contribute to HoloHub). If you need to make other modifications to the SDK, file a feature or bug request.
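For reference, a containerized build from the source repository typically looks like the sketch below; the run helper script and its targets are documented in the repository itself, so treat this as an outline rather than the authoritative procedure:
git clone https://github.com/nvidia-holoscan/holoscan-sdk.git
cd holoscan-sdk
./run build   # containerized build helper; see the repository's build documentation for platform-specific options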
¹ CUDA 12 is required. Already installed on NVIDIA developer kits with IGX Software and JetPack.
² Debian installation on x86_64 requires the latest cuda-keyring package to automatically install all dependencies.
³ NPP 12 needed for the FormatConverter and BayerDemosaic operators. Already installed on NVIDIA developer kits with IGX Software and JetPack.
⁴ TensorRT 10.3+ needed for the Inference operator. Already installed on NVIDIA developer kits with IGX Software and JetPack.
⁵ Vulkan 1.3.204+ loader needed for the HoloViz operator (+ libegl1 for headless rendering). Already installed on NVIDIA developer kits with IGX Software and JetPack.
⁶ V4L2 1.22+ needed for the V4L2 operator. Already installed on NVIDIA developer kits with IGX Software and JetPack. V4L2 also requires libjpeg.
⁷ Torch support tested with LibTorch 2.8.0, OpenBLAS 0.3.20+ (aarch64 iGPU only), NVIDIA Performance Libraries (aarch64 dGPU only).
⁸ To install LibTorch on bare metal, either build it from source, install it from the Python wheel, or extract it from the Holoscan container (in /opt/libtorch/; see the sketch after these notes). See instructions in the Inference section.
⁹ Tested with ONNXRuntime 1.22.0. Note that ONNX models are also supported through the TensorRT backend of the Inference Operator.
¹⁰ To install ONNXRuntime on bare metal, either build it from source, download our pre-built package with CUDA 12 and TensorRT execution provider support, or extract it from the Holoscan container (in /opt/onnxruntime/; see the sketch after these notes).
¹¹ Tested with DOCA 3.0.0.
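As referenced in notes 8 and 10, one way to obtain LibTorch or ONNX Runtime on bare metal is to copy them out of the Holoscan container. A sketch using docker create/cp with the CUDA 12 dGPU tag from above (adjust the image tag and destination paths to your setup):
# Create a stopped container from the image and copy the bundled libraries out of it
docker create --name holoscan-tmp nvcr.io/nvidia/clara-holoscan/holoscan:v3.7.0-cuda12-dgpu
docker cp holoscan-tmp:/opt/libtorch ./libtorch
docker cp holoscan-tmp:/opt/onnxruntime ./onnxruntime
docker rm holoscan-tmp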