Description
Before I get to what I ran into, let me say that this is going to be an awesome framework! Congrats and thank you. I was able to install isaac_ros_common and an additional Docker container on my Linux workstation, found the process quite straightforward, and had no issues.
However, when I repeated the process on my AGX Orin (latest software), the base isaac_ros_common container does not seem to have access to the GPU/CUDA.
For example:
```
admin@agx-orin:/workspaces/isaac_ros-dev$ python3
Python 3.8.10 (default, Mar 15 2022, 12:22:08)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
1.12.0
>>> torch.cuda.is_available()
False
```
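In case it helps narrow things down, here is a slightly fuller report I can pull from inside the container (these are all standard PyTorch introspection calls; `torch.version.cuda` reports which CUDA toolkit the wheel was built against):

```bash
# Run inside the isaac_ros_common container; standard PyTorch introspection calls
python3 - <<'EOF'
import torch
print(torch.__version__)          # wheel version string
print(torch.version.cuda)         # CUDA toolkit the wheel was built against (None for CPU-only builds)
print(torch.cuda.is_available())  # can PyTorch see a CUDA device at runtime?
print(torch.cuda.device_count())  # number of visible GPUs
EOF
```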
I have another Docker container that was previously built using nvcr.io/nvidia/l4t-pytorch:r34.1.0-pth1.12-py3, and in it CUDA and the GPU are available. I didn't notice any significant difference between its run arguments and those used by run_dev.sh, which suggests that everything needed to use the GPU in a container is set up correctly:
```
Python 3.8.10 (default, Mar 15 2022, 12:22:08)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
1.12.0a0+2c916ef.nv22.3
>>> torch.cuda.is_available()
True
```
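To rule out a difference in how the two containers are launched, the runtime and GPU-related environment of each one can be dumped and compared with `docker inspect` (the container names below are placeholders for whatever `docker ps` shows):

```bash
# Placeholder names; substitute the actual container names from `docker ps`
for c in isaac_ros_dev-container l4t-pytorch-container; do
  echo "== $c =="
  docker inspect --format 'Runtime: {{.HostConfig.Runtime}}' "$c"
  docker inspect --format '{{json .Config.Env}}' "$c" | tr ',' '\n' | grep -iE 'nvidia|cuda' || true
done
```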
So I am at a loss to understand what the issue is. Here are the things I have checked:
- All software on the Orin is up to date
- I followed the setup instructions, so the NVIDIA Container Toolkit etc. are installed and at the correct versions (a quick way to double-check this is sketched below)
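In case it is useful, this is roughly how the runtime setup could be double-checked on the Orin (assuming the standard JetPack / nvidia-container-toolkit install; the last command reuses the known-good image mentioned above):

```bash
# Check that Docker is actually wired up to the NVIDIA runtime
cat /etc/docker/daemon.json                  # "nvidia" should be registered (ideally as "default-runtime")
docker info 2>/dev/null | grep -i runtime    # runtimes Docker actually exposes
dpkg -l | grep -i nvidia-container           # installed container toolkit/runtime versions

# Sanity check: CUDA should be visible in the known-good l4t-pytorch image
docker run --rm --runtime nvidia nvcr.io/nvidia/l4t-pytorch:r34.1.0-pth1.12-py3 \
  python3 -c "import torch; print(torch.cuda.is_available())"
```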
If there is any additional information I can provide, please let me know. Thanks for your help.
bb