Hi,
I've spent a greater part of the last ten days trying to get the Kosmos-2.5 model working on my Windows 11 PC, relevant specs below:
Intel Core i9 13900KF
Nvidia RTX 3090FE
32GB DDR5 5600MT/s (16x2)
Windows 11 - OS Build 22631.3737
Python 3.11.5 & PIP 24.1.1
CUDA 12.4
Flash-Attention-2 (v2.5.9.post1)
This proved ridiculously difficult despite following the elaborate (/s) steps mentioned in the Kosmos-2.5 repo, and I ran around in circles trying to fix it. Turns out this model is, at the moment, EXTREMELY temperamental about the software environment: Python v3.11 causes many, many issues, and one must stick to v3.10.x.
Devs, I REALLY wish you'd mentioned this in the Kosmos repo! Since PyTorch & FlashAttention2 have no issues with v3.11, I didn't think Kosmos would either, given it's not mentioned anywhere!
Turns out, sticking to the default v3.10.12 of WSL-Ubuntu works, but figuring this out was quite the journey. Sharing it below as well as all the steps that worked in case it may help someone facing the same issues.
Among the many errors I faced were the following (DO NOT TRY ANY OF THE RESOLUTIONS IN THIS SECTION; THEY'RE SHARED FOR REFERENCE ONLY. THE WORKING SOLUTION IS IN THE SECTION THAT FOLLOWS THIS ONE):
- Error:
ImportError: cannot import name 'II' from 'omegaconf'
Resolutions tried :
- Played around with OmegaConf & Hydra-core versions as per this thread
- A recent update to requirements.txt mitigates this by pinning a precise OmegaConf version
- Error:
ValueError: mutable default <class 'fairseq.dataclass.configs.CommonConfig'> for field common is not allowed: use default_factory
Resolutions tried:
- Error:
omegaconf.errors.ConfigAttributeError: Missing key seed full_key: common.seed object_type=dict
Resolution tried:
- Requested assistance from Claude 3.5 Sonnet; added 'seed': 42 to the init args of inference.py as per its advice
- Error:
ValueError: Default process group has not been initialized, please make sure to call init_process_group.
Resolution tried:
- Requested assistance from Claude 3.5 Sonnet, made the following changes:
a. To inference.py, added import torch.distributed as dist to imports and
if not dist.is_initialized():
    dist.init_process_group(backend='gloo', init_method='env://', rank=0, world_size=1)
torch.cuda.set_device(0)
to init() before use_cuda = True
b. To gpt.py, added the below to the build_model method of the GPTmodel class:
if hasattr(distributed_utils, 'get_data_parallel_rank'):
    args.ddp_rank = distributed_utils.get_data_parallel_rank()
else:
    args.ddp_rank = 0
c. Ran with environment variables:
$env:MASTER_ADDR = "localhost"
$env:MASTER_PORT = "12355"
...which then led to:
- Error:
RuntimeError: use_libuv was requested but PyTorch was build without libuv support
Resolution tried:
- Ran with environment variable:
$env:USE_LIBUV = "0"
- Error:
TypeError: embedding(): argument 'indices' (position 2) must be Tensor, not NoneType
This then led to a host of modifications to the .py files and to messes best forgotten. So anyway...
TURNS OUT THE ISSUE WAS THE PYTHON VERSION 3.11.x ALL ALONG! PLEASE STICK TO 3.10.x!
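To save anyone else the same ten days, a quick interpreter check before installing anything would have caught this immediately. This is my own addition, not something from the Kosmos-2.5 repo; check_python_version is a hypothetical helper name:

```python
import sys

def check_python_version(required=(3, 10)):
    """Return True if the running interpreter matches the required
    major.minor version (3.10.x worked for me; 3.11.x did not)."""
    return sys.version_info[:2] == required

# Warn early instead of debugging fairseq/omegaconf stack traces later.
if not check_python_version():
    print("Warning: Kosmos-2.5 currently appears to need Python 3.10.x")
```

Running this at the top of a setup script (or pasting it into the REPL) confirms the environment before any of the steps below.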
SHARING MY WORKING WINDOWS 11 WSL SETUP BELOW:
Make sure Nvidia GPU drivers & CUDA (I used v12.4) are installed in the host Windows 11 system
- Via PowerShell:
- Ensure you have WSL version 2 by running:
wsl -v
# or
wsl --status
Update WSL if it's not on version 2
- Install Ubuntu:
wsl --install -d Ubuntu-22.04
# after installation & setup completes:
wsl --set-default Ubuntu-22.04
- Now open a WSL terminal by typing wsl in the Start Menu or a Command Prompt, or by searching for Ubuntu in the Start Menu
- Install CUDA Toolkit v12.4.1:
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
sudo mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/12.4.1/local_installers/cuda-repo-wsl-ubuntu-12-4-local_12.4.1-1_amd64.deb
sudo dpkg -i cuda-repo-wsl-ubuntu-12-4-local_12.4.1-1_amd64.deb
sudo cp /var/cuda-repo-wsl-ubuntu-12-4-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-4
- Set NVCC PATH:
- Confirm symlink for cuda:
ls -l /usr/local/cuda
ls -l /etc/alternatives/cuda
- Update bashrc:
nano ~/.bashrc
# add this line to the end of bashrc:
export PATH=/usr/local/cuda/bin:$PATH
- Reload bashrc:
source ~/.bashrc
- Confirm CUDA installation:
nvcc -V
nvidia-smi
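If you want to confirm the toolkit version from a script rather than by eyeballing the output, the release number can be pulled out of `nvcc -V`. This is a sketch of my own (parse_nvcc_release and installed_cuda_release are hypothetical helpers, and the sample output format is what nvcc printed on my machine):

```python
import re
import subprocess

def parse_nvcc_release(nvcc_output):
    """Extract the 'release X.Y' toolkit version from `nvcc -V` output."""
    match = re.search(r"release (\d+\.\d+)", nvcc_output)
    return match.group(1) if match else None

def installed_cuda_release():
    """Run `nvcc -V` and return the toolkit release, or None if nvcc is missing."""
    try:
        out = subprocess.run(["nvcc", "-V"], capture_output=True, text=True).stdout
    except FileNotFoundError:
        return None  # likely the PATH export in ~/.bashrc hasn't been reloaded
    return parse_nvcc_release(out)
```

A None from installed_cuda_release() usually means the `source ~/.bashrc` step above was skipped.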
- Install flash-attention:
- Install PyTorch:
sudo apt install python3-pip
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu124
- Install dependencies:
pip install wheel==0.37.1
pip install ninja==1.11.1
pip install packaging==24.1
pip install numpy==1.22
pip install psutil==6.0.0
- git clone and cd into the repo:
git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention
- Install from repo:
pip install . --no-build-isolation
- Test flash-attention installation (example output: 2.5.9.post1):
python3
import flash_attn
print(flash_attn.__version__)
- Install Kosmos-2.5!
- PIP Requirements:
pip install tiktoken
pip install tqdm
pip install "omegaconf<=2.1.0"
pip install boto3
pip install iopath
pip install "fairscale==0.4"
pip install "scipy==1.10"
pip install triton
pip install git+https://github.com/facebookresearch/xformers.git@04de99bb28aa6de8d48fab3cdbbc9e3874c994b8
pip install git+https://github.com/Dod-o/kosmos2.5_tools.git@fairseq
pip install git+https://github.com/Dod-o/kosmos2.5_tools.git@infinibatch
pip install git+https://github.com/Dod-o/kosmos2.5_tools.git@torchscale
pip install git+https://github.com/Dod-o/kosmos2.5_tools.git@transformers
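Because so much here hinges on exact versions, it's worth sanity-checking what pip actually resolved before running anything. A sketch of my own using only the standard library (verify_pins is a hypothetical helper, and treating every pin as an exact match is a simplification, since omegaconf above is actually an upper bound):

```python
from importlib import metadata

# Versions to spot-check, mirroring the pins in the install commands above.
PINS = {"wheel": "0.37.1", "ninja": "1.11.1", "psutil": "6.0.0"}

def verify_pins(pins):
    """Return {package: (wanted, installed)} for anything missing or mismatched."""
    problems = {}
    for name, wanted in pins.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            installed = None  # package not installed at all
        if installed != wanted:
            problems[name] = (wanted, installed)
    return problems
```

An empty dict from verify_pins(PINS) means the environment matches; anything else tells you which install step to redo.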
- Clone Repo and Checkpoint:
git clone https://github.com/microsoft/unilm.git
cd unilm/kosmos-2.5/
wget https://huggingface.co/microsoft/kosmos-2.5/resolve/main/ckpt.pt
- Run OCR!
python3 inference.py --do_ocr --image assets/example/in.png --ckpt ckpt.pt
python3 inference.py --do_md --image assets/example/in.png --ckpt ckpt.pt
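If you want to process more than one image, wrapping those two invocations in a small driver keeps the flags consistent. This is my own convenience sketch (build_inference_cmd and run_batch are hypothetical helpers; the flags mirror the commands above):

```python
import subprocess

def build_inference_cmd(mode, image, ckpt="ckpt.pt"):
    """Build the inference.py command line; mode is 'ocr' or 'md'."""
    if mode not in ("ocr", "md"):
        raise ValueError(f"unknown mode: {mode}")
    return ["python3", "inference.py", f"--do_{mode}", "--image", image, "--ckpt", ckpt]

def run_batch(mode, images, ckpt="ckpt.pt"):
    """Run inference sequentially over several images, stopping on failure."""
    for image in images:
        subprocess.run(build_inference_cmd(mode, image, ckpt), check=True)
```

For example, run_batch("ocr", ["assets/example/in.png"]) reproduces the first command above.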
- (Optional) GUI for WSL - Very Helpful
sudo apt update
sudo apt upgrade
sudo apt install lxde
DISPLAY=:0 startlxde