ImportError: cannot import name 'GpuBindingManager' from 'onnxruntime.transformers.io_binding_helper'
nguyenthekhoig7 opened this issue · comments
Describe the issue
When running
python3 demo_txt2img.py --help
I encountered this error:
ImportError: cannot import name 'GpuBindingManager' from 'onnxruntime.transformers.io_binding_helper' (/home/user/anaconda3/envs/env_tmp_onnx_image/lib/python3.9/site-packages/onnxruntime/transformers/io_binding_helper.py)
I inspected the file python3.9/site-packages/onnxruntime/transformers/io_binding_helper.py and found that it contains no occurrence of 'GpuBindingManager'; a Ctrl+F search returns no results.
My system setup (conda environment):
- Python version: 3.9
- onnxruntime version: 1.17.3
I believe the import may be targeting the wrong file. Any help or suggestions are appreciated.
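As a quick diagnostic, you can check whether the installed module actually exposes the symbol before running the demo (a generic stdlib sketch; the has_symbol helper is mine, not part of onnxruntime):

```python
import importlib

def has_symbol(module_name, symbol):
    """Import module_name and report whether it defines symbol."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, symbol)

# Stdlib example; for this issue you would check:
# has_symbol("onnxruntime.transformers.io_binding_helper", "GpuBindingManager")
print(has_symbol("os.path", "join"))  # True on any Python install
```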
To reproduce
- Create env and clone
conda create -n env_onnx python==3.9
git clone https://github.com/microsoft/onnxruntime.git
- Install dependencies
cd onnxruntime/onnxruntime/python/tools/transformers/models/stable_diffusion/
python3 -m pip install -r requirements-cuda12.txt
python3 -m pip install --upgrade polygraphy onnx-graphsurgeon --extra-index-url https://pypi.ngc.nvidia.com
- Run the demo file
python3 demo_txt2img.py --help
At this step I received the error ModuleNotFoundError: No module named 'onnxruntime', so I installed it with
pip install -U onnxruntime
Then I re-ran
python3 demo_txt2img.py --help
and encountered the ImportError above.
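The steps above mix a pip-installed wheel with a freshly cloned repo, so the demo script (from main) can reference symbols the installed 1.17.3 wheel does not have. To confirm which wheel pip actually installed, here is a small stdlib sketch (the package names 'onnxruntime' and 'onnxruntime-gpu' are the usual pip names; checking both):

```python
from importlib import metadata

def installed_version(package):
    """Return the installed version of a pip package, or None if absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

# Either the CPU or the GPU wheel may be installed; check both.
for name in ("onnxruntime", "onnxruntime-gpu"):
    print(name, "->", installed_version(name))
```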
Urgency
The demo file is not runnable, and I have no idea how to fix this without insight into the project internals.
Platform
Linux
OS Version
Ubuntu 22.04
ONNX Runtime Installation
Pip Install
ONNX Runtime Version or Commit ID
1.17.3
ONNX Runtime API
Python
Architecture
X64
Execution Provider
CUDA
Execution Provider Library Version
CUDA 12.4
You can run git checkout rel-1.17.3 after git clone https://github.com/microsoft/onnxruntime.git to get the demo script matching the 1.17.3 release.
To run the demo from the main branch, please follow the documentation to run it in Docker (this requires building onnxruntime from source):
https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/transformers/models/stable_diffusion/README.md#run-demo-with-docker
Or wait for the 1.18 release, which will be ready in about two weeks.
Hi, thanks for your response.
I tried building from source by first cloning the repo; GpuBindingManager was present in the code after cloning.
Then I ran
./build.sh --build-wheel
following the installation instructions, but encountered
RuntimeError: CUDA error: out of memory
Can I build the wheel using CPU RAM instead, and how can I do that? If further information is needed, please let me know.
Again, I appreciate your prompt support.
Building the wheel uses CPU RAM. The error looks like it occurred during testing; you can skip the tests if you follow the steps I shared in the link above.
export CUDACXX=/usr/local/cuda-12.2/bin/nvcc
sh build.sh --config Release --build_shared_lib --parallel --use_cuda --cuda_version 12.2 \
--cuda_home /usr/local/cuda-12.2 --cudnn_home /usr/lib/x86_64-linux-gnu/ --build_wheel --skip_tests \
--cmake_extra_defines onnxruntime_BUILD_UNIT_TESTS=OFF \
--allow_running_as_root
python3 -m pip install build/Linux/Release/dist/onnxruntime_gpu-*.whl --force-reinstall
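Before launching a long build, it can also help to verify that CUDACXX actually points at an nvcc binary (a hypothetical pre-flight check, not part of the onnxruntime build scripts):

```python
import os
import shutil

def find_nvcc():
    """Prefer $CUDACXX if it names an executable file, else search PATH."""
    cudacxx = os.environ.get("CUDACXX")
    if cudacxx and os.path.isfile(cudacxx) and os.access(cudacxx, os.X_OK):
        return cudacxx
    return shutil.which("nvcc")

nvcc = find_nvcc()
print("nvcc:", nvcc if nvcc else "not found -- set CUDACXX or add CUDA's bin/ to PATH")
```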
It worked (after I changed 12.2 to my CUDA version and set some library paths in the environment).
Thank you very much @tianleiwu. Hope you have a good day!