[BUG] PR #251 makes torch a requirement causing installs to fail
matroscoe opened this issue · comments
GQLAlchemy version: gqlalchemy >=1.13.1,<2.0.0
Environment
Python 3.11.1
Ubuntu 22.04
Running in Docker
Describe the bug PR #251 makes torch a hard requirement, so installing gqlalchemy pulls in the nvidia-cuda-* packages and fails on machines without a valid GPU.
To Reproduce Steps to reproduce the behavior:
- Have a pyproject.toml that requires gqlalchemy >=1.13.1,<2.0.0
- Install the project
Expected behavior
- The library continues to install and run as expected
- If torch functionality is desired, there should be a way to point to smaller binary files (if this is caused by an out-of-space issue)
Additional context The torch requirement is too strict: it forces the nvidia-cuda-* packages to install, and those fail when no valid GPU is available. There is a slight chance this is a disk-space issue, but if so, it is because the nvidia-cuda-* packages require more than 1.5 GB of space during install.
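As an interim workaround (assuming the failure really comes from the CUDA wheels and not from gqlalchemy itself), PyTorch publishes CPU-only wheels on a separate package index, which avoids the nvidia-cuda-* downloads entirely. Something along these lines should work:

```shell
# Install the CPU-only torch build first (no nvidia-cuda-* dependencies),
# then install gqlalchemy, which will find torch already satisfied.
pip install torch --index-url https://download.pytorch.org/whl/cpu
pip install "gqlalchemy>=1.13.1,<2.0.0"
```

The exact index URL is PyTorch's official CPU wheel index; the version pin mirrors the constraint above.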
@matroscoe Sorry for the trouble during install, and thanks for reporting the issue. We received a similar request about optional dependency installation in #227.
We will look into making the dependency optional, which is directly related to this.
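For illustration, optional dependency installation along the lines of #227 could be declared in pyproject.toml roughly like this (a sketch using Poetry syntax, since gqlalchemy is packaged with Poetry; the extra name and version bound are illustrative, not the project's actual configuration):

```toml
# Hypothetical sketch: mark torch as optional so the base install
# does not pull in the nvidia-cuda-* packages.
[tool.poetry.dependencies]
python = "^3.9"
torch = { version = "^2.0", optional = true }

[tool.poetry.extras]
torch = ["torch"]
```

Users who want the torch-backed features would then install with `pip install "gqlalchemy[torch]"`, while everyone else gets a GPU-free install.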