chavinlo / sda-node

OS Support note

78Alpha opened this issue

May need a note at the top stating that, as-is, this is Linux-only. The web component appears to rely on fcntl, which is unavailable on Windows.
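
For illustration, a minimal sketch (not sda-node's actual code) of why an unconditional fcntl import breaks on Windows, and what a guard might look like:

```python
import sys

# fcntl is a Unix-only module in the Python standard library; on Windows
# the import itself raises ModuleNotFoundError, so any entry point that
# imports it unconditionally cannot even start there.
if sys.platform != "win32":
    import fcntl  # file locking / fd flags on Linux and macOS
else:
    fcntl = None  # a Windows port would need msvcrt.locking or a
                  # cross-platform library such as portalocker instead
```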

It could be added that WSL2 may be an option, but I'm unsure whether the minimum would be Windows 10 WSL2 (which has a lower memory limit) or Windows 11 WSL2 (no limit).
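
For reference, the WSL2 memory ceiling can be raised explicitly on either Windows version with a `%UserProfile%\.wslconfig` file, which may make the Windows 10 vs. 11 distinction moot (values below are examples, not recommendations):

```ini
# %UserProfile%\.wslconfig -- applies to all WSL2 distros after `wsl --shutdown`
[wsl2]
memory=24GB   # explicit cap; the default is a fraction of host RAM
swap=8GB
```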

Related: someone got it to run on Windows: AUTOMATIC1111/stable-diffusion-webui#7345

I haven't tried running sda-node on Windows yet (lazy), but I've already run chaiNNer with TensorRT support on Windows:

1. Downloaded the archive (TensorRT-8.4.3.1.Windows10.x86_64.cuda-11.6.cudnn8.4.zip) from https://developer.nvidia.com/nvidia-tensorrt-download (registration required).
2. Extracted the archive to C:\TensorRT-8.4.3.1.
3. Added C:\TensorRT-8.4.3.1\lib;C:\TensorRT-8.4.3.1\bin to PATH (environment variables).
4. Launched chaiNNer and enabled TensorRT support. That's all (except that CUDA was already installed beforehand).
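
As a quick sanity check that the DLLs on PATH are actually picked up (this assumes the matching tensorrt Python wheel shipped inside the archive is installed; the version below is just the one from this post):

```python
import tensorrt as trt

print(trt.__version__)  # expect 8.4.3.1 for the archive above

# Creating a Builder forces the CUDA/cuDNN/TensorRT DLLs to load, so this
# fails fast if PATH is wrong or the CUDA install is missing.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
print("TensorRT initialized OK")
```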

https://github.com/chaiNNer-org/chaiNNer/search?q=TensorRT

Maybe this information will help you.

I have gotten enhancr to run with TRT as well. However, every implementation of Stable Diffusion with TRT I've tried has hit the same issue: the model loads, the script prints the model instead of converting it to ONNX, and then it quietly fails. It's a conversion failure rather than an inference failure.
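
For context, the step that fails is essentially an ONNX export of the UNet before any TRT engine building happens. A minimal sketch of that step (model name, shapes, and file name are illustrative, not taken from any of these projects):

```python
import torch
from diffusers import UNet2DConditionModel

# Load just the UNet from an SD 1.x checkpoint (example model id).
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
).eval()

# Dummy inputs with SD 1.x latent/text-embedding shapes.
sample = torch.randn(2, 4, 64, 64)                # latents
timestep = torch.tensor([981])                    # denoising step
encoder_hidden_states = torch.randn(2, 77, 768)   # CLIP text embeddings

# The export call the TRT pipelines wrap; a silent failure here means no
# .onnx file is produced and engine building never starts.
torch.onnx.export(
    unet,
    (sample, timestep, encoder_hidden_states),
    "unet.onnx",
    input_names=["sample", "timestep", "encoder_hidden_states"],
    output_names=["out_sample"],
    opset_version=17,
)
```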

VoltaML/voltaML-fast-stable-diffusion#26 is the issue I raised with VoltaML, but it happens with Nvidia's implementation as well. That one gets as far as loading a model already in TRT format, but then hits the Polygraphy error because their models are compiled for 40-series cards only.
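
That matches how TRT engines work in general: a serialized engine is tied to the GPU architecture and TensorRT version it was built with, and a mismatch surfaces at deserialization rather than at inference. A sketch of that check (engine file name is hypothetical):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

# Engines are compiled per GPU architecture and TRT version; loading an
# engine built for Ada (40-series) on another card fails right here.
with open("unet.plan", "rb") as f:  # hypothetical prebuilt engine
    engine = runtime.deserialize_cuda_engine(f.read())

if engine is None:
    print("engine is incompatible with this GPU / TensorRT build")
```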

Throwing in my experience: it was really easy to get this running. The only headaches were getting nvtx to compile and converting some models (since each converted model is specific to a GPU and TensorRT version).