Now with SDXL support.
- Ubuntu 22.04 LTS
- CUDA 11.8
- Python 3.10.12
- Torch 2.0.1
- xformers 0.0.22
- Jupyter Lab
- Automatic1111 Stable Diffusion Web UI 1.7.0
- Dreambooth extension 1.0.14
- ControlNet extension v1.1.441
- After Detailer extension v24.3.0
- Locon extension
- ReActor extension (replaces roop)
- Inpaint Anything extension
- Infinite Image Browsing extension
- CivitAI extension
- CivitAI Browser+ extension
- Kohya_ss v22.6.2
- ComfyUI
- ComfyUI Manager
- sd_xl_base_1.0.safetensors
- sd_xl_refiner_1.0.safetensors
- sdxl_vae.safetensors
- inswapper_128.onnx
- runpodctl
- OhMyRunPod
- RunPod File Uploader
- croc
- rclone
- Application Manager
This image is designed to work on RunPod. You can use my custom RunPod template to launch it on RunPod.
In order to cache the models, you will need at least 32GB of CPU/system memory (not VRAM) due to the large size of the models. If you have less than 32GB of system memory, you can comment out or remove the code in the Dockerfile that caches the models.
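As a quick sanity check before building, you can verify how much system memory the build host has. This is an illustrative helper, not part of the repo, and it assumes a Linux host with `/proc/meminfo`:

```shell
# Read total system memory (Linux only) and warn if it is below the
# ~32GB needed to cache the models during the build.
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
mem_gb=$((mem_kb / 1024 / 1024))
if [ "${mem_gb}" -lt 32 ]; then
    echo "Only ${mem_gb} GB of system memory: consider removing the model-caching steps from the Dockerfile."
else
    echo "${mem_gb} GB of system memory: model caching should work."
fi
```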
```bash
# Clone the repo
git clone https://github.com/ashleykleynhans/stable-diffusion-docker.git
cd stable-diffusion-docker

# Download the models
wget https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.safetensors
wget https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors
wget https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors
wget https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors
wget https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/resolve/main/sdxl_vae.safetensors

# Build and tag the image
docker build -t username/image-name:1.0.0 .

# Log in to Docker Hub
docker login

# Push the image to Docker Hub
docker push username/image-name:1.0.0
```
```bash
docker run -d \
  --gpus all \
  -v /workspace \
  -p 3000:3001 \
  -p 3010:3011 \
  -p 3020:3021 \
  -p 6006:6066 \
  -p 8000:8000 \
  -p 8888:8888 \
  -p 2999:2999 \
  -e JUPYTER_PASSWORD=Jup1t3R! \
  -e ENABLE_TENSORBOARD=1 \
  ashleykza/stable-diffusion-webui:latest
```
You can substitute the image name and tag with your own.
| Connect Port | Internal Port | Description                   |
|--------------|---------------|-------------------------------|
| 3000         | 3001          | A1111 Stable Diffusion Web UI |
| 3010         | 3011          | Kohya_ss                      |
| 3020         | 3021          | ComfyUI                       |
| 6006         | 6066          | Tensorboard                   |
| 8000         | 8000          | Application Manager           |
| 8888         | 8888          | Jupyter Lab                   |
| 2999         | 2999          | RunPod File Uploader          |
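Once the container is up, a simple loop over the connect ports in the table above can confirm which services are reachable. This is an illustrative check (it assumes `curl` is installed and the services are mapped to `localhost`):

```shell
# Probe each mapped connect port on localhost. --max-time keeps the loop
# fast when a service has not finished starting yet.
status=""
for port in 3000 3010 3020 6006 8000 8888 2999; do
    if curl -sf -o /dev/null --max-time 3 "http://localhost:${port}"; then
        status="${status}${port}:up "
    else
        status="${status}${port}:down "
    fi
done
echo "${status}"
```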
| Variable           | Description                                      | Default                                 |
|--------------------|--------------------------------------------------|-----------------------------------------|
| VENV_PATH          | Sets the path for the Python venv for the app    | /workspace/venvs/stable-diffusion-webui |
| DISABLE_AUTOLAUNCH | Disables the Web UIs from launching automatically | (not set)                               |
| ENABLE_TENSORBOARD | Enables Tensorboard on port 6006                 | 1 (enabled)                             |
Stable Diffusion Web UI, Kohya SS, and ComfyUI each create log files. You can tail these log files to view their output instead of killing the services.
| Application             | Log file                     |
|-------------------------|------------------------------|
| Stable Diffusion Web UI | /workspace/logs/webui.log    |
| Kohya SS                | /workspace/logs/kohya_ss.log |
| ComfyUI                 | /workspace/logs/comfyui.log  |
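To skim all three logs at once without interrupting the services, something like this sketch works (log paths are those from the table above):

```shell
# Print the last few error lines from each application log, if the log
# file has been created yet.
for log in /workspace/logs/webui.log /workspace/logs/kohya_ss.log /workspace/logs/comfyui.log; do
    if [ -f "${log}" ]; then
        echo "=== ${log} ==="
        grep -i "error" "${log}" | tail -n 5
    else
        echo "=== ${log} (not created yet) ==="
    fi
done
```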
Pull requests and issues on GitHub are welcome. Bug fixes and new features are encouraged.
You can contact me and get help with deploying your container to RunPod on the RunPod Discord Server below; my username is ashleyk.