MartinRGB / sd-scripts

This repository contains training, generation and utility scripts for Stable Diffusion.

Change History has been moved to the bottom of the page.

日本語版README (README in Japanese)

For easier use (GUI, PowerShell scripts, etc.), please visit the repository maintained by bmaltais. Thanks to @bmaltais!

This repository contains the scripts for:

  • DreamBooth training, including U-Net and Text Encoder
  • Fine-tuning (native training), including U-Net and Text Encoder
  • LoRA training (a sample launch command follows this list)
  • Textual Inversion training
  • Image generation
  • Model conversion (supports 1.x and 2.x, Stable Diffusion ckpt/safetensors and Diffusers)
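
For orientation, a LoRA training run is launched through accelerate. The following is a minimal sketch, not a recommended configuration: the model path, dataset folder, output folder, and hyperparameter values are placeholders (shown with PowerShell backtick line continuations):

accelerate launch --num_cpu_threads_per_process 1 train_network.py `
  --pretrained_model_name_or_path=model.safetensors `
  --train_data_dir=train_data `
  --output_dir=output `
  --resolution=512 --train_batch_size=1 `
  --network_module=networks.lora --network_dim=32 --network_alpha=16 `
  --learning_rate=1e-4 --max_train_steps=1000 `
  --mixed_precision=fp16 --save_model_as=safetensors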

Stable Diffusion web UI now seems to support LoRA models trained by sd-scripts (SD 1.x based models only). Thank you for the great work!

About requirements.txt

These files do not include the requirements for PyTorch, because the required version depends on your environment. Please install PyTorch first (see the installation guide below).

The scripts are tested with PyTorch 1.12.1 and 1.13.0, and Diffusers 0.10.2.

All documentation is currently in Japanese.

Windows Required Dependencies

Python 3.10.6 and Git:
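
You can check that both are available from a PowerShell prompt:

python --version
git --version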

Give unrestricted script access to PowerShell so venv can work:

  • Open an administrator PowerShell window
  • Type Set-ExecutionPolicy Unrestricted and answer A
  • Close the admin PowerShell window
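
For reference, the command entered in the admin window is:

Set-ExecutionPolicy Unrestricted

(Answering A selects "Yes to All" at the confirmation prompt.)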

Windows Installation

Open a regular Powershell terminal and type the following inside:

git clone https://github.com/kohya-ss/sd-scripts.git
cd sd-scripts

python -m venv venv
.\venv\Scripts\activate

pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
pip install --upgrade -r requirements.txt
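# prebuilt xformers wheel for Python 3.10 on Windows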
pip install -U -I --no-deps https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl

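# replace the bitsandbytes files with the Windows-compatible versions bundled in this repository (needed for the 8-bit optimizers)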
cp .\bitsandbytes_windows\*.dll .\venv\Lib\site-packages\bitsandbytes\
cp .\bitsandbytes_windows\cextension.py .\venv\Lib\site-packages\bitsandbytes\cextension.py
cp .\bitsandbytes_windows\main.py .\venv\Lib\site-packages\bitsandbytes\cuda_setup\main.py

accelerate config

Note: python -m venv venv seems to be safer than python -m venv --system-site-packages venv (some users have packages in their global Python).

Answers to accelerate config:

- This machine
- No distributed training
- NO
- NO
- NO
- all
- fp16

Note: some users report that ValueError: fp16 mixed precision requires a GPU occurs during training. In this case, answer 0 to the 6th question: What GPU(s) (by id) should be used for training on this machine as a comma-separated list? [all]:

(The single GPU with id 0 will be used.)

About PyTorch and xformers

Other versions of PyTorch and xformers seem to cause problems with training. If there is no particular reason otherwise, please install the specified versions.

Upgrade

When a new release comes out you can upgrade your repo with the following command:

cd sd-scripts
git pull
.\venv\Scripts\activate
pip install --use-pep517 --upgrade -r requirements.txt

Once the commands have completed successfully, you should be ready to use the new version.

Credits

The implementation of LoRA is based on cloneofsimo's repo. Thank you for the great work!

The LoRA expansion to Conv2d 3x3 was first released by cloneofsimo, and its effectiveness was demonstrated by KohakuBlueleaf in LoCon. Thank you so much, KohakuBlueleaf!
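
As a hedged illustration, this Conv2d 3x3 expansion can be enabled by adding --network_args to the LoRA training command shown earlier; the dimension and alpha values are placeholders, not recommendations:

--network_args "conv_dim=4" "conv_alpha=1"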

License

The majority of the scripts are licensed under ASL 2.0 (including code from Diffusers, cloneofsimo's repo, and LoCon); however, portions of the project are available under separate license terms:

Memory Efficient Attention Pytorch: MIT

bitsandbytes: MIT

BLIP: BSD-3-Clause

Change History

  • 1 Apr. 2023:
    • Fixed an issue where merge_lora.py did not work with the latest version.
    • Fixed an issue where merge_lora.py displayed no module found for LoRA weight: ... and did not merge Conv2d3x3 weights.
  • 31 Mar. 2023:
    • Fixed an issue where VRAM usage temporarily increased when loading a model in train_network.py.
    • Fixed an issue where an error occurred when loading a .safetensors model in train_network.py. #354
  • 30 Mar. 2023:
    • Support P+ training. Thank you jakaline-dev!
      • See #327 for details.
      • Use train_textual_inversion_XTI.py for training. The usage is almost the same as train_textual_inversion.py; however, sample image generation during training is not supported.
      • Use gen_img_diffusers.py for image generation (the Web UI is probably not supported). Specify the trained embedding with the --XTI_embeddings option.
    • Reduced RAM usage at startup in train_network.py. #332 Thank you guaneec!
    • Support pre-merging LoRA weights in gen_img_diffusers.py. Specify the --network_merge option (an example invocation follows this list). Note that the --am prompt option is no longer available when this option is used.
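
As an illustration of the pre-merge option mentioned above, an invocation might look like the following sketch; the model and LoRA file names are placeholders, and the other generation options are only examples:

python gen_img_diffusers.py --ckpt model.safetensors --outdir outputs `
  --xformers --fp16 --W 512 --H 512 --steps 28 --scale 7.5 --sampler euler_a `
  --network_module networks.lora --network_weights my_lora.safetensors --network_mul 0.8 `
  --network_merge --prompt "masterpiece, best quality, 1girl" --images_per_prompt 1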

Sample image generation during training

A prompt file might look like this, for example

# prompt 1
masterpiece, best quality, (1girl), in white shirts, upper body, looking at viewer, simple background --n low quality, worst quality, bad anatomy, bad composition, poor, low effort --w 768 --h 768 --d 1 --l 7.5 --s 28

# prompt 2
masterpiece, best quality, 1boy, in business suit, standing at street, looking back --n (low quality, worst quality), bad anatomy, bad composition, poor, low effort --w 576 --h 832 --d 2 --l 5.5 --s 40

Lines beginning with # are comments. Options for the generated image are specified after the prompt in the form of two hyphens plus a lowercase letter, such as --n. The following options can be used.

  • --n Negative prompt up to the next option.
  • --w Specifies the width of the generated image.
  • --h Specifies the height of the generated image.
  • --d Specifies the seed of the generated image.
  • --l Specifies the CFG scale of the generated image.
  • --s Specifies the number of steps in the generation.

Prompt weighting such as ( ) and [ ] also works.
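
To enable this during training, point the training script at a prompt file like the one above. A minimal sketch, assuming the file is named prompt.txt; add options such as these to the training command:

--sample_prompts=prompt.txt --sample_every_n_epochs=1 --sample_sampler=euler_a

(--sample_every_n_steps can be used instead of --sample_every_n_epochs.)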

Please read Releases for recent updates.
