ucalyptus2 / stable-diffusion-webgpu

A WebGPU port of Stable Diffusion using tinygrad

Stable Diffusion on tinygrad WebGPU

This is a WebGPU port of Stable Diffusion in tinygrad.
The Python code I wrote to compile and export the model can be found here.

How it works

The Stable Diffusion model is exported in three parts:

  • textModel
  • diffusor
  • decoder
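Roughly speaking, the page chains those three parts for every generation: the prompt is encoded once, the diffusor is run for a number of denoising steps, and the decoder turns the final latent into pixels. A minimal sketch of that flow is below; the function names and the tokenizer/latent helpers are illustrative, not the actual exports in net.js.

```js
// Illustrative three-stage pipeline; the real entry points and signatures
// live in net.js and may differ. tokenize() and randomLatent() are
// hypothetical helpers used only for this sketch.
async function generate(prompt, steps = 20) {
  // 1. textModel: turn the prompt tokens into a conditioning embedding.
  const context = await textModel(tokenize(prompt));

  // 2. diffusor: iteratively denoise a random latent, guided by the context.
  let latent = randomLatent(); // e.g. a 1x4x64x64 noise tensor
  for (let t = steps - 1; t >= 0; t--) {
    latent = await diffusor(latent, t, context);
  }

  // 3. decoder: turn the final latent into RGB pixels.
  return await decoder(latent);
}
```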

If you open net.js, you can see all the WebGPU kernels involved in inference.
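For orientation, each of those kernels is dispatched through the standard WebGPU compute path. The generic example below shows that pattern; the shader source and buffer layout are placeholders, not the actual kernels from net.js.

```js
// Generic WebGPU compute dispatch: compile a WGSL kernel, bind its buffers,
// and submit one dispatch. Purely illustrative of the API flow.
async function runKernel(device, wgslSource, buffers, workgroups) {
  const module = device.createShaderModule({ code: wgslSource });
  const pipeline = device.createComputePipeline({
    layout: 'auto',
    compute: { module, entryPoint: 'main' },
  });
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: buffers.map((buffer, i) => ({ binding: i, resource: { buffer } })),
  });
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(...workgroups);
  pass.end();
  device.queue.submit([encoder.finish()]);
}
```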
When you open the page for the first time, the model is downloaded from Hugging Face in tinygrad's safetensor format. The weights are stored in f16 to keep the download small.
Since the model computes in f32, and since shader-f16 is not yet supported in production Chrome, the weights are decompressed to f32 using f16_to_f32.js.
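The conversion itself is a bit-level expansion of each 16-bit half-precision value into a 32-bit float. A simplified sketch of that expansion (not the exact code in f16_to_f32.js, which may work in bulk over typed arrays):

```js
// Expand one IEEE 754 half-precision value (given as a uint16) to a number.
function f16ToF32(h) {
  const sign = (h & 0x8000) >> 15;
  const exp = (h & 0x7c00) >> 10;
  const frac = h & 0x03ff;

  if (exp === 0) {
    // Subnormal or zero: (-1)^sign * 2^-14 * (frac / 1024)
    return (sign ? -1 : 1) * Math.pow(2, -14) * (frac / 1024);
  }
  if (exp === 0x1f) {
    // Infinity or NaN.
    return frac ? NaN : (sign ? -Infinity : Infinity);
  }
  // Normal: (-1)^sign * 2^(exp - 15) * (1 + frac / 1024)
  return (sign ? -1 : 1) * Math.pow(2, exp - 15) * (1 + frac / 1024);
}
```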
On subsequent visits, the model is loaded from the IndexedDB cache where it was saved on the first visit. On a full cache hit, the model is decompressed, compiled, and ready to use. On a cache miss (usually because saving failed with a QuotaExceededError), the model is redownloaded.
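A rough sketch of that cache-then-download flow, assuming a single object store keyed by part name; the database and store names here are placeholders, not necessarily what the page uses.

```js
// Illustrative IndexedDB cache for model weights. On quota errors the
// weights are simply not cached, so the next visit downloads them again.
function openCache() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open('sd-weights', 1);
    req.onupgradeneeded = () => req.result.createObjectStore('parts');
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

async function loadPart(name, url) {
  const db = await openCache();

  // Try the cache first.
  const cached = await new Promise((resolve) => {
    const req = db.transaction('parts').objectStore('parts').get(name);
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => resolve(undefined);
  });
  if (cached) return cached; // cache hit

  // Cache miss: download, then try to save for next time.
  const bytes = new Uint8Array(await (await fetch(url)).arrayBuffer());
  try {
    await new Promise((resolve, reject) => {
      const tx = db.transaction('parts', 'readwrite');
      tx.objectStore('parts').put(bytes, name);
      tx.oncomplete = resolve;
      tx.onerror = () => reject(tx.error);
    });
  } catch (e) {
    // Typically QuotaExceededError: continue without caching.
    console.warn('could not cache', name, e);
  }
  return bytes;
}
```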

License

MIT
