lucidrains / DALLE-pytorch

Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch


faster inference

rom1504 opened this issue · comments

Using caching, @borzunov implemented 10x faster generation at https://github.com/learning-at-home/dalle-pytorch/pull/3/files

I think this could be useful.
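For context, the speedup comes from key/value caching during autoregressive decoding: without a cache, every generation step re-embeds and re-attends over the entire prefix, while with a cache each step only processes the newest token. Below is a minimal toy sketch of that idea in plain numpy; the function and variable names (`attend`, `generate_no_cache`, `generate_with_cache`, the toy embedding table) are illustrative, not the actual DALLE-pytorch API or the PR's code.

```python
import numpy as np

# toy deterministic embedding table for token ids 0..9 (illustrative only)
rng = np.random.default_rng(0)
table = rng.normal(size=(10, 4))
embed = lambda t: table[t]

def attend(q, keys, values):
    # single-query softmax attention over all keys/values seen so far
    scores = keys @ q                       # (t,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values                 # (d,)

def generate_no_cache(tokens, steps):
    # recomputes embeddings for the whole prefix at every step: O(n^2) work
    out = list(tokens)
    for _ in range(steps):
        xs = np.stack([embed(t) for t in out])  # full prefix, every time
        y = attend(xs[-1], xs, xs)
        out.append(int(np.argmax(y)) % 10)      # toy greedy "sampling"
    return out

def generate_with_cache(tokens, steps):
    # keeps computed embeddings in a cache; each step embeds only one token
    cache = [embed(t) for t in tokens]
    out = list(tokens)
    for _ in range(steps):
        xs = np.stack(cache)
        y = attend(xs[-1], xs, xs)
        nxt = int(np.argmax(y)) % 10
        out.append(nxt)
        cache.append(embed(nxt))                # only the new token's work
    return out
```

Both functions produce identical outputs; caching only removes redundant recomputation, which is why it can speed up generation without changing results. The real PR applies the same idea to the transformer's attention keys/values.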

Oh nice! Yeah, I can get this done for both dalle and nuwa in one go when I get a free stretch of time.

Hey! I'll make a PR to this repo with a finished version of this code today or tomorrow :)

🙏 🙏

I think the code is ready, see #409. That git branch is based on the branch from #408 (however, there's no direct dependency, so one can use cached inference without merging the weight-sharing code).

@borzunov merged! thank you for this amazing contribution!