kuprel / min-dalle

min(DALL·E) is a fast, minimal port of DALL·E Mini to PyTorch

Add PyPI install

davidmezzetti opened this issue · comments

First off, great project, thank you for building this!

Have you considered building a package for PyPI? The models are already hosted on the Hugging Face Hub, and the huggingface_hub package can be used to programmatically download/cache them.

Another idea is that encoder.pt and decoder.pt could also be uploaded to the Hub as a new model, so they don't need to be converted from Flax at runtime for the Torch model.

One last thing to consider is changing prints to log statements, so they could be turned off if desired.

The general idea is that someone can run pip install and be up and running regardless of the runtime platform (Windows/Linux/macOS).

Thanks! I like your idea of uploading the .pt files to the HF Hub; doing that now. It will save a lot of time converting from Flax and get rid of a lot of ugly conversion code. I'll look into the PyPI install.

Sounds great!

I could see the default install_requires only needing torch and huggingface-hub, with optional dependencies for flax. That way, if someone just wants to use the torch models, they only need torch installed.
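One way to sketch that split in setup.py (the version and package layout below are placeholders, not the project's actual packaging):

```python
# setup.py sketch: torch-only by default, flax behind an optional extra.
# Flax support would then install with `pip install min-dalle[flax]`.
from setuptools import setup, find_packages

setup(
    name="min-dalle",
    version="0.0.1",              # placeholder version
    packages=find_packages(),
    install_requires=[
        "torch",                  # default: torch models only
        "huggingface-hub",        # programmatic model download/cache
    ],
    extras_require={
        "flax": ["flax", "jax"],  # only needed to load flax weights
    },
)
```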

The colab now runs purely in torch and downloads pre-converted pytorch parameters from the HF hub. Saves a lot of time.

That's great! Definitely seems simpler now.

If you wanted to programmatically download models instead of depending on curl, you could add the huggingface_hub library and do something like this:

```python
from huggingface_hub import hf_hub_url, cached_download

# build the Hub URL for a file in the repo, then download and cache it
config_file_url = hf_hub_url("kuprel/min-dalle", filename="vocab.json")
cached_download(config_file_url)
```

https://huggingface.co/docs/huggingface_hub/how-to-downstream

This would also open up the option to parameterize which model is loaded (defaulting to yours) and let someone load an alternative model.

Could also consider having two hf model projects, one for mini and one for mega.
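If mini and mega lived in separate Hub repos, a loader could pick the repo at call time. A minimal sketch of that (the "kuprel/min-dalle-mega" repo name is an assumption, not a confirmed repo; the URL pattern is the Hub's standard resolve path):

```python
# Sketch: resolve a parameter file URL for either the mini or mega
# variant, so callers can choose a model when loading.
def file_url(filename: str, is_mega: bool = False) -> str:
    # "kuprel/min-dalle-mega" is a hypothetical repo name
    repo = "kuprel/min-dalle-mega" if is_mega else "kuprel/min-dalle"
    # files on the Hub resolve under this standard URL pattern
    return f"https://huggingface.co/{repo}/resolve/main/{filename}"
```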

I added the package to PyPI, and now the entire setup process is pip install min-dalle. The model parameters are downloaded as needed using just the requests library. Thanks for the suggestions. I should probably also add a verbose flag to gate the print statements.
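The download-as-needed behavior with a verbose switch might look roughly like this (a sketch with placeholder names, using the standard-library urllib in place of requests to keep it dependency-free):

```python
import os
import urllib.request

# Sketch: fetch a parameter file only if it is not already cached
# locally, printing progress only when verbose is enabled.
def download_if_missing(url: str, path: str, verbose: bool = True) -> str:
    """Download url to path, skipping the download if already cached."""
    if not os.path.exists(path):
        if verbose:
            print(f"downloading {url} -> {path}")
        os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
        urllib.request.urlretrieve(url, path)
    elif verbose:
        print(f"using cached {path}")
    return path
```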

Awesome, thank you for doing this @kuprel

This is great! Thank you for doing this. I also see the verbose flag now. I'll close this issue.

I've used your work in an example notebook that generates an image from a webpage summary. You can see it here. Thanks again for creating this library!

https://colab.research.google.com/github/neuml/txtai/blob/master/examples/35_Pictures_are_worth_a_thousand_words.ipynb
https://dev.to/neuml/pictures-are-a-worth-a-thousand-words-144k

Wow that's really cool

Thanks. Good luck with this project, looking forward to seeing it grow!