Lightning-AI / litgpt

Pretrain, finetune, deploy 20+ LLMs on your own data. Uses state-of-the-art techniques: flash attention, FSDP, 4-bit, LoRA, and more.

Home Page: https://lightning.ai


XLA Pod Request

opooladz opened this issue

Thank you for the repo.

I am wondering if a recipe for TPU pods could be added. I have access to a v4-32 and want to train a LLaMA model from scratch. Could the repo be extended for this use case?

Thanks

We have support for a limited set of scripts at https://github.com/Lightning-AI/litgpt/tree/main/xla. Give it a shot; it should work with a v4-32. Some of the info may be outdated.

I mostly see fine-tuning code in those scripts. There also seems to be a general lack of PyTorch/XLA examples that are full from-scratch pretraining solutions.
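For reference, the core of a from-scratch pretraining loop is the same on TPU as elsewhere; only the device handling and optimizer step change. Below is a minimal sketch, not litgpt's actual xla scripts: the tiny `nn.Sequential` model is a hypothetical stand-in for a LLaMA-style decoder (litgpt's `GPT` class would go there), the random batches stand in for a real dataloader, and the code runs on CPU. The comments note the `torch_xla` substitutions (`xm.xla_device()`, `xm.optimizer_step(optimizer)` from `torch_xla.core.xla_model`) you would make on a TPU pod.

```python
import torch
import torch.nn as nn

vocab_size, block_size, n_embd = 256, 32, 64

# Hypothetical stand-in for a LLaMA-style decoder; litgpt's GPT class
# (with flash attention, RoPE, etc.) would replace this in practice.
model = nn.Sequential(
    nn.Embedding(vocab_size, n_embd),
    nn.TransformerEncoderLayer(d_model=n_embd, nhead=4, batch_first=True),
    nn.Linear(n_embd, vocab_size),
)

device = torch.device("cpu")  # on TPU: device = xm.xla_device()
model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

model.train()
for step in range(10):
    # Random tokens stand in for a real pretraining dataloader;
    # next-token prediction shifts inputs and targets by one position.
    batch = torch.randint(0, vocab_size, (8, block_size + 1), device=device)
    inputs, targets = batch[:, :-1], batch[:, 1:]

    logits = model(inputs)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, vocab_size), targets.reshape(-1)
    )

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # on TPU: xm.optimizer_step(optimizer)

print(f"final loss: {loss.item():.3f}")
```

On a pod slice like a v4-32 you would additionally launch one process per host (e.g. via `torch_xla`'s multiprocessing launcher) and shard the dataloader across replicas; the per-step logic above stays the same.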