Any plans to support multi-GPU inference and training?
Shawn-Shan opened this issue · comments
First, I really appreciate the effort on porting this to PyTorch (saving me a lot of time learning JAX). Are there any plans to support multi-GPU inference and fine-tuning? It would really be helpful given how much compute these models need. Thanks!
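For context, a common stopgap for single-node multi-GPU inference in stock PyTorch is `torch.nn.DataParallel`, which splits a batch across visible GPUs and gathers the outputs (for serious training, `DistributedDataParallel` is generally preferred). A minimal sketch, using a hypothetical toy model standing in for the ported one:

```python
import torch
import torch.nn as nn

# Hypothetical toy model standing in for the actual ported model.
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 4)

    def forward(self, x):
        return self.linear(x)

model = ToyModel()

# Wrap with DataParallel only when more than one GPU is visible;
# on a single GPU or CPU the plain module is used unchanged.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model).to("cuda")

batch = torch.randn(8, 16)
if torch.cuda.device_count() > 1:
    batch = batch.to("cuda")

with torch.no_grad():
    out = model(batch)

print(tuple(out.shape))
```

This only parallelizes over the batch dimension on one machine; model-parallel sharding of a model too large for one GPU is a separate problem.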