mlfoundations / open_lm

A repository for research on medium sized language models.

Use distributed when world_size=1 if requested

achalddave opened this issue · comments

Right now, we disable distributed functionality when using a single GPU: https://github.com/mlfoundations/open_lm/blob/main/open_lm/distributed.py#L20-L25. But if the `WORLD_SIZE` environment variable is set, we should behave as if we are in a distributed environment, which would let us run tests that exercise the distributed code paths without requiring multiple GPUs.
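A minimal sketch of the proposed check. The helper name `is_distributed_requested` is hypothetical (not the actual function in `distributed.py`); the idea is just that an exported `WORLD_SIZE` should force the distributed path even when only one process/GPU is present:

```python
import os


def is_distributed_requested(world_size: int) -> bool:
    """Decide whether to initialize distributed functionality.

    Hypothetical helper: with more than one process we are clearly
    distributed, but even when world_size == 1 we treat the run as
    distributed if the launcher (e.g. torchrun) exported WORLD_SIZE.
    This lets single-GPU tests exercise the distributed code paths.
    """
    if world_size > 1:
        return True
    return "WORLD_SIZE" in os.environ


# Example: a single-process run where the launcher set WORLD_SIZE=1
os.environ["WORLD_SIZE"] = "1"
print(is_distributed_requested(1))  # distributed path is taken
```

With this change, `torchrun --nproc_per_node=1` (which exports `WORLD_SIZE=1`) would still initialize the process group, while a plain `python` invocation without the variable would keep the current non-distributed behavior.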