hpcaitech/PaLM-colossalai
Scalable PaLM implementation in PyTorch
Stargazers: 191
Watchers: 13
Issues: 13
Forks: 28
hpcaitech/PaLM-colossalai Issues
torch.distributed.elastic.multiprocessing.errors.ChildFailedError (Updated a year ago)
ModuleNotFoundError: No module named 'torch._six' (Updated a year ago)
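The 'torch._six' error reported above typically comes from PyTorch 2.0 removing the private torch._six compatibility module, while older Colossal-AI/PaLM code still imports from it. A minimal sketch of the usual import-fallback workaround (an assumption about the cause, not a confirmed fix from this repo; the math.inf branch only exists so the sketch runs without torch installed):

```python
import math

# 'torch._six' was removed in PyTorch 2.0; symbols such as 'inf' now live
# on the top-level 'torch' module. A try/except import chain keeps code
# working on both sides of the change.
try:
    from torch._six import inf  # PyTorch < 2.0
except ImportError:
    try:
        from torch import inf   # PyTorch >= 2.0
    except ImportError:
        inf = math.inf          # torch absent; fallback so this sketch runs

print(inf)  # float('inf') in every branch
```

The same pattern applies to any other name that moved out of torch._six.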
Can I run this on one rtx 4070 ti? (Updated a year ago)
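Whether a model fits on a single 12 GB card like the 4070 Ti asked about above can be sanity-checked with a back-of-envelope estimate. A sketch under stated assumptions: fp16 weights (2 bytes per parameter), an illustrative 8B-parameter model size, and no accounting for activations, gradients, or optimizer state:

```python
def weight_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """GPU memory needed just to hold the weights (fp16 by default)."""
    return n_params * bytes_per_param / 1024**3

# Even a hypothetical 8B-parameter model exceeds 12 GB in fp16,
# before counting activations, gradients, or optimizer state.
print(round(weight_memory_gb(8e9), 1))  # 14.9
```

In practice training needs several times the weight footprint, which is why answers to such questions usually point to offloading or multi-GPU parallelism.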
There should be a version mismatch problem (Closed 2 years ago, 4 comments)
Fails with cannot import colo_set_process_memory_fraction in Docker (Updated 2 years ago)
Warning When Using Different HuggingFace Datasets (Closed 2 years ago, 2 comments)
Gemini badcase (Closed 2 years ago, 5 comments)
Gemini+2.5D badcase (Updated 2 years ago)
bash ./tools/download_token.py </PATH/TO/TOKENIZER/> (Closed 2 years ago)
[feature] add model checkpointing (Updated 2 years ago)
[feature] Add performance and scalability results (Updated 2 years ago)
GitHub encourages communication and ongoing review. (Closed 2 years ago, 2 comments)
Have you really reproduced PaLM or just joking? (Closed 2 years ago, 2 comments)