Litu Ou's repositories
evals
Evals is a framework for evaluating OpenAI models and an open-source registry of benchmarks.
Language: Python · License: MIT
scrolls_longt5_memory
The official code of "SCROLLS: Standardized CompaRison Over Long Language Sequences".
Language: Python · License: MIT
unlimiformer_test
Public repo for the preprint "Unlimiformer: Long-Range Transformers with Unlimited Length Input".
Language: Python · License: MIT