Yizhong Wang's repositories
self-instruct
Aligning pretrained language models with instruction data generated by themselves.
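The idea can be pictured as a bootstrapping loop: seed instructions prompt the model to write new ones, near-duplicates are filtered out, and survivors grow the pool. Below is a minimal sketch under stated assumptions: `query_model` is a hypothetical stub in place of a real LLM call, and a crude token-overlap filter stands in for the paper's ROUGE-L check.

```python
import random

def query_model(prompt: str) -> list[str]:
    # Hypothetical stand-in for an LLM API call; the real pipeline
    # prompts a strong model with in-context seed instructions.
    return ["Summarize the following paragraph in one sentence."]

def too_similar(new: str, pool: list[str], threshold: float = 0.7) -> bool:
    # Crude Jaccard token overlap; the paper filters with ROUGE-L instead.
    new_tokens = set(new.lower().split())
    return any(
        len(new_tokens & set(old.lower().split()))
        / max(len(new_tokens | set(old.lower().split())), 1) > threshold
        for old in pool
    )

pool = [
    "Translate the sentence into French.",
    "List three pros and cons of remote work.",
]

for _ in range(10):  # each round tries to grow the instruction pool
    seeds = random.sample(pool, k=min(2, len(pool)))
    prompt = "Write a new task instruction:\n" + "\n".join(seeds)
    for candidate in query_model(prompt):
        if not too_similar(candidate, pool):
            pool.append(candidate)

print(len(pool), "instructions after bootstrapping")
```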
Tk-Instruct
Tk-Instruct is a Transformer model fine-tuned to solve many NLP tasks by following instructions.
promptsource
Toolkit for collecting and applying prompts
stanford_alpaca
Code and documentation to train Stanford's Alpaca models and generate the data.
yizhongw.github.io
My Homepage.
cracking-the-data-science-interview
A collection of cheatsheets, books, questions, and portfolios for DS/ML interview prep
ds-cheatsheets
List of Data Science Cheatsheets to rule the world
transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
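For a quick sense of the library, the `pipeline` API wraps model download, tokenization, and inference in one call; this example assumes internet access to fetch the default checkpoint.

```python
from transformers import pipeline

# Downloads a default sentiment model on first run, then classifies text.
classifier = pipeline("sentiment-analysis")
print(classifier("Instruction tuning makes models far more helpful."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```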
YoudaoTranslator
An Alfred workflow for Youdao translation.
bi-att-flow
Bi-directional Attention Flow (BiDAF) is a multi-stage hierarchical network that represents the context at different levels of granularity and uses a bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization.
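A toy sketch of the attention-flow step, using the trilinear similarity form the BiDAF paper describes; the tensors and dimensions here are illustrative, not the repo's actual TensorFlow code.

```python
import torch

T, J, d = 5, 3, 4            # context length, query length, hidden size
H = torch.randn(T, d)        # context encodings h_t
U = torch.randn(J, d)        # query encodings u_j

# Trilinear similarity: S[t, j] = w^T [h_t; u_j; h_t * u_j]
w = torch.randn(3 * d)
S = torch.stack([
    torch.stack([w @ torch.cat([H[t], U[j], H[t] * U[j]]) for j in range(J)])
    for t in range(T)
])                            # shape (T, J)

# Context-to-query: each context word attends over all query words.
a = torch.softmax(S, dim=1)   # (T, J)
U_att = a @ U                 # (T, d) query-aware vectors, no early summary

# Query-to-context: weight context words by their best query match.
b = torch.softmax(S.max(dim=1).values, dim=0)  # (T,)
h_att = b @ H                 # (d,) attended context vector
```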
FActScore
A package to evaluate factuality of long-form generation. Original implementation of "FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation"
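The metric itself is simple once a generation is split into atomic facts: the score is the fraction of facts supported by the knowledge source. A toy sketch with a hypothetical `is_supported` stub standing in for the package's retrieval-plus-LM verifier:

```python
def is_supported(fact: str) -> bool:
    # Hypothetical verifier stub; the package checks each atomic fact
    # against a knowledge source (e.g. Wikipedia) with an LM judge.
    known = {
        "FActScore evaluates factual precision.",
        "Atomic facts are short declarative statements.",
    }
    return fact in known

atomic_facts = [
    "FActScore evaluates factual precision.",
    "Atomic facts are short declarative statements.",
    "FActScore was released in 1990.",        # unsupported on purpose
]

# FActScore = supported atomic facts / all atomic facts
score = sum(is_supported(f) for f in atomic_facts) / len(atomic_facts)
print(f"FActScore: {score:.2f}")              # 0.67 here
```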
helm
Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09110).
natural-instructions
Expanding natural instructions
peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
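As a flavor of the library, wrapping a model with a LoRA adapter takes a few lines; the model name and hyperparameters below are illustrative, not a recommended recipe.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small demo model
config = LoraConfig(
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling factor
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only a small fraction is trainable
```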
reward-bench
RewardBench: the first evaluation tool for reward models.
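RewardBench's headline number is pairwise accuracy: a reward model should score the chosen response above the rejected one. A minimal sketch with a hypothetical `reward` stub and toy triples:

```python
def reward(prompt: str, response: str) -> float:
    # Hypothetical reward-model stub; the benchmark runs real reward
    # models over curated (prompt, chosen, rejected) triples.
    return float(len(response.split()))

triples = [
    ("Explain DNS.", "DNS maps domain names to IP addresses.", "idk"),
    ("Name a prime.", "7 is a prime number.", "8 is prime."),
]

# Pairwise accuracy: fraction of pairs where chosen outscores rejected.
correct = sum(reward(p, c) > reward(p, r) for p, c, r in triples)
print(f"accuracy: {correct / len(triples):.2f}")
```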
wechat_articles_spider
A crawler for WeChat Official Account articles