dsj96's repositories
PPR-master
An implementation of the POI recommendation model PPR.
alpaca-lora
Instruct-tune LLaMA on consumer hardware
AlpacaDataCleaned
Alpaca dataset from Stanford, cleaned and curated
bert_score
BERT score for text generation
Book2_Beauty-of-Data-Visualization
Book 2, "The Beauty of Visualization" | The Iris Book series: from arithmetic to machine learning. PDF drafts and Jupyter notebooks are being uploaded; the files will go through at least two more rounds of revision with substantial changes, so please download the latest version. Feedback is welcome, thank you.
camel
🐫 CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society (NeurIPS 2023) https://www.camel-ai.org
ChatGPT-Next-Web
A well-designed cross-platform ChatGPT UI (Web / PWA / Linux / Win / MacOS). Deploy your own cross-platform ChatGPT app with one click.
COMET
A Neural Framework for MT Evaluation
DPR
Dense Passage Retriever: a set of tools and models for open-domain Q&A tasks.
easy-rl
A Chinese-language reinforcement learning tutorial (the "Mushroom Book"); read online at https://datawhalechina.github.io/easy-rl/
GPT-4-LLM
Instruction Tuning with GPT-4
joeynmt
Minimalist NMT for educational purposes
llama2.c
Inference Llama 2 in one file of pure C
MEGABYTE-pytorch
Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch
MetaGPT
🌟 The Multi-Agent Framework: given a one-line requirement, returns a PRD, design, tasks, and repo
mt-bigscience
Evaluation results for Machine Translation within the BigScience project
neural-compressor
Intel® Neural Compressor (formerly the Intel® Low Precision Optimization Tool) provides unified APIs for network compression techniques, such as low-precision quantization, sparsity, pruning, and knowledge distillation, across different deep learning frameworks, targeting optimal inference performance.
NLP-progress
Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks.
Pareto-Mutual-Distillation
Implementation of Pareto-Mutual-Distillation (paper: Towards Higher Pareto Frontier in Multilingual Machine Translation)
peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
prize
A prize for finding tasks that cause large language models to show inverse scaling
Prompt-Engineering-Guide
🐙 Guides, papers, lectures, notebooks, and resources for prompt engineering
ReAct
[ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models
ReAgent
A platform for Reasoning systems (Reinforcement Learning, Contextual Bandits, etc.)
Rememberer
Rememberer & RLEM
SCM4LLMs
Self-Controlled Memory System for LLMs
trlx
A repo for distributed training of language models with Reinforcement Learning from Human Feedback (RLHF)
unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities