Ricardo's repositories
alpaca-lora
Instruct-tune LLaMA on consumer hardware
autocut
Cut videos with a text editor
Baichuan2
A series of large language models developed by Baichuan Intelligent Technology
ChatGLM-6B
ChatGLM-6B: An Open Bilingual Dialogue Language Model
chatglm.cpp
C++ implementation of ChatGLM-6B & ChatGLM2-6B
code-act
Official Repo for paper "Executable Code Actions Elicit Better LLM Agents" by Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji.
conf-paper-cnt
Count the most frequent authors and titles in one conference
EasyInstruct
An Easy-to-use Framework to Instruct Large Language Models.
FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and FastChat-T5.
InstructUIE
Universal information extraction with instruction learning
KQAPro_Baselines
PyTorch implementation of baseline models for KQA Pro, a large-scale dataset of complex question answering over knowledge bases.
MyMiniLLaMA
A toy LLaMA implementation, simplified from Hugging Face Transformers
nanoGPT
The simplest, fastest repository for training/finetuning medium-sized GPTs.
pik
Probing language models to evaluate their confidence and calibration.
rome
Locating and editing factual associations in GPT (NeurIPS 2022)
ScriptEventExtraction
Script event extraction via AMR parsing.
stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
videocr
Extract hardcoded subtitles from videos using machine learning
vLLM-qa-inference
Code for LLM inference on the QA task with in-context learning, based on vLLM.