Jiaang Li's starred repositories
Awesome-Interpretability-in-Large-Language-Models
This repository collects all relevant resources about interpretability in LLMs
transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Multimodal-AND-Large-Language-Models
A paper list about multimodal and large language models, used only to record papers I read from the daily arXiv feed for personal reference.
bib_cleaner
Bib Cleaner removes unnecessary entries from your bib files.
build-nanogpt
Video+code lecture on building nanoGPT from scratch
aclpubcheck
Tools for checking ACL paper submissions
annotated_deep_learning_paper_implementations
🧑🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans(cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
arxiv-latex-cleaner
arXiv LaTeX Cleaner: Easily clean the LaTeX code of your paper to submit to arXiv
LLM-Finetuning
LLM fine-tuning with PEFT
big_vision
Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more.
representation-engineering
Representation Engineering: A Top-Down Approach to AI Transparency
arxiv-submission-sanitizer-flattener
Simple Python scripts to clean up and flatten arXiv LaTeX submissions.
TransformerLens
A library for mechanistic interpretability of GPT-style language models
world-models
Extracting spatial and temporal world models from LLMs
llm-action
This project shares the technical principles behind large language models along with hands-on practical experience. (Description translated from Chinese.)
THINGS-data
THINGS-data: A multimodal collection of large-scale datasets for investigating object representations in brain and behavior
langchain-
A Chinese-language getting-started tutorial for LangChain. (Description translated from Chinese.)
Mr.-Ranedeer-AI-Tutor
A GPT-4 AI Tutor Prompt for customizable personalized learning experiences.
RecurrentGPT
Official Code for Paper: RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text
MathTranslate
Translate scientific papers written in LaTeX, especially arXiv papers.