wangjiajun0806's repositories
Babylon.js
Babylon.js is a powerful, beautiful, simple, and open game and rendering engine packed into a friendly JavaScript framework.
chatgpt-clone
ChatGPT interface with a better UI, running on free GPT APIs
chatgpt-on-wechat
WeChat chatbot based on ChatGPT, built on the OpenAI API (GPT-3.5/4.0) and the itchat library. It handles text, voice, and images, and can access the operating system and the internet.
agentkit
Starter kit for building constrained agents with Next.js, FastAPI, and LangChain
auto-j
Generative Judge for Evaluating Alignment
core
.NET news, announcements, release notes, and more!
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
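In practice, DeepSpeed's optimizations are switched on through a JSON configuration file passed to `deepspeed.initialize`. A minimal sketch enabling mixed precision and ZeRO stage-2 partitioning might look like the following (all values are illustrative, not recommendations):

```json
{
  "train_batch_size": 16,
  "gradient_accumulation_steps": 1,
  "fp16": { "enabled": true },
  "zero_optimization": { "stage": 2 }
}
```

Such a file is typically saved as `ds_config.json` and referenced when launching training, e.g. `deepspeed train.py --deepspeed_config ds_config.json`.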
EIPs
The Ethereum Improvement Proposal repository
HybridQA
Dataset and code for the EMNLP 2020 paper "HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data"
InfiniteBench
100k+ Long-Context Benchmark for Large Language Models (paper upcoming)
landmark-attention
Landmark Attention: Random-Access Infinite Context Length for Transformers
LEval
Data and code for L-Eval, a comprehensive evaluation benchmark for long-context language models
lm-evaluation-harness
A framework for few-shot evaluation of autoregressive language models.
mediamtx
Ready-to-use SRT / WebRTC / RTSP / RTMP / LL-HLS media server and media proxy that lets you read, publish, proxy, and record video and audio streams.
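As a sketch of how such a server is configured: mediamtx reads a `mediamtx.yml` file whose `paths` section declares stream endpoints. The path name `cam1` below is illustrative, and the exact recording keys may vary by version:

```yaml
# Minimal mediamtx.yml sketch (path name "cam1" is hypothetical).
# Publish with e.g. ffmpeg to rtsp://localhost:8554/cam1,
# then read the same stream back over RTSP, WebRTC, or HLS.
paths:
  cam1:
    # record incoming streams to disk as timestamped segments
    record: yes
    recordPath: ./recordings/%path/%Y-%m-%d_%H-%M-%S-%f
```

With no `paths` section at all, the server still accepts publishers and readers on any path with default settings.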
milvus
A cloud-native vector database: storage for next-generation AI applications
mixtral-offloading
Run Mixtral-8x7B models in Colab or on consumer desktops
PoSE
Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths
runc
CLI tool for spawning and running containers according to the OCI specification
sac3
SAC3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency
StyleTTS2
StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models
test
Measuring Massive Multitask Language Understanding | ICLR 2021
transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
UHGEval
Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation