A curated collection of ChatGPT prompts to help you use ChatGPT more effectively.
Awesome-LLM: a curated list of Large Language Model resources.
A rule-based tunnel for Android.
[COLING 2022] CSL: A Large-scale Chinese Scientific Literature Dataset
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
Code for the paper "Language Models are Unsupervised Multitask Learners"
GPT-based autonomous agent that performs comprehensive online research on any given topic
⛓️ Langflow is a UI for LangChain, designed with react-flow to provide an effortless way to experiment and prototype flows.
LlamaIndex (formerly GPT Index) is a data framework for your LLM applications
Tuning LLMs with no tears💦, sharing LLM-tools with love❤️.
[LLMs Nine-Story Demon Tower] Shares hands-on practice and experience with LLMs in natural language processing (ChatGLM, Chinese-LLaMA-Alpaca, Vicuna, LLaMA, GPT4ALL, etc.), information retrieval (langchain), speech synthesis, speech recognition, and multimodal domains (Stable Diffusion, MiniGPT-4, VisualGLM-6B, Ziya-Visual, etc.).
🤖 Lobe Chat - an open-source, high-performance chatbot framework that supports speech synthesis, multimodal capabilities, and an extensible Function Call plugin system. Supports one-click free deployment of your private ChatGPT/LLM web application.
《Machine Learning Systems: Design and Implementation》- Chinese Version
🐙 Guides, papers, lectures, notebooks, and resources for prompt engineering
Tensors and Dynamic neural networks in Python with strong GPU acceleration
💬 Open source machine learning framework to automate text- and voice-based conversations: NLU, dialogue management, connect to Slack, Facebook, and more - Create chatbots and voice assistants
The RedPajama-Data repository contains code for preparing large datasets for training large language models.
Code and documentation to train Stanford's Alpaca models and generate the data.
Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Robust Speech Recognition via Large-Scale Weak Supervision