There are 24 repositories under the chain-of-thought topic.
:sparkles::sparkles: Latest Advances on Multimodal Large Language Models
The official GitHub page for the survey paper "A Survey of Large Language Models".
A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 🍓 and reasoning techniques.
A high-performance LLM inference API and Chat UI that integrates DeepSeek R1's CoT reasoning traces with Anthropic Claude models.
Eko (Eko Keeps Operating) - Build Production-ready Agentic Workflows with Natural Language - eko.fellou.ai
The easiest tool for fine-tuning LLMs, generating synthetic data, and collaborating on datasets.
From Chain-of-Thought prompting to OpenAI o1 and DeepSeek-R1 🍓
A summary of Prompt & LLM papers, open-source data & models, and AIGC applications
A trend that started with "Chain of Thought Prompting Elicits Reasoning in Large Language Models" (see the prompt sketch after this list).
Official implementation for "Automatic Chain of Thought Prompting in Large Language Models" (stay tuned; more updates to come)
Awesome resources for in-context learning and prompt engineering: mastering LLMs such as ChatGPT, GPT-3, and FlanT5, with up-to-date and cutting-edge updates. - Professor Yu Liu
[ECCV 2024 Oral] DriveLM: Driving with Graph Visual Question Answering
[ACL 2023] Reasoning with Language Model Prompting: A Survey
Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey
BabyAGI: an Autonomous and Self-Improving agent, or BASI
Awesome-LLM-Robustness: a curated list of Uncertainty, Reliability and Robustness in Large Language Models
This repository contains a collection of papers and resources on Reasoning in Large Language Models.
ReasonFlux Series - ReasonFlux, ReasonFlux-PRM and ReasonFlux-Coder
[ACL 2024] A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future
PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to adversarial prompt attacks. 🏆 Best Paper Awards @ NeurIPS ML Safety Workshop 2022
[ACL 2024] An Easy-to-use Instruction Processing Framework for LLMs.
An awesome repository & a comprehensive survey on the interpretability of LLM attention heads.
Paper list for Efficient Reasoning.
Datasets for Instruction Tuning of Large Language Models
Repository for Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions, ACL23
🎁[ChatGPT4NLU] A Comparative Study on ChatGPT and Fine-tuned BERT
Building an open version of OpenAI o1 via reasoning traces (Groq, ollama, Anthropic, Gemini, OpenAI, Azure supported). Demo: https://huggingface.co/spaces/pseudotensor/open-strawberry
Explore concepts like Self-Correct, Self-Refine, Self-Improve, Self-Contradict, Self-Play, and Self-Knowledge, alongside o1-like reasoning elevation🍓 and hallucination alleviation🍄.
Video Chain of Thought, Codes for ICML 2024 paper: "Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition"
We introduce new zero-shot prompting magic words that improve the reasoning ability of language models: panel discussion!
Seed, Code, Harvest: Grow Your Own App with Tree of Thoughts!
Implementation of the ICML 2023 paper: "Specializing Smaller Language Models towards Multi-Step Reasoning".
Awesome deliberative prompting: How to ask LLMs to produce reliable reasoning and make reason-responsive decisions.
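For reference, the technique that most of the repositories above build on, introduced in "Chain of Thought Prompting Elicits Reasoning in Large Language Models", is a few-shot prompt whose exemplars spell out intermediate reasoning steps before the final answer. The sketch below is a minimal illustration, not code from any listed repository: it only assembles such a prompt as a string (no model call), and the `COT_EXEMPLARS` data and `build_cot_prompt` helper are hypothetical names with an illustrative exemplar and test question.

```python
# Minimal sketch of few-shot chain-of-thought prompt construction.
# The exemplar and test question below are illustrative; swap in your own.

COT_EXEMPLARS = [
    {
        "question": "Roger has 5 tennis balls. He buys 2 more cans of "
                    "tennis balls. Each can has 3 tennis balls. How many "
                    "tennis balls does he have now?",
        # The rationale is the "chain of thought": intermediate steps
        # written out before the final answer.
        "rationale": "Roger started with 5 balls. 2 cans of 3 balls each "
                     "is 6 balls. 5 + 6 = 11.",
        "answer": "11",
    },
]


def build_cot_prompt(question: str) -> str:
    """Prepend worked exemplars (with reasoning) to a new question."""
    parts = []
    for ex in COT_EXEMPLARS:
        parts.append(
            f"Q: {ex['question']}\n"
            f"A: {ex['rationale']} The answer is {ex['answer']}.\n"
        )
    # The trailing "A:" invites the model to continue with its own
    # step-by-step reasoning before stating an answer.
    parts.append(f"Q: {question}\nA:")
    return "\n".join(parts)


if __name__ == "__main__":
    print(build_cot_prompt(
        "A cafeteria had 23 apples. They used 20 to make lunch and "
        "bought 6 more. How many apples do they have?"
    ))
```

Sending the resulting string to any chat or completion model, instead of the bare question, is the basic pattern that the surveys and implementations listed above extend with automatic exemplar selection, multimodal inputs, tree search, and retrieval interleaving.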