There are 28 repositories under the reasoning topic.
Hierarchical Reasoning Model Official Release
A visual playground for agentic workflows: iterate on your agents 10x faster
From Chain-of-Thought prompting to OpenAI o1 and DeepSeek-R1 🍓
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI (Kunlun Inc.), specializing in vision-language reasoning.
Awesome Reasoning LLM Tutorial/Survey/Guide
A Survey of Reinforcement Learning for Large Reasoning Models
Official implementation for "Automatic Chain of Thought Prompting in Large Language Models" (stay tuned; more updates to come)
Implement a reasoning LLM in PyTorch from scratch, step by step
[Embodied-AI-Survey-2025] Paper List and Resource Repository for Embodied AI
A collection of research on knowledge graphs
GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning
拼好RAG: Integrates GraphRAG, LightRAG, and Neo4j-llm-graph-builder for knowledge graph construction and search; combines DeepSearch for private-domain RAG reasoning; includes a custom evaluation framework for GraphRAG.
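For orientation, here is a minimal sketch of the knowledge-graph RAG pattern this entry combines: retrieve facts from a Neo4j graph, then have an LLM reason over them. The graph schema, the Cypher query, and the `llm` helper are illustrative assumptions, not code or APIs from GraphRAG, LightRAG, or Neo4j-llm-graph-builder.

```python
from neo4j import GraphDatabase  # official Neo4j Python driver

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to any chat-model API."""
    raise NotImplementedError("plug in your preferred LLM client")

def kg_rag_answer(question: str, entity: str) -> str:
    # Connection details are placeholders; point them at your own instance.
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
    with driver.session() as session:
        # Pull 1-hop facts around the entity.
        # Assumed schema for illustration: (:Entity {name})-[r]->(:Entity).
        records = session.run(
            "MATCH (a:Entity {name: $name})-[r]->(b:Entity) "
            "RETURN a.name AS s, type(r) AS p, b.name AS o LIMIT 25",
            name=entity,
        )
        facts = [f"({rec['s']}, {rec['p']}, {rec['o']})" for rec in records]
    driver.close()
    context = "\n".join(facts) if facts else "(no facts found)"
    return llm(
        f"Facts from the knowledge graph:\n{context}\n\n"
        f"Question: {question}\nAnswer using only these facts."
    )
```

In the actual pipeline, graph construction is handled upstream by the integrated builders; this sketch only shows the query-and-reason step.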
[NeurIPS 2025] 🌐 WebThinker: Empowering Large Reasoning Models with Deep Research Capability
Latest Advances on System-2 Reasoning
WFGY 2.0. Semantic Reasoning Engine for LLMs (MIT). Fixes RAG/OCR drift, collapse & “ghost matches” via symbolic overlays + logic patches. Autoboot; OneLine & Flagship. ⭐ Star if you explore semantic RAG or hallucination mitigation.
Protege Desktop
Understanding R1-Zero-Like Training: A Critical Perspective
A central, open resource for data and tools related to chain-of-thought reasoning in large language models. Developed @ Samwald research group: https://samwald.info/
[ACL 2023] Reasoning with Language Model Prompting: A Survey
Learn about Machine Learning and Artificial Intelligence
Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey
AI tutor powered by Theory-of-Mind reasoning
Pretraining and inference code for a large-scale depth-recurrent language model
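As a rough illustration of the depth-recurrence idea named in the entry above (reusing one block's weights for a configurable number of iterations, so inference-time compute can grow without adding parameters), here is a toy PyTorch sketch; the layer sizes, class name, and recurrence count are assumptions for illustration, not the repository's architecture.

```python
import torch
import torch.nn as nn

class DepthRecurrentBlock(nn.Module):
    """Toy sketch: one transformer layer whose weights are reused for a
    variable number of depth iterations chosen at inference time."""
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor, recurrence: int = 4) -> torch.Tensor:
        # Reusing the same parameters at every step is what makes the depth
        # "recurrent": more iterations means more compute, same weights.
        for _ in range(recurrence):
            x = self.layer(x)
        return x

# Usage: a higher recurrence count spends more test-time compute on the same input.
h = DepthRecurrentBlock()(torch.randn(2, 16, 256), recurrence=8)
```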
LLMs can generate feedback on their own output, use that feedback to improve it, and repeat the process iteratively.
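The entry above describes a generate, critique, refine loop. A minimal sketch of that loop follows, assuming a generic `llm(prompt)` helper; the prompts, stopping rule, and function names are illustrative, not the repository's code.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to any chat-model API."""
    raise NotImplementedError("plug in your preferred LLM client")

def self_refine(task: str, max_iters: int = 3) -> str:
    # Initial attempt at the task.
    draft = llm(f"Solve the following task:\n{task}")
    for _ in range(max_iters):
        # Ask the model to critique its own answer.
        feedback = llm(
            f"Task:\n{task}\n\nDraft answer:\n{draft}\n\n"
            "Critique this answer. Reply with 'DONE' if no further changes are needed."
        )
        if "DONE" in feedback:
            break  # the model judges its own output as good enough
        # Revise the answer using the feedback.
        draft = llm(
            f"Task:\n{task}\n\nPrevious answer:\n{draft}\n\n"
            f"Feedback:\n{feedback}\n\nRewrite the answer, addressing the feedback."
        )
    return draft
```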
Neuro-Symbolic AI with Knowledge Graph | "True Reasoning" through data and logic 🌿🌱🐋🌍
✨✨Latest Papers and Benchmarks in Reasoning with Foundation Models