Repositories under the local-llm-integration topic:
An LLM-driven recommendation system based on your Radarr and Sonarr libraries or watch history
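The core loop such a tool needs is small: read the library from Radarr's v3 API and hand the titles to a local model. A minimal sketch, assuming Radarr's default port, a placeholder API key, and Ollama serving `llama3.2`:

```python
# Minimal sketch: pull the Radarr library and ask a local Ollama model
# for recommendations. The Radarr URL, API key, and model name are
# placeholders, not values from this repo.
import requests

RADARR_URL = "http://localhost:7878"   # assumed default Radarr address
RADARR_KEY = "your-api-key"            # Settings > General in Radarr

movies = requests.get(
    f"{RADARR_URL}/api/v3/movie",
    headers={"X-Api-Key": RADARR_KEY},
    timeout=30,
).json()
titles = ", ".join(m["title"] for m in movies[:100])  # keep the prompt short

resp = requests.post(
    "http://localhost:11434/api/chat",  # Ollama's default chat endpoint
    json={
        "model": "llama3.2",
        "stream": False,
        "messages": [{
            "role": "user",
            "content": f"I own these movies: {titles}. "
                       "Suggest 5 similar ones I don't own.",
        }],
    },
    timeout=120,
)
print(resp.json()["message"]["content"])
```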
Run large language models like Qwen and LLaMA locally on Android for offline, private, real-time question answering and chat - powered by ONNX Runtime.
🚀 A powerful Flutter-based AI chat application that lets you run LLMs directly on your mobile device or connect to local model servers. Features offline model execution, Ollama/LM Studio integration, and a beautiful modern UI. Privacy-focused, cross-platform, and fully open source.
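For reference, the local-server side such an app talks to is typically Ollama's streaming chat endpoint, which returns newline-delimited JSON chunks. A sketch of that call in Python (the Flutter client makes the same HTTP request); the model name is an assumption:

```python
# Stream a chat reply from a local Ollama server, printing tokens as
# they arrive. Each line of the response body is one JSON chunk.
import json
import requests

with requests.post(
    "http://localhost:11434/api/chat",   # Ollama's default port
    json={
        "model": "llama3.2",             # assumed locally pulled model
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": True,
    },
    stream=True,
    timeout=120,
) as resp:
    for line in resp.iter_lines():
        if line:
            chunk = json.loads(line)
            print(chunk.get("message", {}).get("content", ""),
                  end="", flush=True)
```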
A framework for using local LLMs (Qwen2.5-Coder 7B) that are fine-tuned with RL to generate, debug, and optimize code solutions through iterative refinement.
A fully customizable, super lightweight, cross-platform GenAI-based personal assistant that can run locally on your private hardware!
🤖 An Intelligent Chatbot: Powered by a locally hosted Llama 3.2 LLM 🧠 (served via Ollama) and ChromaDB 🗂️, this chatbot offers semantic search 🔍, session-aware responses 🗨️, and an interactive Streamlit interface 🎨 for seamless user interaction. 🚀
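The pattern behind this kind of chatbot fits in a few lines: embed and search with ChromaDB, then answer with a local model. A minimal sketch assuming the `chromadb` and `ollama` Python packages; the model, collection name, and sample documents are placeholders:

```python
# Semantic search over a small in-memory collection, then a grounded
# answer from a local Ollama model.
import chromadb
import ollama

client = chromadb.Client()                      # in-memory Chroma instance
docs = client.create_collection("kb")
docs.add(
    ids=["1", "2"],
    documents=["Ollama serves LLMs locally.",
               "ChromaDB stores and searches embeddings."],
)

question = "How do I run a model locally?"
hits = docs.query(query_texts=[question], n_results=2)   # semantic search
context = "\n".join(hits["documents"][0])

reply = ollama.chat(
    model="llama3.2",                           # assumed local model
    messages=[{"role": "user",
               "content": f"Context:\n{context}\n\nQuestion: {question}"}],
)
print(reply["message"]["content"])
```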
🖼️ Python Image and 🎥 Video Generator using LLM providers and models — built in JetBrains PyCharm with AI help from Junie 🤖, Claude Code 💻, Codex CLI 📝, Gemini CLI 🌌, and others — FREE & Open-Source forever 🚀
An advanced, fully local, and GPU-accelerated RAG pipeline. Features a sophisticated LLM-based preprocessing engine, state-of-the-art Parent Document Retriever with RAG Fusion, and a modular, Hydra-configurable architecture. Built with LangChain, Ollama, and ChromaDB for 100% private, high-performance document Q&A.
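As a rough illustration of the Parent Document Retriever idea (small chunks are embedded for search, but the larger parent chunk is returned as context), here is a hedged sketch using the ~0.2-era LangChain layout; module paths shift between releases, and the embedding model is an assumption:

```python
# Index small child chunks for precise retrieval while returning their
# larger parent chunks for richer context.
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from langchain_text_splitters import RecursiveCharacterTextSplitter

vectorstore = Chroma(
    collection_name="children",
    embedding_function=OllamaEmbeddings(model="nomic-embed-text"),  # assumed
)
retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=InMemoryStore(),            # holds the full parent chunks
    child_splitter=RecursiveCharacterTextSplitter(chunk_size=200,
                                                  chunk_overlap=20),
    parent_splitter=RecursiveCharacterTextSplitter(chunk_size=1000,
                                                   chunk_overlap=100),
)
retriever.add_documents([Document(page_content="...your document text...")])
parents = retriever.invoke("your question")  # search children, return parents
```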
An autonomous AI agent for intelligently updating, maintaining, and curating a LightRAG knowledge base.
An AI-powered assistant to streamline knowledge management, member discovery, and content generation across Telegram and Twitter, while ensuring privacy with local LLM deployment.
PlantDeck is an offline herbal RAG that indexes your PDF books and monographs, extracts text/images with OCR, and answers questions with page-level citations using a local LLM via Ollama. Runs on your machine; no cloud. Field guide only; not medical advice.
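Page-level citations of this kind usually come from storing each chunk's source page in vector-store metadata and echoing it back with the answer. A sketch under assumed names and sample data:

```python
# Store page numbers as ChromaDB metadata, then surface them alongside
# the retrieved text so the model can cite pages.
import chromadb
import ollama

col = chromadb.Client().create_collection("monographs")
col.add(
    ids=["p12-0", "p47-0"],
    documents=["Chamomile: uses and preparations...",
               "Yarrow: identification notes..."],
    metadatas=[{"book": "Herbal A", "page": 12},
               {"book": "Herbal A", "page": 47}],
)

q = "How is chamomile prepared?"
hits = col.query(query_texts=[q], n_results=2)
context = "\n".join(
    f"[{m['book']} p.{m['page']}] {d}"
    for d, m in zip(hits["documents"][0], hits["metadatas"][0])
)
answer = ollama.chat(model="llama3.2", messages=[{   # assumed model
    "role": "user",
    "content": f"Answer with page citations.\n{context}\n\nQ: {q}",
}])
print(answer["message"]["content"])
```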
An AI-powered system for extracting cancer-related information from patient Electronic Health Record (EHR) notes
AI-powered Bash terminal built with Python, Tkinter, and tkterm that uses a local LLM through LM Studio for natural-language command generation; features whitelist/blacklist management and an intuitive interface.
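The generate-then-gate pattern this describes can be sketched against LM Studio's OpenAI-compatible server on its default port; the whitelist and model id below are illustrative, not taken from this repo:

```python
# Translate a natural-language request into a Bash command with a local
# model, then refuse anything not on an explicit whitelist.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
ALLOWED = {"ls", "pwd", "df", "du", "cat"}      # hypothetical whitelist

def nl_to_command(request: str) -> str:
    resp = client.chat.completions.create(
        model="local-model",   # placeholder; use the id LM Studio shows
        messages=[
            {"role": "system",
             "content": "Translate the request into one Bash command. "
                        "Reply with the command only."},
            {"role": "user", "content": request},
        ],
    )
    cmd = resp.choices[0].message.content.strip()
    if cmd.split()[0] not in ALLOWED:           # whitelist gate
        raise PermissionError(f"blocked: {cmd}")
    return cmd

print(nl_to_command("show free disk space"))
```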
**Ask CLI** is a command-line tool for interacting with a local LLM (Large Language Model) server. It allows you to send queries and receive concise command-line responses.
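A tool of this shape reduces to argparse plus one POST to any OpenAI-compatible local server (LM Studio, llama.cpp, and others expose this endpoint); a minimal sketch with a placeholder endpoint and model id:

```python
# Minimal "ask" CLI: send one query to a local LLM server and print a
# short answer.
import argparse
import requests

def main() -> None:
    ap = argparse.ArgumentParser(prog="ask")
    ap.add_argument("query", help="question to send to the local LLM")
    ap.add_argument("--url",
                    default="http://localhost:1234/v1/chat/completions")
    args = ap.parse_args()

    r = requests.post(args.url, json={
        "model": "local-model",                 # placeholder model id
        "messages": [
            {"role": "system", "content": "Answer in one short line."},
            {"role": "user", "content": args.query},
        ],
    }, timeout=120)
    print(r.json()["choices"][0]["message"]["content"].strip())

if __name__ == "__main__":
    main()
```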
AI-powered code and idea assistant for developers: local-first, doc-aware, and fully test-automated.
A modular AI assistant ecosystem with voice/text interfaces, RAG capabilities, command execution, and integrated applications for radio streaming, web browsing, and document editing.
Privacy-first local AI chat for VS Code that is also beautiful
A project to design, test, and optimize prompts for AI models to improve output quality and relevance.
A VS Code extension to run local LLMs for code assistance
SynthCerebrum is a fully offline, AI-powered assistant that reads and learns from all types of local files. Using advanced neural networks, embeddings, and RAG, it intelligently retrieves, synthesizes, and generates insights from your data, making your folder a brain-like knowledge hub.
UNOFFICIAL Simple LM Studio Web UI (Docker)
Local Retrieval-Augmented Generation (RAG) pipeline using LangChain and ChromaDB to query PDF files with LLMs.
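A compact sketch of that pipeline shape: load, split, embed into Chroma, and answer with a retrieval chain. Imports follow the ~0.2-era LangChain layout (they move between releases), and the file and model names are assumptions:

```python
# Basic local RAG over a PDF: chunk pages, embed into Chroma, and answer
# queries with a local Ollama model.
from langchain.chains import RetrievalQA
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma
from langchain_text_splitters import RecursiveCharacterTextSplitter

pages = PyPDFLoader("manual.pdf").load()             # one document per page
chunks = RecursiveCharacterTextSplitter(
    chunk_size=800, chunk_overlap=80).split_documents(pages)

store = Chroma.from_documents(
    chunks, OllamaEmbeddings(model="nomic-embed-text"))  # assumed embedder
qa = RetrievalQA.from_chain_type(
    llm=Ollama(model="llama3.2"),
    retriever=store.as_retriever(search_kwargs={"k": 4}),
)
print(qa.invoke({"query": "What does chapter 2 cover?"})["result"])
```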