SBlachuta's starred repositories

graphrag-local-ollama

Local model support for Microsoft's graphrag using Ollama (llama3, mistral, gemma2, phi3) - LLM & embedding extraction

Language: Python · License: MIT · Stargazers: 276 · Issues: 0

graphrag

A modular graph-based Retrieval-Augmented Generation (RAG) system

Language: Python · License: MIT · Stargazers: 13083 · Issues: 0

WilmerAI

A Python application that routes incoming prompts to an LLM by category. It can serve a single incoming connection from a front end across many backend connections to LLMs, allowing one AI assistant to be powered by many models.

Language: Python · License: GPL-3.0 · Stargazers: 51 · Issues: 0

SELFGOAL

Source code for our paper: "SelfGoal: Your Language Agents Already Know How to Achieve High-level Goals".

Language: Python · Stargazers: 56 · Issues: 0

LARS

An application for running LLMs locally on your device, with your documents, facilitating detailed citations in generated responses.

Language: Python · License: AGPL-3.0 · Stargazers: 372 · Issues: 0

sglang

SGLang is yet another fast serving framework for large language models and vision language models.

Language: Python · License: Apache-2.0 · Stargazers: 3248 · Issues: 0

ollama_proxy_server

A proxy server for multiple Ollama instances with API-key security

Language: Python · License: Apache-2.0 · Stargazers: 211 · Issues: 0

PraisonAI

PraisonAI combines AutoGen, CrewAI, and similar frameworks into a low-code solution for building and managing multi-agent LLM systems, focusing on simplicity, customisation, and efficient human-agent collaboration. Chat with your entire codebase.

Language: Python · License: MIT · Stargazers: 1843 · Issues: 0

open-webui

User-friendly WebUI for LLMs (Formerly Ollama WebUI)

Language: Svelte · License: MIT · Stargazers: 33099 · Issues: 0

ray

Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.

Language: Python · License: Apache-2.0 · Stargazers: 32253 · Issues: 0

promptflow

Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.

Language: Python · License: MIT · Stargazers: 8914 · Issues: 0

twinny

The most no-nonsense, locally or API-hosted AI code completion plugin for Visual Studio Code - like GitHub Copilot but completely free and 100% private.

Language: TypeScript · License: MIT · Stargazers: 2376 · Issues: 0

functionary

Chat language model that can use tools and interpret the results

Language: Python · License: MIT · Stargazers: 1247 · Issues: 0

server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.

Language: Python · License: BSD-3-Clause · Stargazers: 7827 · Issues: 0

aide

LLM shell and document interrogator

Language: Python · Stargazers: 13 · Issues: 0

SWE-agent

SWE-agent takes a GitHub issue and tries to automatically fix it, using GPT-4, or your LM of choice. It solves 12.47% of bugs in the SWE-bench evaluation set and takes just 1 minute to run.

Language: Python · License: MIT · Stargazers: 12130 · Issues: 0

pip-library-etl

This Python package simplifies generating documentation for functions and methods in designated modules or libraries. It can generate function calls from natural-language input or from existing signatures, and helps craft new ones through its integrated model. Beyond documentation, it also generates SQL.

Language: Python · License: MIT · Stargazers: 57 · Issues: 0

bllama

1.58-bit LLaMa model

Language: Python · License: MIT · Stargazers: 77 · Issues: 0

chatgpt-prompts-for-academic-writing

This list of writing prompts covers a range of topics and tasks, including brainstorming research ideas, improving language and style, conducting literature reviews, and developing research plans.

Stargazers: 2657 · Issues: 0

ollama-grid-search

A multi-platform desktop application for evaluating and comparing LLMs, written in Rust and React.

Language: TypeScript · License: MIT · Stargazers: 357 · Issues: 0

MetaGPT

🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming

Language: Python · License: MIT · Stargazers: 41925 · Issues: 0

generative-ai-workbook

Central repository for all LLM development

Language: Python · Stargazers: 167 · Issues: 0

gpt_academic

A practical interaction interface for GPT/GLM and other large language models, specially optimized for reading, polishing, and writing academic papers. Modular design with support for custom shortcut buttons and function plugins; analysis and self-translation of Python, C++, and other projects; PDF/LaTeX paper translation and summarization; parallel queries to multiple LLMs; and local models such as chatglm3. Integrates Tongyi Qianwen, deepseekcoder, iFlytek Spark, ERNIE Bot, llama2, rwkv, claude2, moss, and more.

Language: Python · License: GPL-3.0 · Stargazers: 62662 · Issues: 0

chatAI4R

chatAI4R: Chat-Based Interactive Artificial Intelligence for R

Language: HTML · Stargazers: 8 · Issues: 0

continue

⏩ Continue is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside VS Code and JetBrains

Language: TypeScript · License: Apache-2.0 · Stargazers: 13820 · Issues: 0

LocalAgents

These agents run on any local model: ask your question and simply specify the number of agents and experts that will answer it.

Language: Python · Stargazers: 17 · Issues: 0

GPTFast

Accelerate your Hugging Face Transformers 7.6-9x. Native to Hugging Face and PyTorch.

Language: Python · License: Apache-2.0 · Stargazers: 665 · Issues: 0

Awesome-Graph-LLM

A collection of AWESOME things about Graph-Related LLMs.

License: MIT · Stargazers: 1520 · Issues: 0

ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.

Language: Python · License: Apache-2.0 · Stargazers: 6321 · Issues: 0