There are 3 repositories under the llamaindex topic.
LlamaIndex is a data framework for your LLM applications
🚀 Introducing 🐪 CAMEL: a game-changing role-playing approach for LLMs and auto-agents like BabyAGI & AutoGPT! Watch two agents 🤝 collaborate and solve tasks together, unlocking endless possibilities in #ConversationalAI, 🎮 gaming, 📚 education, and more! 🔥
Open-source guides and code for mastering deep learning, from fundamentals to production deployment, in PyTorch, Python, Apptainer, and more.
Large Language Models (LLMs) tutorials & sample scripts, ft. langchain, openai, llamaindex, gpt, chromadb & pinecone
LLPhant - A comprehensive PHP Generative AI Framework using OpenAI GPT-4. Inspired by LangChain.
LangStream. Event-Driven Developer Platform for Building and Running LLM AI Apps. Powered by Kubernetes and Kafka.
Learn to build and deploy AI apps.
This repo is to showcase how you can run a model locally and offline, free of OpenAI dependencies.
✨ Local Zapier replacement written in Rust to make local AI do way more than chat
ChatGPT API Usage using LangChain, LlamaIndex, Guardrails, AutoGPT and more
Examples of RAG using Llamaindex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
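The RAG pattern these repos implement can be sketched in plain Python: retrieve the documents most similar to the query, then stuff them into the prompt sent to the LLM. This toy version fakes embeddings with word-count vectors; a real pipeline would use an embedding model and a local LLM (e.g. Gemma or Mistral 7B) through LlamaIndex. All names and values here are illustrative, not taken from the repo.

```python
# Toy retrieval-augmented generation (RAG) loop, reduced to plain Python.
# The embed() function is a hypothetical stand-in for a real embedding model.
from collections import Counter
import math
import re

def embed(text: str) -> Counter:
    # Bag-of-words "embedding": token counts, lowercase, punctuation stripped.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Retrieved context is stuffed into the prompt sent to the (local) LLM.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Mixtral 8x7B is a sparse mixture-of-experts model.",
    "PostgreSQL supports vector search via extensions.",
]
print(build_prompt("What kind of model is Mixtral 8x7B?", docs))
```

The only LlamaIndex-specific parts a real pipeline adds on top of this loop are the embedding model, the vector index, and the call to the local LLM.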
A collection of personally developed projects contributing towards the advancement of Artificial General Intelligence (AGI)
Timescale Vector Cookbook. A collection of recipes to build applications with LLMs using PostgreSQL and Timescale Vector.
The library for character-driven AI experiences.
Local llamaindex RAG to help researchers quickly navigate research papers
A comprehensive compendium of GPT actions, providing developers and AI enthusiasts with free and open-source integrations with leading Large Language Models.
An AI-powered equity research analyst demo using Large Language Models to analyze 10-K filings of renowned NYSE-listed companies.
LLM Chatbot w/ Retrieval Augmented Generation using Llamaindex
BigBertha is an architecture design that demonstrates how automated LLMOps (Large Language Models Operations) can be achieved on any Kubernetes cluster using open source container-native technologies 🌟
Designed for offline use, this RAG application template is based on Andrej Baranovskij's tutorials. It offers a starting point for building your own local RAG pipeline, independent of online APIs and cloud-based LLM services like OpenAI.
This repository contains code showing how to store and query your own data using OpenAI Embeddings and Supabase with JavaScript.
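The embed-store-query flow that repo describes (there with OpenAI Embeddings and a Supabase/pgvector table) can be sketched with an in-memory store so it runs offline. The class name, fixed two-dimensional vectors, and sample rows below are all hypothetical, chosen only to make the flow runnable.

```python
# Minimal in-memory sketch of the embed-store-query pattern.
# A real deployment would INSERT rows into a Supabase table with a
# pgvector column and rank with pgvector's cosine-distance operator.
import math

class VectorStore:
    def __init__(self) -> None:
        self.rows: list[tuple[str, list[float]]] = []

    def insert(self, text: str, embedding: list[float]) -> None:
        # Store the text alongside its embedding vector.
        self.rows.append((text, embedding))

    def query(self, embedding: list[float], top_k: int = 1) -> list[str]:
        # Rank stored rows by cosine similarity to the query vector.
        def cos(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb)
        ranked = sorted(self.rows, key=lambda r: cos(r[1], embedding), reverse=True)
        return [text for text, _ in ranked[:top_k]]

store = VectorStore()
store.insert("cats purr", [1.0, 0.0])   # embeddings hard-coded for the sketch
store.insert("dogs bark", [0.0, 1.0])
print(store.query([0.9, 0.1]))          # nearest to "cats purr"
```

In the real repo the embeddings come from the OpenAI Embeddings API and the similarity ranking happens inside Postgres rather than in application code.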
Overview and tutorials of the LlamaIndex Library
API to load and query documents using RAG
LLM Agent that performs sentiment analysis of drawings and natural language using a combination of Google Gemini Vision model and GPT-4 Turbo with LlamaIndex.
Knowledge-Sharing Hub using RAG Q&A techniques with LLMs (Llama2 and ChatGPT)
Build a RAG preprocessing pipeline
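One common step in a RAG preprocessing pipeline is splitting documents into overlapping chunks before embedding, so that retrieval returns passages small enough to fit a prompt while the overlap preserves context across chunk boundaries. The chunk size and overlap values below are illustrative defaults, not taken from the repo.

```python
# Sketch of document chunking for RAG preprocessing: fixed-size word
# windows with overlap between consecutive chunks.
def chunk(text: str, size: int = 20, overlap: int = 5) -> list[str]:
    words = text.split()
    step = size - overlap          # how far each window advances
    chunks: list[str] = []
    for start in range(0, len(words), step):
        piece = words[start:start + size]
        chunks.append(" ".join(piece))
        if start + size >= len(words):
            break                  # last window already covers the tail
    return chunks
```

A production pipeline would typically chunk on sentence or token boundaries rather than raw word counts, but the windowing logic is the same.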
Sync your team's data to your LLM applications in real-time
Examples of RAG using Llamaindex with local LLMs in Linux - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
SAIRA Project, Generative Artificial Intelligence, Fall 2023, Innopolis University