Repositories under the llama-2 topic.
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device. New: Code Llama support!
Phase 2 of the Chinese LLaMA-2 & Alpaca-2 large-model project, plus 64K long-context models (Chinese LLaMA-2 & Alpaca-2 LLMs with 64K long-context models)
Run any Llama 2 model locally with a Gradio UI on GPU or CPU (Linux/Windows/Mac). Use `llama2-wrapper` as your local Llama 2 backend for generative agents and apps.
Phase 3 of the Chinese Llama large-model project (Chinese Llama-3 LLMs), developed from Meta Llama 3
Run Llama 2 and other open-source LLMs locally on CPU for document Q&A
[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support LLaMA, Llama-2, BLOOM, Vicuna, Baichuan, etc.
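A hedged sketch of the idea behind structural pruning, shown on a tiny two-layer MLP: whole hidden neurons (rows of `W1` and the matching columns of `W2`) are removed by an importance score. LLM-Pruner itself uses gradient-based importance on transformer coupled structures; the L2-norm score below is a simplified stand-in for illustration only.

```python
# Structural-pruning sketch: remove whole hidden neurons from a 2-layer MLP.
# Importance score is a simple L2 norm, NOT LLM-Pruner's gradient-based method.
import math

def prune_hidden_neurons(w1, w2, keep_ratio):
    """w1: hidden x in_dim rows (one per hidden neuron),
    w2: out_dim x hidden rows; keep_ratio: fraction of neurons kept."""
    # Importance proxy: L2 norm of each hidden neuron's incoming weights.
    scores = [math.sqrt(sum(x * x for x in row)) for row in w1]
    n_keep = max(1, int(len(w1) * keep_ratio))
    # Keep the n_keep highest-scoring neurons, preserving their original order.
    keep = sorted(sorted(range(len(w1)), key=lambda i: -scores[i])[:n_keep])
    pruned_w1 = [w1[i] for i in keep]                   # drop rows of W1
    pruned_w2 = [[row[i] for i in keep] for row in w2]  # drop matching cols of W2
    return pruned_w1, pruned_w2

w1 = [[1.0, 0.0], [0.1, 0.1], [0.0, 2.0], [3.0, 3.0]]  # 4 hidden neurons
w2 = [[1.0, 2.0, 3.0, 4.0]]                            # 1 output neuron
p1, p2 = prune_hidden_neurons(w1, w2, keep_ratio=0.5)
# The two highest-norm neurons (indices 2 and 3) survive.
```

Because entire neurons are removed, the pruned matrices stay dense and need no sparse kernels, which is the practical appeal of structural (versus unstructured) pruning.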
kani (カニ) is a highly hackable microframework for chat-based language models with tool use/function calling. (NLP-OSS @ EMNLP 2023)
Improves Llama-2's proficiency in comprehension, generation, and translation of Chinese.
Firefly Chinese LLaMA-2 large model; supports continued pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other large models
Examples of RAG using Llamaindex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
InsightSolver: Colab notebooks for exploring and solving operational issues using deep learning, machine learning, and related models.
[KO-Platy🥮] KO-platypus model: llama-2-ko fine-tuned on Korean-Open-platypus
Docker image for LLaVA: Large Language and Vision Assistant
LLM experiments done during SERI MATS - focusing on activation steering / interpreting activation spaces
📚 Local PDF-Integrated Chat Bot: Secure Conversations and Document Assistance with LLM-Powered Privacy
This package simplifies your interaction with various GPT models, removing the need for tokens or other methods to access GPT
Chat with Llama 2, with responses grounded in reference documents retrieved from a vector database. Runs a locally available model using GPTQ 4-bit quantization.
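A hedged sketch of what 4-bit weight quantization stores: each group of weights is reduced to 4-bit integers plus a per-group offset and scale. This uses plain round-to-nearest; GPTQ itself compensates quantization error using second-order information, which is not reproduced here.

```python
# 4-bit quantization sketch: round-to-nearest with a per-group (offset, scale).
# Illustrates the storage format only, NOT GPTQ's error-compensating algorithm.
def quantize_4bit(weights, group_size=4):
    """Quantize a flat list of floats to 4-bit ints (0..15) per group."""
    groups = []
    for i in range(0, len(weights), group_size):
        g = weights[i:i + group_size]
        lo, hi = min(g), max(g)
        scale = (hi - lo) / 15 or 1.0  # 15 = max 4-bit value; guard all-equal groups
        q = [round((w - lo) / scale) for w in g]
        groups.append((lo, scale, q))
    return groups

def dequantize(groups):
    out = []
    for lo, scale, q in groups:
        out.extend(lo + scale * v for v in q)
    return out

w = [0.1, -0.2, 0.3, 0.05, 1.0, 0.9, -1.0, 0.0]
restored = dequantize(quantize_4bit(w))
# Each restored weight lies within half a quantization step of the original.
```

The payoff is memory: 4 bits per weight plus small per-group metadata, versus 16 or 32 bits per weight, which is what makes 7B-class models fit on consumer hardware.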
Examples of RAG using LangChain with local LLMs - Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
Project Zephyrine: a plug-and-play local graphical user interface with GPU acceleration.
LLM Security Project with Llama Guard
Examples of RAG using Llamaindex with local LLMs in Linux - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
A course on best practices for prompting and building applications with Llama 2's openly licensed commercial models.
Kickstart with LLMs
This project streamlines the fine-tuning process, enabling you to leverage Llama-2's capabilities for your own projects.
Embark on the "Reinforcement Learning from Human Feedback" course and align Large Language Models (LLMs) with human values.
A Streamlit app for document question answering and text summarization.
Retrieval-Augmented Generation (RAG) to analyze and respond to queries about President Biden's 2023 State of the Union (SOTU) Address
Discusses four use cases (case studies), the approaches taken for each, and why they differ from typical LLM applications. Many LLM-based approaches are possible; this project focuses on the chosen approaches and their significance for execution.
A common setup for running an LLM locally: llama.cpp to quantize the model, LangChain to set up the model, prompts, and RAG, and Gradio for the UI.
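The RAG step in a setup like the one above can be sketched without any libraries: retrieve the most relevant chunk, then build a grounded prompt for the model. Here retrieval uses a simple bag-of-words cosine similarity as a stand-in for a real embedding index, and the prompt would be completed by the local llama.cpp-quantized model rather than any code shown here; the chunk texts are made-up examples.

```python
# Minimal RAG sketch: bag-of-words retrieval + prompt construction.
# A real setup would use embeddings and a vector store via LangChain.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, chunks, k=1):
    """Return the k chunks most similar to the question."""
    q = Counter(question.lower().split())
    ranked = sorted(chunks, key=lambda c: cosine(q, Counter(c.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(question, chunks):
    context = "\n".join(retrieve(question, chunks))
    return (f"Answer using only this context:\n{context}\n\n"
            f"Question: {question}\nAnswer:")

chunks = [
    "Llama 2 models come in 7B, 13B, and 70B parameter sizes.",
    "Gradio builds web UIs for machine learning demos.",
]
prompt = build_prompt("What parameter sizes does Llama 2 come in?", chunks)
# The prompt now contains only the relevant chunk; the local model completes it.
```

In the full pipeline, this prompt string is what gets passed to the quantized model, and Gradio simply wraps the question-in, answer-out loop in a chat UI.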