Repositories under the phi-2 topic.
Sample application demonstrating intelligent apps built with Microsoft's Copilot stack for AI-infused product experiences.
Phi2-Chinese-0.2B: train your own small Chinese Phi-2 chat model from scratch; supports integration with langchain to load a local knowledge base for retrieval-augmented generation (RAG).
Examples of RAG using LlamaIndex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
Collection of basic prompt templates for various chat LLMs.
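As a minimal sketch of what such a prompt template looks like, the snippet below wraps a user message in Phi-2's "Instruct:/Output:" format as described on its model card; the helper name `build_prompt` is illustrative, not taken from any repository above.

```python
# Minimal prompt-template sketch, assuming Phi-2's "Instruct:/Output:"
# instruct format; build_prompt is a hypothetical helper name.
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the Phi-2 instruct template."""
    return f"Instruct: {user_message}\nOutput:"

prompt = build_prompt("Summarize retrieval-augmented generation in one sentence.")
print(prompt)
```

Other chat models use different wrappers (e.g. special tokens), which is why such collections keep one template per model family.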
Examples of RAG using LangChain with local LLMs - Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
Microsoft Phi-2 Streamlit app, deployed on Hugging Face Spaces, based on the Microsoft Phi-2 small language model (SLM) for text generation.
Test server code for the Phi-2 model; supports the OpenAI API spec.
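The shape of such a test server can be sketched with only the standard library: a single OpenAI-style `/v1/completions` endpoint backed by a stub that echoes the prompt. The endpoint path and response fields follow the OpenAI completions API; the stub behavior and handler name are illustrative, and a real server would call the Phi-2 model instead.

```python
# Sketch of an OpenAI-spec test endpoint, assuming a stub "model"
# that echoes the prompt; only the request/response shape is real.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class CompletionHandler(BaseHTTPRequestHandler):
    """Handles POST /v1/completions in the OpenAI request/response shape."""

    def do_POST(self):
        if self.path != "/v1/completions":
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        req = json.loads(self.rfile.read(length))
        # A real server would run Phi-2 inference here.
        body = json.dumps({
            "object": "text_completion",
            "model": req.get("model", "phi-2"),
            "choices": [{"text": "echo: " + req.get("prompt", "")}],
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Start on an ephemeral port, send one request, then shut down.
server = HTTPServer(("127.0.0.1", 0), CompletionHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
payload = json.dumps({"model": "phi-2", "prompt": "hello"}).encode()
req = urllib.request.Request(
    f"http://127.0.0.1:{port}/v1/completions", data=payload, method="POST")
reply = json.loads(urllib.request.urlopen(req).read())
print(reply["choices"][0]["text"])  # echo: hello
server.shutdown()
```

Keeping to the OpenAI request/response shape means any OpenAI-compatible client can be pointed at the test server unchanged.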
Build a conversational AI system that answers questions by retrieving the answers from a document.
Examples of RAG using LlamaIndex with local LLMs in Linux - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
Fine-tune Phi-2 for persona-grounded chat.
Simple LLM REST API using Rust, Warp, and Candle, dedicated to quantized versions of Phi-2 (default), Mistral, or Llama. Works on CPU or CUDA.
This repository contains a Python script for a Telegram bot that integrates with OpenAI's API or other compatible REST APIs (such as Jan https://jan.ai/). It's designed to provide an interactive AI experience through Telegram, using simple chat functionalities.
This repository contains the source code used for fine-tuning the phi-2 LLM with several techniques, such as DPO.
Co:Here Inference configurations
Fine-tune Microsoft Phi-2 with your own data.
Flask API for generating text with the Phi-2 model from Hugging Face Transformers.
Colab notebook for fine-tuning Microsoft's Phi-2 LLM to solve mathematical word problems using QLoRA.
This chatbot was created using Microsoft's 2.7-billion-parameter Phi-2 transformer LLM.