naimkatiman / Llama-3.1-8B-Free-Version

Set up Ollama locally with a QA component using the lightrag library on Windows.

Home Page: https://colab.research.google.com/drive/17N6J0QVsexJ68mt1n66M3dVR7Z29S3mf?usp=sharing

Sample output on Google Colab

Docker Containerization Screenshot

Ollama Setup

This project sets up the Ollama API server and integrates it with a simple Question-Answer (QA) component using the lightrag library.

Table of Contents

  • Introduction
  • Installation
  • Usage
  • Contributing
  • License

Introduction

The Ollama API server is configured to serve the llama3.1:8b model. This project demonstrates how to set up the server, pull the model, and use a QA component to generate responses based on user input.
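
For reference, the script in this project automates steps that can also be run by hand with the Ollama CLI once it is installed: start the server, then pull the model it serves.

    ollama serve
    ollama pull llama3.1:8b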

Installation

Prerequisites

  • Python 3.x
  • Git for Windows
  • pip package manager
  • Chocolatey (for installing Curl)

Steps

  1. Clone the Repository:

    git clone https://github.com/<username>/ollama-setup.git
    cd ollama-setup
  2. Install Required Packages (Windows):

    • Open Command Prompt or PowerShell as Administrator and run the following commands (run the bash line from Git Bash, which ships with Git for Windows):
    choco install curl
    curl -fsSL https://ollama.com/install.sh -o install.sh
    bash install.sh
    pip install -U "lightrag[ollama]"
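
To verify the installation, check that the Ollama server responds; it listens on port 11434 by default (start it with ollama serve if it is not already running). The /api/tags endpoint of the standard Ollama API lists any locally pulled models:

    curl http://localhost:11434/api/tags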

Usage

  1. Run the Script:

    • Open Command Prompt or PowerShell in the project directory and run:
    python ollama_setup.py
  2. Interact with the QA Component:

    The script initializes the Ollama API server, pulls the llama3.1:8b model, and sets up a simple QA component that generates responses based on user input (a sketch of such a component appears after this list).

  3. Example:

    When prompted, enter a question like:

    How were you born?
    

    The component will generate and display an answer.
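
The repository's ollama_setup.py is not reproduced here, but a QA component along the lines described in step 2 might look roughly like the sketch below. It assumes the Generator/OllamaClient interface from the lightrag quickstart; exact imports and signatures can differ between lightrag versions.

    from lightrag.core.component import Component
    from lightrag.core.generator import Generator
    from lightrag.components.model_client import OllamaClient

    # Prompt template; {{input_str}} is replaced with the user's question.
    qa_template = r"""<SYS>You are a helpful assistant.</SYS> User: {{input_str}} You:"""

    class SimpleQA(Component):
        def __init__(self):
            super().__init__()
            # Generator wraps the Ollama-served llama3.1:8b model behind the template.
            self.generator = Generator(
                model_client=OllamaClient(),
                model_kwargs={"model": "llama3.1:8b"},
                template=qa_template,
            )

        def call(self, query: str):
            # Fill the template with the question and return the model's answer.
            return self.generator.call(prompt_kwargs={"input_str": query})

    if __name__ == "__main__":
        qa = SimpleQA()
        question = input("Enter your question: ")
        print(qa.call(question))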

Contributing

Contributions are welcome! Please fork the repository and create a pull request with your changes.

License

This project is licensed under the MIT License. See the LICENSE file for details.

I added an extra step to the process: the environment is containerized with Docker, deployed behind Open WebUI, and exposed through ngrok so Ollama can be accessed from any device.
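
My exact configuration is not included in the repository, but a typical version of that setup looks roughly like the commands below. The port mapping and volume name follow the Open WebUI documentation and are illustrative rather than copied from my environment.

    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
    ngrok http 3000

Open WebUI talks to the local Ollama server (port 11434 by default), and the ngrok tunnel gives the web UI a public URL that other devices can reach.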

Docker Containerization Screenshot

NGROK Screenshot

User Interface on OpenWebui Screenshot
