distardao / rasa-demo



Logo

Sara

This document describes in detail how to install and use the Sara bot
Document details »

Bug reports · Feature requests


Introduction

The purpose of this repo is to showcase a contextual AI assistant built with the open source Rasa framework.

Sara is an alpha version and lives in our docs, helping developers get started with our open source tools. It supports the following user goals:

  • Understanding the Rasa framework
  • Getting started with Rasa
  • Answering some FAQs around Rasa
  • Directing technical questions to specific documentation
  • Subscribing to the Rasa newsletter
  • Requesting a call with Rasa's sales team
  • Handling basic chitchat

(Back to top)

Technology

This bot is built on the following technologies:

(Back to top)

Getting started

Requirements

Before you start, make sure your environment meets the following requirements:

  • Python (>= 3.8.10 - tested)
  • Docker (>= 20.10.17 - tested)
  • Virtualenv, for the development process (>= 20.13.1 - tested)
  • Ngrok (>= 3.0.0 - tested)

Here are the setup steps:

  • Install Python

  • Install Virtualenv

    sudo apt install python3-pip # if you don't have pip
    sudo apt install virtualenv
  • Install Docker engine

  • Install ngrok

    snap install ngrok
    ngrok config add-authtoken <token> # You can get this token from your ngrok account https://dashboard.ngrok.com/get-started/setup
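
After installing, it can help to confirm the installed tools meet the minimums listed above. A minimal sketch using `sort -V` (version-aware sort from GNU coreutils); the version string checked here is the tested Python minimum from the requirements list:

```shell
# Check that an installed version meets a required minimum using
# sort -V (version-aware sort from GNU coreutils).
meets_min() {
    installed=$1; required=$2
    # The minimum must sort first (or be equal) for the check to pass.
    [ "$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)" = "$required" ]
}

# Example: compare the local Python version against the tested minimum.
py_version=$(python3 -c 'import platform; print(platform.python_version())')
if meets_min "$py_version" 3.8.10; then
    echo "Python $py_version OK (>= 3.8.10)"
else
    echo "Python $py_version is older than 3.8.10"
fi
```

The same helper works for the Docker, Virtualenv, and ngrok minimums by substituting the version string reported by each tool.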

(Back to top)

Installation

Install from repo

  1. Clone this repo

    git clone https://github.com/your_username_/rasa-demo # you can clone from your forked repo or main repo
  2. Navigate to the main directory

  3. Create a new virtual environment (inside the project directory)

    virtualenv ./.venv # The name of virtual environment should be ".venv" to prevent pushing this directory to git server
    source .venv/bin/activate # Activate this environment
  4. Install all required packages

    pip install -r requirements.txt
  5. Create a new "credentials.yml" file and add the following data:

    telegram:
      access_token: "<your telegram bot access token>"
      verify: "<your telegram bot name>"
      webhook_url: "<ngrok url>/webhooks/telegram/webhook" # You can get this url after running ngrok service, example: https://702c-85-203-21-21.ap.ngrok.io
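
The `<ngrok url>` placeholder comes from ngrok's local inspection API, which serves JSON on port 4040 while a tunnel is running. A sketch of extracting the public URL from that JSON, demonstrated here against a sample payload rather than a live tunnel:

```shell
# While ngrok is running, its inspection API returns JSON containing the
# public URL. A live query would be:
#   curl -s http://127.0.0.1:4040/api/tunnels
# Below, the extraction is shown against a sample of that JSON.
sample='{"tunnels":[{"public_url":"https://702c-85-203-21-21.ap.ngrok.io","proto":"https"}]}'
url=$(printf '%s' "$sample" \
    | python3 -c 'import sys, json; print(json.load(sys.stdin)["tunnels"][0]["public_url"])')
echo "webhook_url: ${url}/webhooks/telegram/webhook"
```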

(Back to top)

Install from docker

  1. Create new network

    sudo docker network create rasa-bot-demo
  2. Create a new "credentials.yml" file and add the following data:

    telegram:
      access_token: "<your telegram bot access token>"
      verify: "<your telegram bot name>"
      webhook_url: "<ngrok url>/webhooks/telegram/webhook" # You can get this url after running ngrok service, example: https://702c-85-203-21-21.ap.ngrok.io
  3. Build rasa action server image:

    sudo docker build -t rasa-demo-action-server -f Dockerfile.rasaaction .

(Back to top)

Project structure

The main files and directories in this project:

  • actions: Folder containing the action logic; here you can define reactions such as calling an API, or querying and returning data from a database.
  • config.yml: This file defines the components and policies that your model will use to make predictions based on user input.
  • domain.yml: This file specifies the intents, entities, slots, responses, forms, and actions your bot should know about. It also defines a configuration for conversation sessions.
  • credentials.yml: This file defines the credentials used when interacting with external systems.
  • endpoints.yml: This file defines all endpoints the main server should know about, such as the action server.
  • Dockerfile.rasaaction: This Dockerfile defines the environment that the action server runs in.
  • requirements.txt: This file lists the packages needed to run the project.

(Back to top)

Usage

Training

Train models from source

(*) First, activate your virtual environment

source .venv/bin/activate

For training, run this command:

rasa train
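
Each run of rasa train writes the trained model as a timestamped .tar.gz archive under models/, and rasa run loads the most recent one by default. A sketch of picking the newest archive by modification time, demonstrated on placeholder files in a scratch directory rather than real models:

```shell
# Rasa stores trained models as timestamped .tar.gz archives in models/;
# the most recently modified one is loaded by default. Demonstrated on
# placeholder files in a scratch directory:
tmp=$(mktemp -d)
touch -t 202201011200 "$tmp/20220101-120000.tar.gz"
touch -t 202201020900 "$tmp/20220102-090000.tar.gz"
newest=$(ls -t "$tmp"/*.tar.gz | head -n1)
echo "latest model: $(basename "$newest")"
```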

(Back to top)

Train models from docker

To train models, run this command:

sudo docker run -v $(pwd):/app rasa/rasa:3.2.0-full train --domain domain.yml --data data --out models

(*) If you want to train models faster, run the training command with --augmentation 0; the training process will then skip the data augmentation step.

(Back to top)

Run the whole system

Run from source

  1. Open new terminal, navigate to this project directory and activate virtual environment

  2. Run this command for the main server:

    rasa run
  3. Open another terminal in the same directory and run the Rasa action server:

    rasa run actions --actions actions.actions
  4. Open a third terminal and run the Duckling service:

    sudo docker run -p 8000:8000 rasa/duckling
  5. Run ngrok:

    ngrok http 5005
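
Once all four processes are up, the assistant can be smoke-tested without Telegram through Rasa's REST channel. This assumes a rest: entry is also enabled in credentials.yml, which the snippet above does not show. A sketch:

```shell
# Build the message payload first so it can be inspected; Rasa's REST
# channel expects a JSON body with "sender" and "message" fields.
payload='{"sender": "test_user", "message": "hello"}'
printf '%s' "$payload" | python3 -m json.tool >/dev/null && echo "payload is valid JSON"

# With the main server listening on port 5005, the live request would be:
#   curl -s -X POST http://localhost:5005/webhooks/rest/webhook \
#        -H 'Content-Type: application/json' -d "$payload"
```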

(Back to top)

Run from docker

  1. Run the main server with the following command:

    sudo docker run --name=rasa-demo-server --net=rasa-bot-demo -p 5005:5005 -v $(pwd):/app rasa/rasa:3.2.0-full run
  2. Run the action server with the following command (using the image built earlier):

    sudo docker run --name=rasa-demo-action-server --net=rasa-bot-demo -p 5055:5055 rasa-demo-action-server
  3. Run duckling service:

    sudo docker run --name=rasa-demo-duckling --net=rasa-bot-demo -p 8000:8000 rasa/duckling
  4. Run the ngrok service:

    sudo docker run --net=rasa-bot-demo -it -e NGROK_AUTHTOKEN=<token> ngrok/ngrok:alpine http rasa-demo-server:5005 # Get this token from your ngrok account
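
For the main server container to reach the action server over the rasa-bot-demo network, endpoints.yml typically refers to it by container name. A sketch, assuming the container names used above (the repository's actual endpoints.yml may differ):

```yaml
# endpoints.yml (sketch): the main server reaches the action server by
# its container name on the shared rasa-bot-demo network
action_endpoint:
  url: "http://rasa-demo-action-server:5055/webhook"
```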

(Back to top)

Contributing

If you want to contribute to this project, please follow these steps:

  1. Fork this project
  2. Write your code and push it to your fork
  3. Open a pull request to the main project (main branch)
  4. Tag the "distardao" account in a pull request comment

We welcome your fantastic ideas!

(Back to top)

License

Distributed under the GNU GENERAL PUBLIC LICENSE V3. See LICENSE for more information.

(Back to top)

Contact

TODO

(Back to top)
