
HuixiangDou: Overcoming Group Chat Scenarios with LLM-based Technical Assistance



English | 简体中文

HuixiangDou is a group chat assistant based on LLM (Large Language Model).

Advantages:

  1. Designs a two-stage reject-then-respond pipeline to cope with group chat scenarios, answering user questions without flooding the chat with messages; see arxiv2401.08772 (a conceptual sketch follows this list)
  2. Low cost: only 1.5GB of GPU memory is required and no training is needed
  3. Offers a complete suite of Web, Android, and pipeline source code, which is industrial-grade and commercially viable
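
The two-stage design in point 1 can be summarized in a few lines of Python. This is only a conceptual sketch; the function names, objects, and threshold below are illustrative and not HuixiangDou's actual API:

# Conceptual sketch of the reject-then-respond pipeline (names are illustrative).
def handle_group_message(query: str, knowledge_base, llm, reject_throttle: float = 0.5):
    # Stage 1: rejection -- stay silent on chitchat unrelated to the knowledge base.
    relevance = knowledge_base.similarity(query)      # hypothetical relevance score in [0, 1]
    if relevance < reject_throttle:
        return None                                   # no reply, so the group is not flooded
    # Stage 2: response -- retrieve context and answer the technical question.
    context = knowledge_base.retrieve(query)          # hypothetical retrieval step
    return llm.generate(query, context)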

Check out the scenarios where HuixiangDou is running, and join the WeChat group to try the AI assistant inside.

If this helps you, please give it a star ⭐

🔆 News

The web portal is available on OpenXLab, where you can build your own knowledge assistant without any coding and use it in WeChat and Feishu groups.

Watch the web portal usage video on YouTube and BiliBili.

📖 Datasheet

| Model Support | File Format Support | IM Application Support |
| :---: | :---: | :---: |
|  | pdf, word, excel, ppt, html, markdown, txt | WeChat, Lark, .. |

📦 Hardware Requirements

The following are the hardware requirements for running HuixiangDou. It is suggested to follow this document, starting with the basic version and gradually trying the advanced features.

| Version | GPU Memory Requirements | Features | Tested on Linux |
| :---: | :---: | :---: | :---: |
| Experience Version | 1.5GB | Uses the openai API (e.g., kimi and deepseek) to handle source code-level issues; free within quota |  |
| Basic Version | 19GB | Deploys a local LLM that can answer basic questions |  |
| Advanced Version | 40GB | Fully utilizes search + long-text to answer source code-level questions |  |

🔥 Run

We will use mmpose and some pdf/word/excel/ppt files as examples to explain how to deploy the knowledge assistant to a Feishu group chat.

STEP1. Establish Topic Feature Repository

Hugging Face login

huggingface-cli login

Execute all the commands below (lines starting with '#' are comments; they can be copied along with the rest).

# Download the repo
git clone https://github.com/internlm/huixiangdou --depth=1 && cd huixiangdou

# Download chatting topics
mkdir repodir
git clone https://github.com/open-mmlab/mmpose --depth=1 repodir/mmpose
git clone https://github.com/tpoisonooo/huixiangdou-testdata --depth=1 repodir/testdata


# system packages required for parsing `word` documents
apt update
apt install python-dev libxml2-dev libxslt1-dev antiword unrtf poppler-utils pstotext tesseract-ocr flac ffmpeg lame libmad0 libsox-fmt-mp3 sox libjpeg-dev swig libpulse-dev
# python requirements
pip install -r requirements.txt

# save the features of repodir to workdir
mkdir workdir
python3 -m huixiangdou.service.feature_store

The first run will automatically download the text2vec model; you can also download it manually and update the model path in config.ini.

After running, HuixiangDou can distinguish which user topics should be handled and which chitchat should be rejected. Please edit good_questions and bad_questions, and try your own domain knowledge (medical, finance, electricity, etc.); a rough illustration of the underlying similarity check follows the example output below.

# Reject chitchat
reject query: What to eat for lunch today?
reject query: How to make HuixiangDou?

# Accept technical topics
process query: How to install mmpose ?
process query: What should I pay attention to when using research instruments?
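
Under the hood, the rejection decision is a similarity check between the query embedding and the knowledge base built by feature_store. The snippet below is only a rough illustration of that idea, using sentence_transformers with a placeholder model name and a threshold that simply mirrors reject_throttle:

# Rough illustration of rejection by embedding similarity (model name is a placeholder).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("your-text2vec-model")           # replace with the model from config.ini
doc_emb = model.encode("How to install mmpose with pip?")    # stands in for the feature store
for query in ["What to eat for lunch today?", "How to install mmpose ?"]:
    score = util.cos_sim(model.encode(query), doc_emb).item()
    verdict = "process" if score >= 0.5 else "reject"        # 0.5 mirrors reject_throttle
    print(f"{verdict} query: {query} (similarity {score:.2f})")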

STEP2. Run Basic Technical Assistant

Configure free TOKEN

HuixiangDou uses a search engine. Click Serper to obtain a quota-limited TOKEN and fill it into config.ini; a sketch of the underlying request follows the config snippet below.

# config.ini
..
[web_search]
x_api_key = "${YOUR-X-API-KEY}"
..
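
For reference, a Serper query is a simple HTTP POST carrying the x_api_key value in a request header. The endpoint and field names below follow Serper's public documentation as far as I know; verify them against the current docs before relying on this sketch:

# Minimal sketch of a Serper-style web search request (verify endpoint/fields against Serper docs).
import requests

resp = requests.post(
    "https://google.serper.dev/search",
    headers={"X-API-KEY": "${YOUR-X-API-KEY}", "Content-Type": "application/json"},
    json={"q": "how to install mmpose"},
    timeout=10,
)
for item in resp.json().get("organic", []):
    print(item.get("title"), item.get("link"))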

Test Q&A Effect

[Experience Version] If your GPU memory is insufficient to run the 7B LLM locally (less than 15GB), try kimi or deepseek, which offer 30 million free tokens. See config-2G.ini

# config.ini

[llm]
enable_local = 0
enable_remote = 1
..
[llm.server]
..
remote_type = "deepseek"
remote_api_key = "YOUR-API-KEY"
remote_llm_max_text_length = 16000
remote_llm_model = "deepseek-chat"

By default, with enable_local = 1, a local LLM suited to your GPU will be downloaded automatically on the first run.
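
When enable_remote = 1, the remote side is simply an openai-compatible chat endpoint. A minimal standalone call, assuming deepseek's openai-compatible base URL and the official openai Python client, looks roughly like this:

# Minimal sketch of an openai-compatible remote call (base_url and model assume deepseek; adjust as needed).
from openai import OpenAI

client = OpenAI(api_key="YOUR-API-KEY", base_url="https://api.deepseek.com")
resp = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "How to install mmpose?"}],
)
print(resp.choices[0].message.content)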

  • Non-docker users. If you don't use docker, you can start all services at once.

    # standalone
    python3 -m huixiangdou.main --standalone
    ..
    ErrorCode.SUCCESS,
    Query: Could you please advise if there is any good optimization method for video stream detection flickering caused by frame skipping?
    Reply:
    1. Frame rate control and frame skipping strategy are key to optimizing video stream detection performance, but you need to pay attention to the impact of frame skipping on detection results.
    2. Multithreading processing and caching mechanism can improve detection efficiency, but you need to pay attention to the stability of detection results.
    3. The use of sliding window method can reduce the impact of frame skipping and caching on detection results.
  • Docker users. If you are using docker, HuixiangDou's Hybrid LLM Service needs to be deployed separately.

    # First start LLM service listening the port 8888
    python3 -m huixiangdou.service.llm_server_hybrid
    ..
    ======== Running on http://0.0.0.0:8888 ========
    (Press CTRL+C to quit)

    Then open a new docker container, configure the host IP (not the container IP) in config.ini, and run python3 -m huixiangdou.main; a reachability check sketch follows the example output below.

    # config.ini
    [llm]
    ..
    client_url = "http://10.140.24.142:8888/inference" # example, use your real host IP here
    
    # run
    python3 -m huixiangdou.main
    ..
    ErrorCode.SUCCESS
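
    If the container keeps failing with connection errors, a quick reachability check helps separate networking problems (wrong IP, blocked port) from configuration problems. The snippet below only tests that the host port answers; the IP is a placeholder:

    # Quick reachability check from inside the container (IP is a placeholder; use your real host IP).
    import requests

    host = "http://10.140.24.142:8888"
    try:
        requests.get(host, timeout=5)
        print("host reachable -- client_url in config.ini should work")
    except requests.exceptions.RequestException as exc:
        print(f"cannot reach {host}: {exc} -- check that you used the host IP, not the container IP")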

STEP3. Send to Feishu/Personal Wechat [Optional]

Click Create a Feishu Custom Robot to get the WEBHOOK_URL callback and fill it into config.ini; a quick manual test of the webhook is sketched after the config below.

# config.ini
..
[frontend]
type = "lark"
webhook_url = "${YOUR-LARK-WEBHOOK-URL}"
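
To verify the webhook works before wiring it into HuixiangDou, you can post a test message manually. The payload shape below follows Feishu's custom bot documentation as I recall it; verify against their docs:

# Manual test of the Feishu/Lark webhook (payload shape per Feishu custom bot docs; verify before use).
import requests

webhook_url = "${YOUR-LARK-WEBHOOK-URL}"
resp = requests.post(
    webhook_url,
    json={"msg_type": "text", "content": {"text": "hello from HuixiangDou setup"}},
    timeout=10,
)
print(resp.status_code, resp.text)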

Run the assistant. After it finishes, the technical assistant's reply will be sent to the Feishu group chat.

python3 -m huixiangdou.main --standalone # for non-docker users
python3 -m huixiangdou.main # for docker users

STEP4. Advanced Version [Optional]

The basic version may not perform well enough. You can enable the following features to enhance performance; the more features you turn on, the better the responses.

  1. Use higher accuracy local LLM

    Adjust the llm.local model in config.ini to internlm2-chat-20b. This option has a significant effect, but requires more GPU memory.

  2. Hybrid LLM Service

    For LLM services that support the openai interface, HuixiangDou can utilize their long-context ability. Taking kimi as an example, below is a sample config.ini configuration:

    # config.ini
    [llm]
    enable_local = 1
    enable_remote = 1
    ..
    [llm.server]
    ..
    # open https://platform.moonshot.cn/
    remote_type = "kimi"
    remote_api_key = "YOUR-KIMI-API-KEY"
    remote_llm_max_text_length = 128000
    remote_llm_model = "moonshot-v1-128k"

    We also support chatgpt. Note that this feature will increase response time and operating costs.

  3. Repo search enhancement

    This feature is suitable for handling difficult questions and requires basic development capabilities to adjust the prompt.

    • Click sourcegraph-account-access to get token

      # open https://github.com/sourcegraph/src-cli#installation
      sudo curl -L https://sourcegraph.com/.api/src-cli/src_linux_amd64 -o /usr/local/bin/src && chmod +x /usr/local/bin/src
      
      # config.ini: enable search and fill the token
      [worker]
      enable_sg_search = 1
      ..
      [sg_search]
      ..
      src_access_token = "${YOUR_ACCESS_TOKEN}"
    • Edit the name and introduction of the repo; we take opencompass as an example

      # config.ini
      # add your repo here, we just take opencompass and lmdeploy as example
      [sg_search.opencompass]
      github_repo_id = "open-compass/opencompass"
      introduction = "Used for evaluating large language models (LLM) .."
    • Use python3 -m huixiangdou.service.sg_search for a unit test; the returned content should include opencompass source code and documentation

      python3 -m huixiangdou.service.sg_search
      ..
      "filepath": "opencompass/datasets/longbench/longbench_trivia_qa.py",
      "content": "from datasets import Dataset..

    Run main.py; HuixiangDou will enable search enhancement when appropriate (an illustrative snippet follows below).
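
    For a sense of what the search enhancement does, the src CLI installed above can be queried directly. The snippet below is only an illustration using src-cli's standard environment variables and search subcommand; the exact queries HuixiangDou issues may differ:

    # Illustrative direct use of the src CLI (HuixiangDou's actual queries may differ).
    import os
    import subprocess

    os.environ["SRC_ENDPOINT"] = "https://sourcegraph.com"
    os.environ["SRC_ACCESS_TOKEN"] = "${YOUR_ACCESS_TOKEN}"
    result = subprocess.run(
        ["src", "search", "-json", "repo:open-compass/opencompass longbench_trivia_qa"],
        capture_output=True, text=True,
    )
    print(result.stdout)  # JSON containing matching file paths and contents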

  4. Tune Parameters

    It is often unavoidable to adjust parameters with respect to business scenarios.

🛠️ FAQ

  1. What if the robot is too cold/too chatty?

    • Fill in the questions that should be answered in the real scenario into resource/good_questions.json, and fill the ones that should be rejected into resource/bad_questions.json.
    • Adjust the theme content in repodir to ensure that the markdown documents in the main library do not contain irrelevant content.

    Re-run feature_store to update thresholds and feature libraries.

    ⚠️ You can directly modify reject_throttle in config.ini. Generally speaking, 0.5 is a high value; 0.2 is too low.

  2. Launch is normal, but the process runs out of memory during runtime?

    Long-text inference with a transformers-based LLM requires more GPU memory. In this case, apply kv cache quantization to the model (see the lmdeploy quantization description), then use docker to deploy the Hybrid LLM Service independently.

  3. How to connect other local LLMs / The results are not ideal after connecting?

  4. What if the response is too slow/request always fails?

    • Refer to hybrid llm service to add exponential backoff and retransmission (a generic sketch follows this list).
    • Replace local LLM with an inference framework such as lmdeploy, instead of the native huggingface/transformers.
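
    A generic exponential backoff wrapper looks like the sketch below; this illustrates the idea rather than HuixiangDou's own implementation:

    # Generic exponential-backoff retry helper (illustrative, not HuixiangDou's own code).
    import random
    import time

    def call_with_backoff(fn, max_retries=5, base_delay=1.0):
        """Retry fn() with exponential backoff plus jitter."""
        for attempt in range(max_retries):
            try:
                return fn()
            except Exception as exc:
                if attempt == max_retries - 1:
                    raise
                delay = base_delay * (2 ** attempt) + random.uniform(0, 1)
                print(f"attempt {attempt + 1} failed ({exc}); retrying in {delay:.1f}s")
                time.sleep(delay)

    # Example: wrap a remote LLM request
    # reply = call_with_backoff(lambda: client.chat.completions.create(...))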
  5. What if the GPU memory is too low?

    In this case, the local LLM cannot run; only a remote LLM can be used together with text2vec to execute the pipeline. Please make sure that config.ini only uses the remote LLM and that the local LLM is turned off.

  6. No module named 'faiss.swigfaiss_avx2'? Locate the installed faiss package first:

    import faiss
    print(faiss.__file__)
    # /root/.conda/envs/InternLM2_Huixiangdou/lib/python3.10/site-packages/faiss/__init__.py

    Then add a soft link:

    # cd your_python_path/site-packages/faiss
    cd /root/.conda/envs/InternLM2_Huixiangdou/lib/python3.10/site-packages/faiss/
    ln -s swigfaiss.py swigfaiss_avx2.py

🍀 Acknowledgements

📝 Citation

@misc{kong2024huixiangdou,
      title={HuixiangDou: Overcoming Group Chat Scenarios with LLM-based Technical Assistance},
      author={Huanjun Kong and Songyang Zhang and Kai Chen},
      year={2024},
      eprint={2401.08772},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}


License: BSD 3-Clause "New" or "Revised" License

