QuivrHQ / quivr

Your GenAI Second Brain 🧠 A personal productivity assistant (RAG) ⚡️🤖 Chat with your docs (PDF, CSV, ...) and apps using Langchain, GPT 3.5 / 4 Turbo, Anthropic, VertexAI, Ollama, Groq, and other LLMs, privately, and share with users! A local and private alternative to OpenAI GPTs and ChatGPT, powered by retrieval-augmented generation.

Home Page: https://quivr.app

[Feature]: Need a version of the docker image in the form of AllInOne

bigbenz8 opened this issue · comments

The Feature

Users prefer to experience and understand a product directly through its features, no matter what stage of development the project is in. The current issue list shows that this project has a lot of installation problems (system compilation issues, network configuration issues, and so on), and I don't think that is what we want to see. It's time to improve this: download and use directly, as close to zero configuration as possible, is a good idea. (Some operations may need to be adapted to specific hardware, but there is always a way.) All that needs to be done is to get the system up and running simply and trouble-free, making installation and configuration effortless.

Motivation, pitch

All that needs to be done is to get the system up and running simply and trouble-free, making installation and configuration effortless.

Twitter / LinkedIn details

No response

Hey @bigbenz8, great to see you back with another insightful suggestion! Hope you're doing well.

I understand your concern about the installation and configuration process of the quivr repository. You're right, an All-In-One Docker image could indeed simplify this process significantly.

Based on the Dockerfile provided in the repository, it's feasible to create such an image. The Dockerfile outlines a process for setting up an environment with Node.js and Python, installing dependencies, and preparing a Next.js application for execution. This setup encapsulates the environment setup, dependency resolution, and application build steps into a single, easily distributable unit, which aligns with the goal of simplifying installation and configuration for users. By using this Dockerfile, users can deploy the quivr application without manually installing Node.js, Python, and other dependencies on their systems, thus making the process as easy as possible. You can find the Dockerfile here.

As for the specific system compilation issues and network configuration issues that users have reported, I wasn't able to find any specific issues within the repository. However, I did find a similar closed issue titled "a docker image would be very helpful", which suggests that other users have also expressed interest in a Docker-based solution.

Regarding the current steps required for installing and configuring the quivr repository, they are outlined in the installation guide. The steps involve cloning the repository, copying and updating the .env.example files, launching the project, and logging into the app. For updating Quivr, you would need to pull the latest changes and run migrations using the migration.sh script.
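The steps described above amount to a short shell session. This is a sketch based on the description in the guide; the repository URL and the exact location of migration.sh are assumptions, so check the linked installation guide for the authoritative commands:

```shell
# Sketch of the documented install flow -- verify against the install guide.
git clone https://github.com/QuivrHQ/quivr.git
cd quivr
cp .env.example .env          # then edit .env (API keys, SUPABASE_URL, ...)
docker compose up             # launch the project, then log in to the app

# Updating Quivr later:
git pull
./migration.sh                # run the database migrations
```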

I hope this information is helpful. If you have any further questions or need more clarification, please let me know.

Sources

About Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

I've also stopped using quivr due to the poor documentation, poor app stability, the complexity of troubleshooting, and the lack of response from the devs. Even the online app is hit and miss. I second this suggestion.

In any case, the tool is exactly what I was looking for. I built a new Ubuntu server, but with Docker and the many problems, I can't get it to run, like many others here...

Same, can't get it to run, and I'm a 30-year IT veteran developer.

Hey guys! Would love to get your issues and understand them more in depth so we can address them.

We made an AllInOne Docker image, but some people have configurations that break a few things.

Could you come to Discord so we can chat about it?

Supabase is my primary issue. The frontend / backend are straightforward... However, Supabase has hurdles. I can get it working on Windows but not Ubuntu.

Hello, supabase dev here. Is there any specific issue you are running into for local dev? I'm happy to take a closer look next week.

I am going to step through the process, make a definitive list of what is not working for me, and provide it here. This is with Docker on Ubuntu.

First issue I get, when running the steps specified here: https://docs.quivr.app/developers/contribution/install

is with this step:
Step 5: Login to the app

Connect to the supabase database at http://localhost:8000/project/default/auth/users with the following credentials: admin/admin in order to create new users. Auto-confirm the email.

The error I am getting in my docker logs is:

backend-core | INFO: Started server process [8]
backend-core | INFO: Waiting for application startup.
backend-core | INFO: Application startup complete.
worker | [2024-04-20 21:44:29,250: INFO/MainProcess] Events of group {task} enabled by remote.
beat | [2024-04-20 21:45:00,003: INFO/MainProcess] Scheduler: Sending due task process_integration_brain_sync (celery_worker.process_integration_brain_sync)
worker | [2024-04-20 21:45:00,010: INFO/MainProcess] Task celery_worker.process_integration_brain_sync[314cdab3-69a2-49fb-a037-0c12f5cdc489] received
worker | [2024-04-20 21:45:00,074: ERROR/ForkPoolWorker-3] Task celery_worker.process_integration_brain_sync[314cdab3-69a2-49fb-a037-0c12f5cdc489] raised unexpected: ConnectError('[Errno -2] Name or service not known')
worker | Traceback (most recent call last):
worker | File "/usr/local/lib/python3.11/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
worker | yield
worker | File "/usr/local/lib/python3.11/site-packages/httpx/_transports/default.py", line 233, in handle_request
worker | resp = self._pool.handle_request(req)
worker | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker | File "/usr/local/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
worker | raise exc from None
worker | File "/usr/local/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
worker | response = connection.handle_request(
worker | ^^^^^^^^^^^^^^^^^^^^^^^^^^
worker | File "/usr/local/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
worker | raise exc
worker | File "/usr/local/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
worker | stream = self._connect(request)
worker | ^^^^^^^^^^^^^^^^^^^^^^
worker | File "/usr/local/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 122, in _connect
worker | stream = self._network_backend.connect_tcp(**kwargs)
worker | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker | File "/usr/local/lib/python3.11/site-packages/httpcore/_backends/sync.py", line 205, in connect_tcp
worker | with map_exceptions(exc_map):
worker | File "/usr/local/lib/python3.11/contextlib.py", line 155, in __exit__
worker | self.gen.throw(typ, value, traceback)
worker | File "/usr/local/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
worker | raise to_exc(exc) from exc
worker | httpcore.ConnectError: [Errno -2] Name or service not known
worker |
worker | The above exception was the direct cause of the following exception:
worker |
worker | Traceback (most recent call last):
worker | File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 453, in trace_task
worker | R = retval = fun(*args, **kwargs)
worker | ^^^^^^^^^^^^^^^^^^^^
worker | File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 736, in __protected_call__
worker | return self.run(*args, **kwargs)
worker | ^^^^^^^^^^^^^^^^^^^^^^^^^
worker | File "/code/celery_worker.py", line 187, in process_integration_brain_sync
worker | integrations = integration.get_integration_brain_by_type_integration("notion")
worker | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker | File "/code/modules/brain/repository/integration_brains.py", line 110, in get_integration_brain_by_type_integration
worker | .execute()
worker | ^^^^^^^^^
worker | File "/usr/local/lib/python3.11/site-packages/postgrest/_sync/request_builder.py", line 58, in execute
worker | r = self.session.request(
worker | ^^^^^^^^^^^^^^^^^^^^^
worker | File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 827, in request
worker | return self.send(request, auth=auth, follow_redirects=follow_redirects)
worker | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker | File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 914, in send
worker | response = self._send_handling_auth(
worker | ^^^^^^^^^^^^^^^^^^^^^^^^^
worker | File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 942, in _send_handling_auth
worker | response = self._send_handling_redirects(
worker | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker | File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
worker | response = self._send_single_request(request)
worker | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker | File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 1015, in _send_single_request
worker | response = transport.handle_request(request)
worker | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker | File "/usr/local/lib/python3.11/site-packages/httpx/_transports/default.py", line 232, in handle_request
worker | with map_httpcore_exceptions():
worker | File "/usr/local/lib/python3.11/contextlib.py", line 155, in __exit__
worker | self.gen.throw(typ, value, traceback)
worker | File "/usr/local/lib/python3.11/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
worker | raise mapped_exc(message) from exc
worker | httpx.ConnectError: [Errno -2] Name or service not known
backend-core | INFO: 172.18.0.1:34738 - "OPTIONS /user HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.1:34754 - "OPTIONS /user/identity HTTP/1.1" 200 OK
backend-core | INFO: 172.18.0.1:34738 - "GET /user/identity HTTP/1.1" 403 Forbidden
backend-core | INFO: 172.18.0.1:34766 - "GET /user HTTP/1.1" 403 Forbidden
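For what it's worth, "[Errno -2] Name or service not known" is a DNS failure: the worker container cannot resolve the hostname in SUPABASE_URL, so every Supabase call dies before a connection is even attempted. You can check whether a given name resolves with getent (a generic diagnostic, not quivr-specific):

```shell
# Exit status 0 plus a printed address means the name resolves; a non-zero
# exit is the same lookup failure httpx reports as ConnectError [Errno -2].
getent hosts localhost                       # resolves on any machine
getent hosts host.docker.internal || echo "not resolvable from this shell"
```

Running the same check inside the worker container (docker exec worker getent hosts <name>) tests resolution where it actually matters.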

@micduffy I can reproduce this and it seems like a common place people trip up, partly because of a discrepancy in docs.

If you check the README.md of this repo, Step 4 is to run

supabase start

before any other docker commands. This is missing from Step 4 of the docs you linked.

[Screenshot: README.md Step 4, showing the supabase start command]

Since you are using Ubuntu, it would be easier to run npx supabase start, because it also takes care of Step 0: Supabase installation.

After that, backend-core starts up like a charm for me on Ubuntu.

gitpod /workspace/quivr (main) $ docker compose up
[+] Running 6/0
 βœ” Container redis         Created                                                                                                               0.0s 
 βœ” Container beat          Created                                                                                                               0.0s 
 βœ” Container backend-core  Created                                                                                                               0.0s 
 βœ” Container worker        Created                                                                                                               0.0s 
 βœ” Container web           Created                                                                                                               0.0s 
 βœ” Container flower        Created                                                                                                               0.0s 
Attaching to backend-core, beat, flower, redis, web, worker
redis         | 1:C 22 Apr 2024 16:15:44.956 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis         | 1:C 22 Apr 2024 16:15:44.956 * Redis version=7.2.3, bits=64, commit=00000000, modified=0, pid=1, just started
redis         | 1:C 22 Apr 2024 16:15:44.956 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis         | 1:M 22 Apr 2024 16:15:44.956 * monotonic clock: POSIX clock_gettime
redis         | 1:M 22 Apr 2024 16:15:44.957 * Running mode=standalone, port=6379.
redis         | 1:M 22 Apr 2024 16:15:44.959 * Server initialized
redis         | 1:M 22 Apr 2024 16:15:44.960 * Loading RDB produced by version 7.2.3
redis         | 1:M 22 Apr 2024 16:15:44.960 * RDB age 264 seconds
redis         | 1:M 22 Apr 2024 16:15:44.960 * RDB memory usage when created 1.40 Mb
redis         | 1:M 22 Apr 2024 16:15:44.960 * Done loading RDB, keys loaded: 3, keys expired: 0.
redis         | 1:M 22 Apr 2024 16:15:44.960 * DB loaded from disk: 0.000 seconds
redis         | 1:M 22 Apr 2024 16:15:44.960 * Ready to accept connections tcp
backend-core  | INFO:     Uvicorn running on http://0.0.0.0:5050 (Press CTRL+C to quit)
backend-core  | INFO:     Started parent process [1]

However, worker is still throwing an error

worker | raise mapped_exc(message) from exc
worker | httpx.ConnectError: [Errno -2] Name or service not known

This is because the default SUPABASE_URL defined in .env doesn't work out of the box on Ubuntu. The simplest fix is to add an extra host to the worker service of docker-compose.yml. For example:

  worker:
    pull_policy: if_not_present
    image: stangirard/quivr-backend-prebuilt:latest
    env_file:
      - .env
    build:
      context: backend
      dockerfile: Dockerfile
    container_name: worker
    command: celery -A celery_worker worker -l info
    restart: always
    depends_on:
      - redis
    extra_hosts:
      - "host.docker.internal:host-gateway"
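To confirm the mapping took effect, you can check name resolution from inside the container. A sketch: "worker" is the container_name from the service above, and getent is assumed to be present in the image:

```shell
# After docker compose up, verify host.docker.internal resolves in-container.
docker exec worker getent hosts host.docker.internal
# It should print the host-gateway IP; no output (non-zero exit) means the
# extra_hosts entry was not applied to the running container.
```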

After restarting docker compose, my logs look successful.

backend-core  | INFO:     127.0.0.1:55152 - "GET /healthz HTTP/1.1" 200 OK
beat          | [2024-04-22 16:45:00,000: INFO/MainProcess] Scheduler: Sending due task process_integration_brain_sync (celery_worker.process_integration_brain_sync)
worker        | [2024-04-22 16:45:00,001: INFO/MainProcess] Task celery_worker.process_integration_brain_sync[36b7aa60-1d6c-44c1-b5a2-f79e1d23f79e] received
worker        | [2024-04-22 16:45:00,031: INFO/ForkPoolWorker-11] HTTP Request: GET http://host.docker.internal:54321/rest/v1/integrations_user?select=%2A%2C%20integrations%20%28%29&integrations.integration_name=eq.notion "HTTP/1.1 200 OK"
worker        | [2024-04-22 16:45:00,032: INFO/ForkPoolWorker-11] Task celery_worker.process_integration_brain_sync[36b7aa60-1d6c-44c1-b5a2-f79e1d23f79e] succeeded in 0.029550996841862798s: None

Thanks @sweatybridge! I'm currently having these issues :D

I'm writing a doc on how to deploy to Ubuntu. Thanks for the tip.

I've added the

extra_hosts:
      - "host.docker.internal:host-gateway"

to the docker-compose.yml.

Hope it helps many people

Finishing up the documentation on Ubuntu.