lhenault / simpleAI

An easy way to host your own AI API and expose alternative models, while being compatible with "open" AI clients.

Home Page: https://pypi.org/project/simple-ai-server/

It doesn't work with https://github.com/Niek/chatgpt-web

cahya-wirawan opened this issue · comments

Hi,
I am trying to use the chatgpt client https://github.com/Niek/chatgpt-web with simpleAI, but it seems the HTTP requests from chatgpt-web start with /v1/, while simpleAI serves its routes without the /v1/ prefix.
It would be great if this ChatGPT web user interface could work with simpleAI.
Thanks

Good point. I've personally used it mostly through the openai client, where editing openai.api_base works, but after a quick look at that specific project I understand why it wouldn't. Should be easy to add with something like:

from fastapi import APIRouter, FastAPI

app = FastAPI()
router = APIRouter(prefix="/v1")

# ... register the existing endpoints on `router` here ...

app.include_router(router)
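For reference, editing api_base against the current (unprefixed) routes looks roughly like this with the pre-1.0 openai Python client; the host, port, model name, and key value here are assumptions:

import openai

# Point the legacy (pre-1.0) openai client at a local simpleAI instance.
# Host, port, model name, and key are assumptions; adjust to your setup.
openai.api_base = "http://127.0.0.1:8080"
openai.api_key = "not-used"  # placeholder; assumption: a self-hosted server ignores it

completion = openai.Completion.create(model="llama", prompt="Hello")
print(completion)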

The issue with adding that prefix is that the openai client uses /v1 as part of the api_base parameter. If we implemented the previous suggestion, it would break support for at least the official client.

I believe the simpler, better, and cleaner solution would be for the people working on the aforementioned repo to:

  1. edit the apiBase to add /v1,
  2. and remove the /v1 prefix from the request paths.

I assume their current implementation doesn't work with the OpenAI API itself when used through Azure either. You could also use SimpleAI behind an nginx reverse proxy.

Actually, I used nginx as a reverse proxy before writing this issue, but I got a CORS error: chatgpt-web sends an HTTP OPTIONS (preflight) request, which was rejected by simpleAI because it doesn't handle it.

I'm far from being an expert on CORS, but that seems like a valid issue we should try to fix. Could you perhaps give a bit more detail in a separate issue? Thanks :)

On the other hand, for the reasons given in my previous comment, I don't think we should add the /v1 prefix, so I'll close this one, unless you have a different opinion there.

In my case I wanted to use it with Obsidian and got blocked by CORS. I copied __main__.py from simple_ai into my project as sai_server.py and modified the serve_app function roughly as follows:

import uvicorn
from fastapi.middleware.cors import CORSMiddleware


def allow_obsidian_md_cors(app):
    # Allow requests from the Obsidian desktop app's origin only
    origins = [
        "app://obsidian.md",
    ]
    app.add_middleware(
        CORSMiddleware,
        allow_origins=origins,
        allow_credentials=True,
        allow_methods=["*"],
        allow_headers=["*"],
    )
    return app


def serve_app(host="127.0.0.1", port=8080, **kwargs):
    from simple_ai import server  # absolute import, since this lives outside the package

    app = allow_obsidian_md_cors(server.app)
    uvicorn.run(app=app, host=host, port=port)

^^ Untested GPT output, but it gets the idea across; the actual code isn't available on my local machine at the moment.

I think you could use the same pattern to add the router and/or fix the CORS, then run with python sai_server.py serve (see the sketch below). It's probably best to keep this out of the main server codebase though, since there are endless permutations of middleware and routing that could be implemented.
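A minimal sketch of that pattern, assuming simple_ai exposes its FastAPI app as server.app as in the snippet above; mounting serves the existing routes both with and without the /v1 prefix:

import uvicorn
from fastapi import FastAPI
from simple_ai import server  # assumption: same package layout as the snippet above


def add_v1_prefix(app: FastAPI) -> FastAPI:
    # Wrap the existing app so its routes also answer under /v1.
    wrapper = FastAPI()
    wrapper.mount("/v1", app)  # /v1/chat/completions -> app's /chat/completions
    wrapper.mount("/", app)    # keep the unprefixed routes working too
    return wrapper


if __name__ == "__main__":
    uvicorn.run(add_v1_prefix(server.app), host="127.0.0.1", port=8080)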

http OPTIONS protocol

This I'm not sure about. I can't find where it's called from in the source either, but I only used GitHub search (which seems to have recently improved).

probably best to keep out of the main server codebase though since there are exponential permutations of middleware and routing that could be implemented

It's tempting to simply go the brute-force way and add this, but I'm indeed unsure about the implications of:

allow_methods=["*"],
allow_headers=["*"],
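A tighter variant would list the methods and headers explicitly instead of using wildcards; the values below are assumptions to adapt to whichever client you're serving:

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Explicit origins, methods, and headers instead of wildcards
# (these values are assumptions; adjust for the client you use).
app.add_middleware(
    CORSMiddleware,
    allow_origins=["app://obsidian.md"],
    allow_methods=["GET", "POST", "OPTIONS"],
    allow_headers=["Authorization", "Content-Type"],
)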

OK, thanks a lot for the prompt response. I will try it later when I'm back home.

@Nintorac I will probably steal your snippet to enrich the docs, thanks!

After I added

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
)

the CORS issue is gone, and the OPTIONS request also works.
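For anyone who wants to verify the preflight without a browser, a quick check could look like this (the URL and origin are assumptions):

import requests

# Simulate the browser's CORS preflight against a local simpleAI instance
# (URL and origin are assumptions; adjust to your setup).
resp = requests.options(
    "http://127.0.0.1:8080/chat/completions",
    headers={
        "Origin": "http://localhost:5173",
        "Access-Control-Request-Method": "POST",
    },
)
print(resp.status_code, resp.headers.get("access-control-allow-origin"))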

Thanks again @cahya-wirawan and @Nintorac for your feedback. I've added a section to the README to help anyone dealing with CORS / routing issues, and I'm now closing this issue.

Actually, I now have a working nginx configuration that removes the need to set CORS allow_origins as above.
This is a better approach, since setting allow_origins to "*" is bad from a security point of view.
I will write up an example of this nginx config with short comments about it.

I think the advantage of the solution suggested by @Nintorac is that you can implement whatever policy is relevant to your use case. Their snippet doesn't use allow_origins=["*"], which is indeed quite a bad idea from a security standpoint, but a list of allowed origins.

I'd be happy to have a look at your example, but with the aforementioned solution you don't need a reverse proxy or any knowledge of nginx, only Python. That might be simpler and good enough for most use cases.

See this section in the README to solve your initial issue.

That's right, it is simpler to just use allow_origins=["*"], but in a production environment we wouldn't expose the simpleAI port directly to the public; we'd put it behind some kind of reverse proxy, for example nginx or Apache.

This is my nginx configuration in a Docker container running the simpleAI server (port 8080), chatgpt-web (port 5173) and nginx (port 9000). Moreover, it solves not only the CORS issue but also the "/v1" path issue.

server {
  listen       9000;
  server_name  example.com;

  # Redirect /models and /v1/models to /models/ (with trailing slash)
  rewrite ^/(v1/)?models$ https://example.com/models/ redirect;

  # Strip the /v1 prefix and proxy to the simpleAI server
  location /v1/ {
    proxy_pass http://localhost:8080/;
    proxy_buffering off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
  }

  # Proxy the unprefixed API routes to the simpleAI server as well
  location ~ ^/(models|chat|completions|edits|embeddings)/ {
    proxy_pass http://localhost:8080/$1/;
    proxy_buffering off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
  }

  location /openapi.json {
    proxy_pass http://localhost:8080/openapi.json;
    proxy_buffering off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
  }

  # Everything else goes to the chatgpt-web dev server (with WebSocket support)
  location / {
    proxy_pass http://localhost:5173/;
    proxy_buffering off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
  }
}

And my Dockerfile, if anyone is interested:

FROM nikolaik/python-nodejs:latest

RUN apt-get update && apt-get install -y --no-install-recommends \
        git \
        nginx \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

COPY ./nginx.conf /etc/nginx/nginx.conf
COPY ./default /etc/nginx/sites-available/default

RUN chown -R pn:pn /var/lib/nginx && \
    chown -R pn:pn /var/log/nginx && \
    chown -R pn:pn /etc/nginx/conf.d
RUN touch /run/nginx.pid && chown -R pn:pn /run/nginx.pid

EXPOSE 9000
USER pn
WORKDIR /home/pn/app

COPY ./run.sh /home/pn

CMD ["/home/pn/run.sh"]

where "default" is the nginx config above, and run.sh is as follow:

#!/bin/sh

# Start nginx (it daemonizes by default)
nginx

# Install simpleAI and start its server in the background
cd /home/pn/app && pip install -e simpleAI/
/home/pn/.local/bin/simple_ai serve &

# Install and run the chatgpt-web dev server in the foreground
cd /home/pn/app/chatgpt-web && npm ci
npm run dev

Thanks for sharing this!