jorge-menjivar / unsaged

Open source chat kit engineered for seamless interaction with AI models.

Home Page: https://unsaged.com


[BUG] Section personalization not found

sebiweise opened this issue · comments

Describe the bug
When I log in, I get stuck on the loading screen "Loading Conversations..."
The console shows the error "Section personalization not found".

Screenshots
[screenshot attached]

Desktop (please complete the following information):

  • OS: Windows
  • Browser: Chrome
  • Browser version: latest

The local storage debug output only shows nextauth.message: "{"event":"session","data":{"trigger":"getSession"},"timestamp":1699362676}"

Could this be related to setSavedSettings in home.tsx?
It is imported on line 10 but never used.

That error should actually just be a warning and not affect the app. It basically checks whether you have set a preference for the light or dark theme, and should default to dark if nothing is set.
Try setting NEXT_PUBLIC_DEBUG_MODE=true in the .env file and check the console to see whether all the env variables are being read correctly, client and server side. Also check the browser network logs; you should see an error there if it is not connecting to Supabase.
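For context, here is a minimal sketch of the kind of fallback described above; the type and setting names are illustrative only, not unsaged's actual code:

```typescript
// Illustrative sketch (not unsaged's actual implementation): fall back to
// the dark theme when no "personalization" section has been saved yet,
// instead of treating the missing section as a hard error.
type Theme = 'light' | 'dark';

interface SavedSettings {
  sections?: {
    personalization?: { theme?: Theme };
  };
}

function resolveTheme(settings: SavedSettings | null): Theme {
  const saved = settings?.sections?.personalization?.theme;
  if (!saved) {
    // Warn (not error) and default to dark when the section is missing.
    console.warn('Section personalization not found, defaulting to dark');
    return 'dark';
  }
  return saved;
}
```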

Client-side env vars are looking good.

The request to /api/models results in an error:
[POST] /api/models reason=EDGE_FUNCTION_INVOCATION_TIMEOUT, status=504, user_error=true

We need to implement a timeout on our side to prevent the whole model fetch from crashing. I will make a note of that. In the meantime, I know you mentioned in another issue that you are using Ollama. Could it be that it gets stuck trying to connect to the Ollama server?
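As a rough illustration of what that timeout could look like (a sketch only, assuming the standard Ollama /api/tags listing endpoint and a placeholder timeout value, not the actual unsaged code):

```typescript
// Sketch: abort a slow model-list fetch so one unreachable provider
// cannot hang /api/models until the edge function itself times out.
async function fetchWithTimeout(url: string, timeoutMs = 5000): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(url, { signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}

// Example: return an empty list instead of crashing when the Ollama host
// does not answer in time.
async function getOllamaModels(host: string): Promise<string[]> {
  try {
    const res = await fetchWithTimeout(`${host}/api/tags`, 5000);
    if (!res.ok) return [];
    const data = await res.json();
    return (data.models ?? []).map((m: { name: string }) => m.name);
  } catch {
    return [];
  }
}
```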

Yes, I just checked that: my Ollama server is not reachable, the fetch call takes too long, and therefore the API call to /api/models results in a timeout.
But restarting my Ollama server and making sure it is available didn't fix the timeout.

Removing the Ollama host variable "resolved" the problem, so I think fetching the Ollama models isn't working 100%.

Does it work okay if you deploy locally?

I will try running it on my local PC and connecting to my external Ollama server. Maybe Vercel requires HTTPS to connect to external resources, or maybe Ollama's default port 11434 isn't allowed on Vercel's infrastructure.

Keep me updated. Thanks!

[screenshot attached]
Locally it is working fine, so I think there is some limitation on Vercel. I will try configuring a reverse proxy for my Ollama server to expose the API over HTTPS; maybe that will help on Vercel.

Yes, Vercel isn't working with port 11434 for me. When I reverse-proxy my Ollama API to https://ollama.mydomain.com, it works fine on Vercel.