Zibbp / ganymede

Twitch VOD and Live Stream archiving platform. Includes a rendered and real-time chat for each archive.

Home Page: https://github.com/Zibbp/ganymede

Application error on Settings Page

NothingTV opened this issue

Hi there!
I just revived my ganymede instance and added the temporal container, and right after that I can't access the settings page anymore: I get the error Application error: a client-side exception has occurred (see the browser console for more information). In the browser console I see two TypeError: eW is null messages. I don't see anything else; is there a way to debug this further?

Regards

Any logs in the API container when you access the settings page? If you restart the API container and watch the logs, do you see any errors? And what build date gets printed?
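
For reference, one quick way to check that (a rough sketch, assuming the API service/container is named ganymede-api, as in a typical compose setup):

docker restart ganymede-api
# the build date is printed near the top of the startup output
docker logs -f ganymede-api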

Absolutely no logs in the API container about this; it happened right after I added the temporal container. It also just created another task for a live channel that was offline for about 1 min (now with another title, though), and instead of reusing or cancelling the old task, there are now two tasks for the same livestream.

I assume you're running the :latest tag for all the containers? Can you post a redacted docker-compose.yml file?
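
If you want to confirm which images the containers are actually running, and pull fresh ones, something along these lines should work (run from the directory containing the compose file):

# show the image each running container was started from
docker ps --format '{{.Names}}\t{{.Image}}'
# pull the newest :latest images and recreate the containers
docker compose pull
docker compose up -d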

Mostly yes, here is my docker-compose.yml:

services:
  ganymede-api:
    container_name: ganymede-api
    image: ghcr.io/zibbp/ganymede:latest
    restart: unless-stopped
    depends_on:
      - ganymede-temporal
    environment:
      - TZ=Europe/Berlin
      - DB_HOST=ganymede-db
      - DB_PORT=5432
      - DB_USER=ganymede
      - DB_PASS=XXXX
      - DB_NAME=ganymede
      - DB_SSL=disable
      - JWT_SECRET=XXXX
      - JWT_REFRESH_SECRET=XXXX
      - TWITCH_CLIENT_ID=XXXX
      - TWITCH_CLIENT_SECRET=XXXX
      - FRONTEND_HOST=https://front-end.XXXX
      - COOKIE_DOMAIN=.XXXX
      - NUXT_PUBLIC_API_URL=https://api.XXXX
      - NUXT_PUBLIC_CDN_URL=https://cdn.XXXX
      # OPTIONAL
      # - OAUTH_PROVIDER_URL=
      # - OAUTH_CLIENT_ID=
      # - OAUTH_CLIENT_SECRET=
      # - OAUTH_REDIRECT_URL=https://XXXX/api/v1/auth/oauth/callback # Points to the API service
      - TEMPORAL_URL=ganymede-temporal:7233
      # WORKER
      - MAX_CHAT_DOWNLOAD_EXECUTIONS=5
      - MAX_CHAT_RENDER_EXECUTIONS=3
      - MAX_VIDEO_DOWNLOAD_EXECUTIONS=5
      - MAX_VIDEO_CONVERT_EXECUTIONS=3
    volumes:
      - /storage/vods:/vods
      - /root/ganymede/logs:/logs
      - /root/ganymede/data:/data
      - /tmp/vods:/tmp
    ports:
      - 4800:4000
  ganymede-frontend:
    container_name: ganymede-frontend
    image: ghcr.io/zibbp/ganymede-frontend:latest
    restart: unless-stopped
    environment:
      - API_URL=https://api.XXXX
      - CDN_URL=https://cdn.XXXX
      - SHOW_SSO_LOGIN_BUTTON=false
      - FORCE_SSO_AUTH=false
      - REQUIRE_LOGIN=false
    ports:
      - 4801:3000
  ganymede-temporal:
    image: temporalio/auto-setup:latest
    container_name: ganymede-temporal
    depends_on:
      - ganymede-db
    environment:
      - DB=postgresql
      - DB_PORT=5432
      - POSTGRES_USER=ganymede
      - POSTGRES_PWD=XXXX
      - POSTGRES_SEEDS=ganymede-db
    ports:
      - 7233:7233
  ganymede-temporal-ui:
    image: temporalio/ui:latest
    container_name: ganymede-temporal-ui
    depends_on:
      - ganymede-temporal
    environment:
      - TEMPORAL_ADDRESS=ganymede-temporal:7233
    ports:
      - 8233:8080
  ganymede-db:
    container_name: ganymede-db
    image: postgres:14
    volumes:
      - ./ganymede-db:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=XXXX
      - POSTGRES_USER=ganymede
      - POSTGRES_DB=ganymede
    ports:
      - 4803:5432
  ganymede-nginx:
    container_name: ganymede-nginx
    image: nginx
    volumes:
      - /root/ganymede/nginx.conf:/etc/nginx/nginx.conf:ro
      - /storage/vods:/mnt/vods
    ports:
      - 4802:8080
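
Since the problems started when the temporal container was added, it may also be worth a quick sanity check that Temporal itself came up cleanly (service names taken from the compose file above; in this setup the Temporal web UI is published on host port 8233):

docker logs --tail 50 ganymede-temporal
docker logs --tail 50 ganymede-temporal-ui
# the Temporal web UI should then load at http://<host>:8233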

Looks right to me. If you bring everything down and temporarily rename the database directory (mv ganymede-db ganymede-db-bak), then bring everything back up (recreating the database), can you access the settings page then?
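
Roughly this sequence, run from the directory containing the compose file (the database directory path is taken from the volumes entry in the compose file above):

docker compose down
# keep the old data around in case you want to restore it later
mv ganymede-db ganymede-db-bak
docker compose up -d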

It also just created another task for a live channel which was offline for about 1 min (now with another title tho) and instead of using the old task or cancelling the old task, there are now two tasks for the same livestream.

Is the first task still downloading the video?

Sure, I'll try that once the current stream is completely archived.

Is the first task still downloading the video?

Yep, but it seems like both tasks are recording the same stream at the same time. That could be because of the earlier issue, right?

hmpf. The live stream couldn't be archived properly; I think my whole setup is broken. I just moved the db folder, created a new one, and re-created the db container, and now I simply get failed to load. I guess the instance simply broke after the migration and revival.
As a quick solution: is it possible to import the channels and the queue into a new instance? I ask about the queue because the latest recording is kind of important. I already saved it from the /tmp folder.
// EDIT: nvm, the vod is incomplete, I'll just re-create the whole instance
// EDIT2: removed
// EDIT3: with a completely fresh install I still get weird errors. I just changed my username and password, and I get errors on the settings page again, but this time there are logs from temporal and the db; here are both:

Shared externally due to their length: https://hastebin.skyra.pw/ezugegibed.swift
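
Regarding the earlier question about carrying the channels and queue over to a new instance: one possible approach is a plain Postgres dump and restore of the ganymede database (a rough sketch only; container, user, and database names are taken from the compose file above, and whether a restore into a newer schema works cleanly depends on the migrations):

# dump the database from the old db container
docker exec ganymede-db pg_dump -U ganymede ganymede > ganymede-backup.sql
# restore it into the db container of the fresh instance
docker exec -i ganymede-db psql -U ganymede -d ganymede < ganymede-backup.sql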

When you set up the fresh install, did you rename/remove the config file at data/config.json? The API container is hitting an error parsing that file. If not, I would try renaming/moving it and letting the API container recreate it.
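
For example (host path assumed from the volume mounts in the compose file above, where /root/ganymede/data on the host is mapped to /data in the container):

docker compose stop ganymede-api
# keep the old config around in case you want to copy settings back over
mv /root/ganymede/data/config.json /root/ganymede/data/config.json.bak
docker compose start ganymede-api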

I'm not sure what exactly fixed it, but I changed several things in the config and the docker network, and now it seems to work again with a fresh install. I can even see the dates now instead of "invalid date" :D
Since this seems to have been a "me" issue, I'm closing it now. Thank you for your help!