Zibbp / ganymede

Twitch VOD and Live Stream archiving platform. Includes a rendered and real-time chat for each archive.

Home Page: https://github.com/Zibbp/ganymede

Version 2 Releasing Soon

Zibbp opened this issue

I'm working on a version 2 release and hope to have it out by Christmas 🎄.

Features

New Queue System

I'm overhauling the queue system to use Temporal workflows. This lets me compose tasks into workflows for a simpler and more robust queue, with automated retries and hopefully fewer one-off queue issues. Using Temporal also introduces the idea of distributed worker nodes, so if you have multiple systems you can balance load between them. Distributed workers will not be available in the next release, but maybe in the future.
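
To make that concrete, here's a minimal sketch of what a workflow with automatic retries looks like with the Temporal Go SDK. The activity names and timeouts are hypothetical, not Ganymede's actual code:

package archive

import (
	"context"
	"time"

	"go.temporal.io/sdk/temporal"
	"go.temporal.io/sdk/workflow"
)

// Hypothetical activities; in a real worker these would shell out to the downloader tools.
func DownloadVideoActivity(ctx context.Context, vodID string) error { return nil }
func RenderChatActivity(ctx context.Context, vodID string) error    { return nil }

// ArchiveVodWorkflow chains the tasks; Temporal retries failed activities on its own.
func ArchiveVodWorkflow(ctx workflow.Context, vodID string) error {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: 6 * time.Hour,
		RetryPolicy: &temporal.RetryPolicy{
			InitialInterval: time.Minute,
			MaximumAttempts: 5,
		},
	})
	// Run the download first, then the chat render; a failure in either step is retried.
	if err := workflow.ExecuteActivity(ctx, DownloadVideoActivity, vodID).Get(ctx, nil); err != nil {
		return err
	}
	return workflow.ExecuteActivity(ctx, RenderChatActivity, vodID).Get(ctx, nil)
}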

The v2 release will include some changes to the docker-compose file to accommodate the Temporal server that will need to run. To not break existing installs, I'm planning on running an ephemeral Temporal server in the main API container until users can update their compose file. This ephemeral server will not persist data between restarts of the API container, so I advise everyone to update their compose file with the proper service (available when v2 is released).
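
For anyone who wants to prepare, the compose change will roughly amount to adding a Temporal service next to the existing containers and pointing the API at it. The sketch below is illustrative only; the image tag, credentials, and service names are placeholders, so use the compose file shipped with the v2 release for the real definition:

  temporal:
    image: temporalio/auto-setup:latest   # placeholder tag
    environment:
      - DB=postgresql               # backend type; exact value depends on the auto-setup version
      - DB_PORT=5432
      - POSTGRES_USER=ganymede      # placeholder credentials
      - POSTGRES_PWD=secret
      - POSTGRES_SEEDS=ganymede-db  # hostname of your Postgres service
    ports:
      - 7233:7233
  api:
    environment:
      - TEMPORAL_URL=temporal:7233  # this variable is confirmed later in this thread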

Frontend Updates

I gave the frontend's dark theme a fresh coat of paint. I've also updated all of the packages to their latest versions and fixed any bugs that came with that.

Other Features

Chapters/Categories

I'm hoping to get #317 into this release, utilizing the new queue system.

Planned features

Thumbnails

In a future release I'm planning on adding support for generating video thumbnails with FFmpeg, similar to what you see when scrubbing a YouTube video. Generating the thumbnails is slow and resource-intensive, so it will be opt-in. Initially it will probably be manual, requiring users to click a button to generate thumbnails for a video, to test the functionality out.
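
For the curious, one common approach with stock FFmpeg is to sample frames into a sprite sheet that a player can scrub through. This is just an illustration of the technique, not necessarily how it will be implemented here:

# One 160px-wide frame every 10 seconds, tiled into a single 10x10 sprite sheet.
ffmpeg -i vod.mp4 -vf "fps=1/10,scale=160:-1,tile=10x10" -frames:v 1 thumbnails.jpg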

Very exciting and a lovely Christmas surprise! It will be interesting to see how it runs on my Orange Pi 5 Plus and Raspberry Pi 5 that arrived this week. I imagine performance will likely be the same because it relies on other projects' tools for a lot of the functionality. I guess that's a good excuse to create a fresh instance. That being said, I've struggled to get the chats to download on the RPi 5 but not the OPi 5+, which is odd (I've tinkered a lot to try to get it working).

I imagine this has already been requested before, but any chance we could get progress bars, or at least logs, for the video move? It's less of an issue when using an NVMe drive, but when I've run Ganymede on slow boards via microSD cards, it can be a long process with the larger VODs.

Thanks for your work dude, and Merry Christmas!

Further update to the issues I mentioned on my Raspberry Pi 5: it seems to be fine on Ubuntu, but not Raspberry Pi OS. I was scratching my head, reinstalled the OS multiple times, and toyed with the compose file for hours (replicated three different working YAMLs from other machines I've tested, etc.). Very weird indeed. I wonder if it's a temporary bug in Raspberry Pi OS that will get patched later down the line, or if there is some dependency missing from a clean install.

Strange that it's not working on Raspberry Pi OS. Is that OS 32-bit? I know you can install Ubuntu on the newer Pis using 64-bit, so maybe that is why it's working. Any errors in the logs? You should be able to exec into the container and run the chat download manually to debug:

docker exec -it ganymede-api /bin/bash
TwitchDownloaderCLI chatdownload --id <id> --embed-images -o /tmp/debug.json

I imagine this has already been requested before, but any chance we could get progress bars, or at least logs, for the video move? It's less of an issue when using an NVMe drive, but when I've run Ganymede on slow boards via microSD cards, it can be a long process with the larger VODs.

I'm surprised downloads work fine on the SD card, that must be slow! I'll look into implementing a progress bar and transfer speed rate on the queue page.
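
One way to get progress for the move step in Go is to wrap the destination writer with a byte counter. A minimal sketch of the idea (hypothetical paths, not Ganymede's actual implementation):

package main

import (
	"fmt"
	"io"
	"os"
	"time"
)

// progressWriter wraps a writer, counting bytes and printing progress about once a second.
type progressWriter struct {
	w       io.Writer
	total   int64
	written int64
	last    time.Time
}

func (p *progressWriter) Write(b []byte) (int, error) {
	n, err := p.w.Write(b)
	p.written += int64(n)
	if time.Since(p.last) > time.Second {
		fmt.Printf("moved %d/%d bytes (%.1f%%)\n", p.written, p.total, float64(p.written)/float64(p.total)*100)
		p.last = time.Now()
	}
	return n, err
}

// moveWithProgress copies src to dst with progress output, then removes src.
// A copy-then-delete is needed because /tmp and /vods are usually different filesystems.
func moveWithProgress(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	info, err := in.Stat()
	if err != nil {
		return err
	}
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	if _, err := io.Copy(&progressWriter{w: out, total: info.Size(), last: time.Now()}, in); err != nil {
		return err
	}
	return os.Remove(src)
}

func main() {
	if err := moveWithProgress("/tmp/vod.mp4", "/vods/vod.mp4"); err != nil {
		fmt.Println("move failed:", err)
	}
}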

Yeah, very odd indeed! 64-bit Raspberry Pi OS; I tried the server image, the recommended one, and the full version, all with the same result: it would error out on the chat download but the video would process fine. I would have suspected it might be because RPi OS is Debian-based, but I have it running fine on other SBCs running DietPi, which is also Debian-based as far as I'm aware. I will say that all of these instances have been deployed via Docker; it would be interesting to see how it responds to a local deployment.

They can be a tad slow via an SD card, that's for sure! At one point I was running it on an Orange Pi Zero 3; a 10-hour VOD could take 22 hours to render the chat! It was fine for my use case at the time of just archiving for a friend who wanted their streams backed up. Pretty impressive that a $15 SBC can manage it at all, not gonna lie.

I did want to ask: is there a way to allow Ganymede to utilise more resources? I understand a fresh out-of-the-box deployment can run two jobs, but it would be nice to let it rip through a single job quickly when resources allow for it. On my Orange Pi 5 Plus it only ends up using between 35-45% of the CPU when running from an NVMe (a tad of a performance boost from the SD on the OPi Zero 3; a 2-hour VOD takes about 45-50 mins). I'm not sure if this is how Ganymede is coded to work, or if there is some fiddling required with my Docker setup and a local deployment would use all the resources possible.

One last suggestion, as I have been rambling: is it worth having a user-submitted performance spreadsheet where people can post what render speeds they manage on different hardware? I have a hunch people might be curious what experience to expect from different hardware, and it could guide their choice if they are buying something dedicated to this task (a Twitch friend saw my instance, thought it was super cool, and I had to do a bit of manual benchmarking for them to make their decision). Could be a Google Docs spreadsheet on the main page.

Let me know if you find out any more regarding the RPI OS issue. I can help out if you're able to manually run the chat download.

I did want to ask: is there a way to allow Ganymede to utilise more resources? I understand a fresh out-of-the-box deployment can run two jobs, but it would be nice to let it rip through a single job quickly when resources allow for it. On my Orange Pi 5 Plus it only ends up using between 35-45% of the CPU when running from an NVMe (a tad of a performance boost from the SD on the OPi Zero 3; a 2-hour VOD takes about 45-50 mins). I'm not sure if this is how Ganymede is coded to work, or if there is some fiddling required with my Docker setup and a local deployment would use all the resources possible.

I've tried to get the chat render to go faster but had no luck. I even tried using hardware acceleration while encoding and could not get it to go any faster. You may want to post an issue in the upstream repository https://github.com/lay295/TwitchDownloader asking if the ffmpeg process can be any faster (consume more resources if possible).

One last suggestion, as I have been rambling: is it worth having a user-submitted performance spreadsheet where people can post what render speeds they manage on different hardware? I have a hunch people might be curious what experience to expect from different hardware, and it could guide their choice if they are buying something dedicated to this task (a Twitch friend saw my instance, thought it was super cool, and I had to do a bit of manual benchmarking for them to make their decision). Could be a Google Docs spreadsheet on the main page.

Sounds great, feel free to post an issue or discussion to get it started and I can pin it for others to see and contribute.

Cheers dude, I'll let you know once I get a chance to do more testing. I've actually got the last RPi OS install on the first SSD I tested with lying about, so I will just need to dig it out and swap the drive or complete a manual install.

Interesting that you tried to fiddle about with it and didn't manage to speed it up yourself; I presume that was a local deployment too? I'll try to get the testing done first, and then potentially I'll look into posting an issue in the upstream repository.

Once I've got some benchmarks complete I'll go about it! Hopefully I can find a non-sub-walled VOD with a nicely timed length to use (such as 1, 2, or 4 hours, and not 3:23:26, for example).

Version 2 has been released. If you encounter any issues please open a new issue or post here.

Hi @Zibbp,

Thanks for the new release. I tried to upgrade to it but I'm facing an issue with the config file.
There is an error: "While parsing config: unexpected end of JSON input". At first I thought it was my config, so I deleted it and restarted the API container to let it generate a new one. But it still says the same thing again and again:

{"level":"info","time":"2023-12-25T17:54:28+01:00","message":"config file found at /data/config.json, loading"}
Version    : 
Git Hash   : 35cad5d66e771e6c79d38cf13757f4c28aa7b78a
Build Time : 2023-12-24T23:33:48Z
{"level":"info","time":"2023-12-25T17:54:28+01:00","message":"config file found at /data/config.json, loading"}
{"level":"debug","time":"2023-12-25T17:54:28+01:00","message":"config file loaded: /data/config.json"}
{"level":"debug","time":"2023-12-25T17:54:28+01:00","message":"authenticating with twitch"}
{"level":"debug","time":"2023-12-25T17:54:28+01:00","message":"config file loaded: /data/config.json"}
{"level":"panic","error":"While parsing config: unexpected end of JSON input","time":"2023-12-25T17:54:28+01:00","message":"error reading config file"}
panic: error reading config file

goroutine 1 [running]:
github.com/rs/zerolog/log.Panic.(*Logger).Panic.func1({0x188ddb0?, 0x0?})
	/go/pkg/mod/github.com/rs/zerolog@v1.31.0/log.go:396 +0x27
github.com/rs/zerolog.(*Event).msg(0xc0001c44d0, {0x188ddb0, 0x19})
	/go/pkg/mod/github.com/rs/zerolog@v1.31.0/event.go:158 +0x2c2
github.com/rs/zerolog.(*Event).Msg(...)
	/go/pkg/mod/github.com/rs/zerolog@v1.31.0/event.go:110
github.com/zibbp/ganymede/internal/config.NewConfig()
	/app/internal/config/config.go:148 +0x8ca
main.Run()
	/app/cmd/server/main.go:65 +0x25
main.main()
	/app/cmd/server/main.go:124 +0x4d4
{"level":"info","time":"2023-12-25T17:54:29+01:00","message":"authenticated with twitch"}
{"level":"debug","time":"2023-12-25T17:54:29+01:00","message":"setting up database connection"}

After this, it shows a few logs about the workers but nothing more, and the container is not ready.
Any idea what I could do to fix it?

Does the JSON file look normal? Are the permissions on the data/config.json file and directory correct so that the container can read and write it? If the API container is running as PUID=1000 and PGID=1000 then the data directory and config.json need to be owned by the 1000 user (you).
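
If you want to rule out ownership, something like this on the host (assuming the bind-mounted ./data directory from the standard compose file) will show and fix the numeric IDs:

# Show numeric owner IDs of the data directory and config file.
ls -ln data/ data/config.json

# Fix ownership to match PUID=1000/PGID=1000 if needed.
sudo chown -R 1000:1000 data/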

The JSON file looks normal. I generated it multiple times and the error always comes back.
The data directory and the .json file both have the correct permissions (1000:1000 with rw-r--r--), and the container has PUID=1000 and PGID=1000.

Current config.json that fails:

{
  "active_queue_items": 2,
  "archive": {
    "save_as_hls": false
  },
  "db_seeded": false,
  "debug": false,
  "live_check_interval_seconds": 300,
  "livestream": {
    "proxies": [
      {
        "header": "",
        "url": "https://eu.luminous.dev"
      },
      {
        "header": "x-donate-to:https://ttv.lol/donate",
        "url": "https://api.ttv.lol"
      }
    ],
    "proxy_enabled": false,
    "proxy_parameters": "%3Fplayer%3Dtwitchweb%26type%3Dany%26allow_source%3Dtrue%26allow_audio_only%3Dtrue%26allow_spectre%3Dfalse%26fast_bread%3Dtrue",
    "proxy_whitelist": [
      ""
    ]
  },
  "notifications": {
    "error_enabled": true,
    "error_template": "⚠️ Error: Queue ID {{queue_id}} for {{channel_display_name}} failed at task {{failed_task}}.",
    "error_webhook_url": "",
    "is_live_enabled": true,
    "is_live_template": "🔴 {{channel_display_name}} is live!",
    "is_live_webhook_url": "",
    "live_success_enabled": true,
    "live_success_template": "✅ Live Stream Archived: {{vod_title}} by {{channel_display_name}}.",
    "live_success_webhook_url": "",
    "video_success_enabled": true,
    "video_success_template": "✅ Video Archived: {{vod_title}} by {{channel_display_name}}.",
    "video_success_webhook_url": ""
  },
  "oauth_enabled": false,
  "parameters": {
    "chat_render": "-h 1440 -w 340 --framerate 30 --font Inter --font-size 13",
    "streamlink_live": "--twitch-low-latency,--twitch-disable-hosting",
    "twitch_token": "",
    "video_convert": "-c:v copy -c:a copy"
  },
  "registration_enabled": true,
  "storage_templates": {
    "file_template": "{{id}}",
    "folder_template": "{{date}}-{{id}}-{{type}}-{{uuid}}"
  },
  "video_check_interval_minutes": 180
}

Hmm, I was able to start the container with the provided JSON config. Can you add this to your API service in the docker-compose.yml file and bring the API container back up? It should print out what the container sees as the config.

      - TEMPORAL_URL=temporal:7233
    volumes:
      - ./data:/data
      - ./logs:/logs
      - ./vods:/vods
    ports:
      - 4800:4000
+   entrypoint: "cat /data/config.json"

As the JSON is valid, I'm thinking that it's trying to read an empty file.

I use Kubernetes, so I used command & args instead. Here's what I got from the API container logs:

{
  "active_queue_items": 2,
  "archive": {
    "save_as_hls": false
  },
  "db_seeded": false,
  "debug": false,
  "live_check_interval_seconds": 300,
  "livestream": {
    "proxies": [
      {
        "header": "",
        "url": "https://eu.luminous.dev"
      },
      {
        "header": "x-donate-to:https://ttv.lol/donate",
        "url": "https://api.ttv.lol"
      }
    ],
    "proxy_enabled": false,
    "proxy_parameters": "%3Fplayer%3Dtwitchweb%26type%3Dany%26allow_source%3Dtrue%26allow_audio_only%3Dtrue%26allow_spectre%3Dfalse%26fast_bread%3Dtrue",
    "proxy_whitelist": [
      ""
    ]
  },
  "notifications": {
    "error_enabled": true,
    "error_template": "⚠️ Error: Queue ID {{queue_id}} for {{channel_display_name}} failed at task {{failed_task}}.",
    "error_webhook_url": "",
    "is_live_enabled": true,
    "is_live_template": "🔴 {{channel_display_name}} is live!",
    "is_live_webhook_url": "",
    "live_success_enabled": true,
    "live_success_template": "✅ Live Stream Archived: {{vod_title}} by {{channel_display_name}}.",
    "live_success_webhook_url": "",
    "video_success_enabled": true,
    "video_success_template": "✅ Video Archived: {{vod_title}} by {{channel_display_name}}.",
    "video_success_webhook_url": ""
  },
  "oauth_enabled": false,
  "parameters": {
    "chat_render": "-h 1440 -w 340 --framerate 30 --font Inter --font-size 13",
    "streamlink_live": "--twitch-low-latency,--twitch-disable-hosting",
    "twitch_token": "",
    "video_convert": "-c:v copy -c:a copy"
  },
  "registration_enabled": true,
  "storage_templates": {
    "file_template": "{{id}}",
    "folder_template": "{{date}}-{{id}}-{{type}}-{{uuid}}"
  },
  "video_check_interval_minutes": 180
}

I went back to v1.4.3 for just the API server to see if I could get the same thing. I got an error related to the database:

{"level":"fatal","error":"error creating user: ent: constraint failed: pq: duplicate key value violates unique constraint \"users_username_key\"","time":"2023-12-25T19:14:44+01:00","message":"error seeding database"}

Turned "db_seeded" from false to true in the config, restarted the container and it was up and running.
Reverted back to v2.0.0 for the API server, same config error as before even with the working configuration from 1.4.3.

I'm starting to think it may be related to the package used for the config. Still not sure why it's only affecting you. I've created a new branch that drops the config package to an older version. This is slightly newer than the one running in v1.4.3, so I'm not 100% sure it will work.

Can you pull #333, build the image locally, and give it a try?

Can you pull #333, build the image locally, and give it a try?

Just tried with the image I built and unfortunately I still face the same config error.
I noticed something when doing more tests: it replaces the config file every time I start the container. For example, if I set db_seeded to true and change some other values without breaking the JSON format, the whole config is replaced by a new one when the container is started. The logs say it loaded the config multiple times but never say it created a new one, yet it was still replaced.

Can you share your Kubernetes manifests? It must be something in there then.

Can you share your Kubernetes manifests?

Here is my manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ganymede-app-api
  name: ganymede-app-api-deployment
  namespace: ganymede
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ganymede-app-api
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ganymede-app-api
    spec:
      containers:
        - name: ganymede-api
          env:
            - name: PUID
              value: "1000"
            - name: PGID
              value: "1000"
            - name: DB_HOST
              value: ganymede-db-svc
            - name: DB_NAME
              value: ganymede
            - name: DB_PASS
              valueFrom:
                secretKeyRef:
                  name: ganymede-app-secret
                  key: DB_PASS
            - name: DB_PORT
              value: "5432"
            - name: DB_SSL
              value: disable
            - name: DB_USER
              value: ganymede
            - name: FRONTEND_HOST
              value: https://vods.domain.com
            - name: JWT_REFRESH_SECRET
              valueFrom:
                secretKeyRef:
                  name: ganymede-app-secret
                  key: JWT_REFRESH_SECRET
            - name: JWT_SECRET
              valueFrom:
                secretKeyRef:
                  name: ganymede-app-secret
                  key: JWT_SECRET
            - name: TWITCH_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: ganymede-app-secret
                  key: TWITCH_CLIENT_ID
            - name: TWITCH_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: ganymede-app-secret
                  key: TWITCH_CLIENT_SECRET
            - name: TEMPORAL_URL
              value: "ganymede-temporal-svc:7233"
            - name: MAX_CHAT_DOWNLOAD_EXECUTIONS
              value: "5"
            - name: MAX_CHAT_RENDER_EXECUTIONS
              value: "3"
            - name: MAX_VIDEO_DOWNLOAD_EXECUTIONS
              value: "5"
            - name: MAX_VIDEO_CONVERT_EXECUTIONS
              value: "3"
            - name: TZ
              value: Europe/Amsterdam
          resources: {}
          image: ghcr.io/zibbp/ganymede:v2.0.0
          ports:
            - containerPort: 4000
          volumeMounts:
            - mountPath: /vods
              name: nfs-ganymede-app
              subPath: vods
            - mountPath: /logs
              name: nfs-ganymede-app
              subPath: logs
            - mountPath: /data
              name: nfs-ganymede-app
              subPath: data
            - mountPath: /tmp
              name: vods-tmp-dir
      volumes:
        - name: nfs-ganymede-app
          persistentVolumeClaim:
            claimName: nfs-ganymede-app
        - name: vods-tmp-dir
          emptyDir: {}

I use an NFS mount that keeps the data. The config file is also there and is always mounted correctly in the container (the data is also saved as-is when I shut down the container).

I've been able to start the API server by going into the container manually, turning db_seeded to true in the config, and then starting the API with /opt/app/ganymede-api. It then works and I can access my VODs from the frontend.

{"level":"info","time":"2023-12-26T20:25:38+01:00","message":"config file found at /data/config.json, loading"}
{"level":"debug","time":"2023-12-26T20:25:38+01:00","message":"config file loaded: /data/config.json"}
2023/12/26 20:25:38 INFO  No logger configured for temporal client. Created default one.
{"level":"info","time":"2023-12-26T20:25:38+01:00","message":"Connected to temporal at ganymede-temporal-svc:7233"}

   ____    __
  / __/___/ /  ___
 / _// __/ _ \/ _ \
/___/\__/_//_/\___/ v4.11.3
High performance, minimalist Go web framework
https://echo.labstack.com
____________________________________O/_______
                                    O\
⇨ http server started on [::]:4000
{"level":"info","time":"2023-12-26T20:25:39+01:00","message":"authenticated with twitch"}
{"level":"info","time":"2023-12-26T20:25:43+01:00","message":"setting up check watched channel videos schedule"}
{"level":"info","time":"2023-12-26T20:25:43+01:00","message":"running check watched channel videos schedule"}

I'm not sure it's coming from my setup. The issue seems to happen when it initializes: it loads the config but fails, and it also replaces the config file even though it loads it.

I've finally been able to fix my issue. I commented out this part of the code to stop the refreshConfig function from doing anything to my configuration (which was already newly created anyway). Now I can start my container 100% of the time. I went back to v2.0.0 to make sure that was the real fix and not something else, and I got the config error again. Switched back to my image and the error was gone.

} else {
		log.Info().Msgf("config file found at %s, loading", configPath)
		err := viper.ReadInConfig()
		// Rewrite config file to apply new variables and remove old values
+		//refreshConfig(configPath)
		log.Debug().Msgf("config file loaded: %s", viper.ConfigFileUsed())
		if err != nil {
			log.Panic().Err(err).Msg("error reading config file")
		}
	}

Quick question: I had a queue built up since it got stuck while I was sick. After upgrading I still see the queue of 70-some items, but I don't see any workflows running. Is there a way to import the old queue to have it process these with the new Temporal workflow?

Quick question: I had a queue built up since it got stuck while I was sick. After upgrading I still see the queue of 70-some items, but I don't see any workflows running. Is there a way to import the old queue to have it process these with the new Temporal workflow?

Unfortunately not, you will need to re-archive the VODs to get them to process using the new system. If you have the list of IDs you can create a simple bash loop to archive the VODs; one possible loop is sketched after the example below.

curl --request POST \
  --url http://IP:4800/api/v1/archive/vod \
  --header 'Content-Type: application/json' \
  --cookie 'access-token=<ACCESS TOKEN COOKIE VALUE>' \
  --data '{
	"vod_id": "twitch_vod_id",
	"quality": "best",
	"chat": true,
	"render_chat": true
}'
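
One possible loop, assuming a hypothetical ids.txt file with one Twitch VOD ID per line:

# Re-archive every VOD listed in ids.txt via the API.
while read -r id; do
  curl --request POST \
    --url http://IP:4800/api/v1/archive/vod \
    --header 'Content-Type: application/json' \
    --cookie 'access-token=<ACCESS TOKEN COOKIE VALUE>' \
    --data "{\"vod_id\": \"$id\", \"quality\": \"best\", \"chat\": true, \"render_chat\": true}"
done < ids.txt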

I've finally been able to fix my issue. I commented out this part of the code to stop the refreshConfig function from doing anything to my configuration (which was already newly created anyway). Now I can start my container 100% of the time. I went back to v2.0.0 to make sure that was the real fix and not something else, and I got the config error again. Switched back to my image and the error was gone.

Not sure how commenting out refreshConfig allows it to boot properly. The original error stated it had issues loading the config, which runs before the refreshConfig code. As this is all over NFS, maybe there's a weird issue there? I've added some retry logic to the config loading just to see if it helps, if you want to build an image from #334.
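
For context, the retry logic is along these lines (a sketch reconstructed from the "attempt 1/10" and "retrying in 1 second" log messages further down, not the exact code in #334):

package config

import (
	"time"

	"github.com/rs/zerolog/log"
	"github.com/spf13/viper"
)

// readConfigWithRetry retries viper.ReadInConfig a handful of times before
// panicking, to ride out transient empty reads (e.g. over NFS).
func readConfigWithRetry() {
	var err error
	for attempt := 1; attempt <= 10; attempt++ {
		if err = viper.ReadInConfig(); err == nil {
			return
		}
		log.Error().Err(err).Msgf("error loading config (attempt %d/10)", attempt)
		log.Info().Msg("retrying in 1 second")
		time.Sleep(time.Second)
	}
	log.Panic().Err(err).Msg("error reading config file")
}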

Not sure how commenting out refreshConfig allows it to boot properly. The original error stated it had issues loading the config, which runs before the refreshConfig code.

For some reason, if you check the logs I provided in my first post, it loads the config twice based on the "config file loaded" log. In the code, the debug message is placed after the function, and since my config was being edited every time the container started, I commented it out to see if it would change anything. Not sure what runs it twice, but commenting out the function did fix my issue.

I've added some retry logic to the config loading just to see if it helps, if you want to build an image from #334.

It works for me:

{"level":"info","time":"2023-12-27T09:50:49+01:00","message":"Starting worker with config: {MAX_CHAT_DOWNLOAD_EXECUTIONS:3 MAX_CHAT_RENDER_EXECUTIONS:3 MAX_VIDEO_DOWNLOAD_EXECUTIONS:3 MAX_VIDEO_CONVERT_EXECUTIONS:3 TEMPORAL_URL:ganymede-temporal-svc:7233}"}
{"level":"info","time":"2023-12-27T09:50:49+01:00","message":"config file found at /data/config.json, loading"}
{"level":"info","time":"2023-12-27T09:50:49+01:00","message":"config file loaded: /data/config.json"}
Version    : 
Git Hash   : b7cf72469b4df6b71401fe6b9ff3b4d61f993840
Build Time : 2023-12-27T08:38:32Z
{"level":"info","time":"2023-12-27T09:50:49+01:00","message":"config file found at /data/config.json, loading"}
{"level":"error","error":"While parsing config: unexpected end of JSON input","time":"2023-12-27T09:50:49+01:00","message":"error loading config (attempt 1/10)"}
{"level":"info","time":"2023-12-27T09:50:49+01:00","message":"retrying in 1 second"}
{"level":"debug","time":"2023-12-27T09:50:49+01:00","message":"config file loaded: /data/config.json"}
{"level":"debug","time":"2023-12-27T09:50:49+01:00","message":"authenticating with twitch"}
{"level":"info","time":"2023-12-27T09:50:50+01:00","message":"config file loaded: /data/config.json"}
{"level":"info","time":"2023-12-27T09:50:50+01:00","message":"authenticated with twitch"}
{"level":"debug","time":"2023-12-27T09:50:50+01:00","message":"setting up database connection"}
{"level":"info","Namespace":"default","TaskQueue":"archive","WorkerID":"ganymede-app-api-deploy-5557cf8644-2gf62","time":"2023-12-27T09:50:50+01:00","message":"Started Worker"}
{"level":"debug","time":"2023-12-27T09:50:50+01:00","message":"config file loaded: /data/config.json"}
2023/12/27 09:50:50 INFO  No logger configured for temporal client. Created default one.
{"level":"info","time":"2023-12-27T09:50:50+01:00","message":"Connected to temporal at ganymede-temporal-svc:7233"}

   ____    __
  / __/___/ /  ___
 / _// __/ _ \/ _ \
/___/\__/_//_/\___/ v4.11.3
High performance, minimalist Go web framework
https://echo.labstack.com
____________________________________O/_______
                                    O\
⇨ http server started on [::]:4000
{"level":"info","Namespace":"default","TaskQueue":"chat-download","WorkerID":"ganymede-app-api-deploy-5557cf8644-2gf62","time":"2023-12-27T09:50:50+01:00","message":"Started Worker"}
{"level":"info","Namespace":"default","TaskQueue":"chat-render","WorkerID":"ganymede-app-api-deploy-5557cf8644-2gf62","time":"2023-12-27T09:50:50+01:00","message":"Started Worker"}
{"level":"info","time":"2023-12-27T09:50:51+01:00","message":"authenticated with twitch"}
{"level":"info","Namespace":"default","TaskQueue":"video-download","WorkerID":"ganymede-app-api-deploy-5557cf8644-2gf62","time":"2023-12-27T09:50:51+01:00","message":"Started Worker"}
{"level":"info","Namespace":"default","TaskQueue":"video-convert","WorkerID":"ganymede-app-api-deploy-5557cf8644-2gf62","time":"2023-12-27T09:50:51+01:00","message":"Started Worker"}
{"level":"info","time":"2023-12-27T09:50:55+01:00","message":"setting up check watched channel videos schedule"}
{"level":"info","time":"2023-12-27T09:50:55+01:00","message":"running check watched channel videos schedule"}

It does error out (every single time I start the container), but the retry allows the API server to start. Timing issue maybe?
The same logic needs to be used for the config creation. I tested without having a config.json file and got the same error (it creates the file even though it says it can't find it):

{"level":"info","time":"2023-12-27T10:15:19+01:00","message":"Starting worker with config: {MAX_CHAT_DOWNLOAD_EXECUTIONS:3 MAX_CHAT_RENDER_EXECUTIONS:3 MAX_VIDEO_DOWNLOAD_EXECUTIONS:3 MAX_VIDEO_CONVERT_EXECUTIONS:3 TEMPORAL_URL:ganymede-temporal-svc:7233}"}
{"level":"info","time":"2023-12-27T10:15:19+01:00","message":"config file not found at /data/config.json, creating new one"}
Version    : 
Git Hash   : b7cf72469b4df6b71401fe6b9ff3b4d61f993840
Build Time : 2023-12-27T08:38:32Z
{"level":"info","time":"2023-12-27T10:15:19+01:00","message":"config file not found at /data/config.json, creating new one"}
{"level":"panic","error":"open /data/config.json: file exists","time":"2023-12-27T10:15:19+01:00","message":"error creating config file"}
panic: error creating config file

goroutine 1 [running]:
github.com/rs/zerolog/log.Panic.(*Logger).Panic.func1({0x188ff21?, 0x0?})
	/go/pkg/mod/github.com/rs/zerolog@v1.31.0/log.go:396 +0x27
github.com/rs/zerolog.(*Event).msg(0xc00031f0a0, {0x188ff21, 0x1a})
	/go/pkg/mod/github.com/rs/zerolog@v1.31.0/event.go:158 +0x2c2
github.com/rs/zerolog.(*Event).Msg(...)
	/go/pkg/mod/github.com/rs/zerolog@v1.31.0/event.go:110
github.com/zibbp/ganymede/internal/config.NewConfig()
	/app/internal/config/config.go:140 +0x81e
main.Run()
	/app/cmd/server/main.go:65 +0x25
main.main()
	/app/cmd/server/main.go:124 +0x4d4
{"level":"debug","time":"2023-12-27T10:15:19+01:00","message":"authenticating with twitch"}
{"level":"info","time":"2023-12-27T10:15:20+01:00","message":"authenticated with twitch"}
{"level":"debug","time":"2023-12-27T10:15:20+01:00","message":"setting up database connection"}

Restarting the container makes it work, but I get the database error instead (I have to set db_seeded to true manually):

{"level":"info","time":"2023-12-27T10:16:12+01:00","message":"Starting worker with config: {MAX_CHAT_DOWNLOAD_EXECUTIONS:3 MAX_CHAT_RENDER_EXECUTIONS:3 MAX_VIDEO_DOWNLOAD_EXECUTIONS:3 MAX_VIDEO_CONVERT_EXECUTIONS:3 TEMPORAL_URL:ganymede-temporal-svc:7233}"}
{"level":"info","time":"2023-12-27T10:16:12+01:00","message":"config file found at /data/config.json, loading"}
{"level":"info","time":"2023-12-27T10:16:12+01:00","message":"config file loaded: /data/config.json"}
Version    : 
Git Hash   : b7cf72469b4df6b71401fe6b9ff3b4d61f993840
Build Time : 2023-12-27T08:38:32Z
{"level":"info","time":"2023-12-27T10:16:12+01:00","message":"config file found at /data/config.json, loading"}
{"level":"error","error":"While parsing config: unexpected end of JSON input","time":"2023-12-27T10:16:12+01:00","message":"error loading config (attempt 1/10)"}
{"level":"info","time":"2023-12-27T10:16:12+01:00","message":"retrying in 1 second"}
{"level":"debug","time":"2023-12-27T10:16:12+01:00","message":"config file loaded: /data/config.json"}
{"level":"debug","time":"2023-12-27T10:16:12+01:00","message":"authenticating with twitch"}
{"level":"info","time":"2023-12-27T10:16:13+01:00","message":"config file loaded: /data/config.json"}
{"level":"debug","time":"2023-12-27T10:16:13+01:00","message":"config file loaded: /data/config.json"}
{"level":"info","time":"2023-12-27T10:16:13+01:00","message":"authenticated with twitch"}
{"level":"debug","time":"2023-12-27T10:16:13+01:00","message":"setting up database connection"}
{"level":"info","Namespace":"default","TaskQueue":"chat-download","WorkerID":"ganymede-app-api-deploy-7b745c88b5-vnp9k","time":"2023-12-27T10:16:13+01:00","message":"Started Worker"}
{"level":"info","Namespace":"default","TaskQueue":"chat-render","WorkerID":"ganymede-app-api-deploy-7b745c88b5-vnp9k","time":"2023-12-27T10:16:14+01:00","message":"Started Worker"}
{"level":"info","Namespace":"default","TaskQueue":"video-download","WorkerID":"ganymede-app-api-deploy-7b745c88b5-vnp9k","time":"2023-12-27T10:16:14+01:00","message":"Started Worker"}
{"level":"info","Namespace":"default","TaskQueue":"video-convert","WorkerID":"ganymede-app-api-deploy-7b745c88b5-vnp9k","time":"2023-12-27T10:16:14+01:00","message":"Started Worker"}
{"level":"fatal","error":"error creating user: ent: constraint failed: pq: duplicate key value violates unique constraint \"users_username_key\"","time":"2023-12-27T10:16:14+01:00","message":"error seeding database"}
{"level":"info","Namespace":"default","TaskQueue":"archive","WorkerID":"ganymede-app-api-deploy-7b745c88b5-vnp9k","time":"2023-12-27T10:16:14+01:00","message":"Started Worker"}

The config being loaded twice is expected: the second load is the worker process loading the config. I'm assuming this is causing some issues, as the API and worker are trying to load and modify the config file at the same time. I'm working on an update to the branch now.

I've pushed some more changes to #334 if you want to give it another try. I've added a delay before starting the worker process to hopefully resolve the config file conflicts. I've also updated the worker to not refresh the config, which should fix db_seeded being reset.

I've pushed some more changes to #334 if you want to give it another try. I've added a delay before starting the worker process to hopefully resolve the config file conflicts. I've also updated the worker to not refresh the config, which should fix db_seeded being reset.

The config error is completely gone now, which is good. Tested both with and without a config and it works. The only thing remaining is the DB error when creating a new config. It should set db_seeded to true if it detects that the user is already present in the database. That would fix the other GitHub issue I've seen too.

Version    : 
Git Hash   : 29e8ded192b024d87fb5fead69921ce60d672388
Build Time : 2023-12-27T17:24:17Z
{"level":"info","time":"2023-12-27T18:40:04+01:00","message":"config file not found at /data/config.json, creating new one"}
{"level":"info","time":"2023-12-27T18:40:05+01:00","message":"config file created"}
{"level":"fatal","error":"error creating user: ent: constraint failed: pq: duplicate key value violates unique constraint \"users_username_key\"","time":"2023-12-27T18:40:05+01:00","message":"error seeding database"}
{"level":"info","time":"2023-12-27T18:40:09+01:00","message":"Starting worker with config: {MAX_CHAT_DOWNLOAD_EXECUTIONS:3 MAX_CHAT_RENDER_EXECUTIONS:3 MAX_VIDEO_DOWNLOAD_EXECUTIONS:3 MAX_VIDEO_CONVERT_EXECUTIONS:3 TEMPORAL_URL:ganymede-temporal-svc:7233}"}
{"level":"info","time":"2023-12-27T18:40:09+01:00","message":"config file found at /data/config.json, loading"}
{"level":"info","time":"2023-12-27T18:40:09+01:00","message":"config file loaded: /data/config.json"}
{"level":"debug","time":"2023-12-27T18:40:09+01:00","message":"config file loaded: /data/config.json"}
{"level":"debug","time":"2023-12-27T18:40:09+01:00","message":"authenticating with twitch"}
{"level":"info","time":"2023-12-27T18:40:10+01:00","message":"authenticated with twitch"}
{"level":"debug","time":"2023-12-27T18:40:10+01:00","message":"setting up database connection"}
{"level":"info","Namespace":"default","TaskQueue":"archive","WorkerID":"ganymede-app-api-deploy-864bdbf998-s5d6q","time":"2023-12-27T18:40:10+01:00","message":"Started Worker"}
{"level":"info","Namespace":"default","TaskQueue":"chat-download","WorkerID":"ganymede-app-api-deploy-864bdbf998-s5d6q","time":"2023-12-27T18:40:11+01:00","message":"Started Worker"}
{"level":"info","Namespace":"default","TaskQueue":"chat-render","WorkerID":"ganymede-app-api-deploy-864bdbf998-s5d6q","time":"2023-12-27T18:40:11+01:00","message":"Started Worker"}
{"level":"info","Namespace":"default","TaskQueue":"video-download","WorkerID":"ganymede-app-api-deploy-864bdbf998-s5d6q","time":"2023-12-27T18:40:11+01:00","message":"Started Worker"}
{"level":"info","Namespace":"default","TaskQueue":"video-convert","WorkerID":"ganymede-app-api-deploy-864bdbf998-s5d6q","time":"2023-12-27T18:40:12+01:00","message":"Started Worker"}

Pushed a new commit to the branch that checks whether the DB should be seeded by looking for existing users. I've also removed the db_seeded config option, as it will no longer be needed. Testing this on my side now.
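
The check presumably boils down to something like this (a sketch; seedDatabase is a hypothetical helper, and the ent client comes from Ganymede's generated code):

package database

import (
	"context"

	"github.com/zibbp/ganymede/ent"
)

// maybeSeed seeds the default user only when the users table is empty,
// avoiding the duplicate-key error on an already-seeded database.
func maybeSeed(ctx context.Context, client *ent.Client) error {
	count, err := client.User.Query().Count(ctx)
	if err != nil {
		return err
	}
	if count == 0 {
		return seedDatabase(ctx, client)
	}
	return nil
}

// seedDatabase is a stand-in for the real seeding routine.
func seedDatabase(ctx context.Context, client *ent.Client) error {
	// ... create the default admin user here ...
	return nil
}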

All good for me it seems. All the issues are now fixed. Thank you!

Sweet, thanks for helping troubleshoot! I'll publish a new release shortly.

Not entirely sure if this is connected, but the 'to not break existing installs' part doesn't seem to be working, at least not for me.
I use Watchtower to keep things updated to :latest, and for the past week or so ganymede-api has crashed on launch as follows:

Running on a Synology NAS. I haven't changed anything in the config/docker-compose for months.

-------------------------------------
User uid:    1024
User gid:    101
-------------------------------------
Version    : 
Git Hash   : 2a9ebe83aad91771a4e9e46da43ecc8e2ed54c1b
Build Time : 2023-12-27T21:07:21Z
2023-12-30T21:23:53Z INF config file found at /data/config.json, loading
2023-12-30T21:23:53Z INF config file loaded: /data/config.json
2023-12-30T21:23:53Z DBG config file loaded: /data/config.json
2023/12/30 21:23:53 INFO  No logger configured for temporal client. Created default one.
2023-12-30T21:23:53Z panic Unable to create client: failed reaching server: last connection error: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:7233: connect: connection refused"
panic: Unable to create client: failed reaching server: last connection error: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:7233: connect: connection refused"
goroutine 1 [running]:
github.com/rs/zerolog/log.Panic.(*Logger).Panic.func1({0xc0005ee180?, 0x0?})
	/go/pkg/mod/github.com/rs/zerolog@v1.31.0/log.go:396 +0x27
github.com/rs/zerolog.(*Event).msg(0xc0003c2850, {0xc0005ee180, 0xb7})
	/go/pkg/mod/github.com/rs/zerolog@v1.31.0/event.go:158 +0x2c2
github.com/rs/zerolog.(*Event).Msgf(0xc0003c2850, {0x1891ce4?, 0x0?}, {0xc0006499b0?, 0x0?, 0x0?})
	/go/pkg/mod/github.com/rs/zerolog@v1.31.0/event.go:131 +0x46
github.com/zibbp/ganymede/internal/temporal.InitializeTemporalClient()
	/app/internal/temporal/client.go:26 +0x145
main.Run()
	/app/cmd/server/main.go:85 +0xd9
main.main()
	/app/cmd/server/main.go:124 +0x4d4
2023-12-30T21:23:58Z INF Starting worker with config: {MAX_CHAT_DOWNLOAD_EXECUTIONS:5 MAX_CHAT_RENDER_EXECUTIONS:3 MAX_VIDEO_DOWNLOAD_EXECUTIONS:5 MAX_VIDEO_CONVERT_EXECUTIONS:3 TEMPORAL_URL:temporal:7233}
2023-12-30T21:23:58Z INF config file found at /data/config.json, loading
2023-12-30T21:23:58Z INF config file loaded: /data/config.json
2023-12-30T21:23:58Z DBG config file loaded: /data/config.json
2023-12-30T21:23:58Z fatal Unable to create client: failed reaching server: last connection error: connection error: desc = "transport: Error while dialing: dial tcp: lookup temporal on 127.0.0.11:53: no such host"
usermod: no changes

Edit: Adding the new sections to the docker-compose sorted things out, but the fallback didn't seem to work, for me at least.

Yeah, I ended up not bundling it in the API container to make it easier on myself and keep everyone running the same setup.

I have a question with this update: how can I restart part of the process for an item in the queue, since the "restart" button is not there anymore? I had to restart Ganymede at some point, and some of the items in the queue no longer have a workflow; some of them still need to convert the video, others need to move the converted video, and some need to start the download.

I have a question with this update: how can I restart part of the process for an item in the queue, since the "restart" button is not there anymore? I had to restart Ganymede at some point, and some of the items in the queue no longer have a workflow; some of them still need to convert the video, others need to move the converted video, and some need to start the download.

You can restart individual tasks, or the "parent" task of that individual workflow. To accomplish this, find the workflow on the Workflows page. As these haven't completed yet, they might still be on the "Active" page; otherwise check the "Closed" page.
Then click it to see more information and click the restart button at the top.

The UI/UX of the workflows isn't great and I'm still figuring out how to improve it. You'll also only be able to see workflows from the past day; I'm still working on fixing that.

I have a question with this update: how can I restart part of the process for an item in the queue, since the "restart" button is not there anymore? I had to restart Ganymede at some point, and some of the items in the queue no longer have a workflow; some of them still need to convert the video, others need to move the converted video, and some need to start the download.

You can restart individual tasks, or the "parent" task of that individual workflow. To accomplish this, find the workflow on the Workflows page. As these haven't completed yet, they might still be on the "Active" page; otherwise check the "Closed" page. Then click it to see more information and click the restart button at the top.

The UI/UX of the workflows isn't great and I'm still figuring out how to improve it. You'll also only be able to see workflows from the past day; I'm still working on fixing that.

That's what I guessed, but the items I still have in the queue no longer have a workflow, for an unknown reason.
Here I have 15 items remaining in the queue, but only the workflow of one item, which is already finished in this case:

That's because the queue items are over a day old, and workflow records currently get removed after a day. I've got a fix ready to keep workflows around significantly longer in the Temporal image, but I need to make some changes to the API and frontend. You'll need to re-archive the queue items.

Alright, I will do the remaining tasks by hand and manually set the ones with a completed download as completed (I don't want to redownload 20+ GB per VOD; thank you for logging the executed ffmpeg commands, by the way).
For the rest, I will re-archive.