bkeepers / webhook-proxy

Home Page: https://smee.io

Server deployment options

tcbyrd opened this issue · comments

👋 @bkeepers @JasonEtco I really like this idea and think it's pretty maintainable from an application security perspective, but I wanted to open this issue to talk about how we might deploy it, given that it requires a server that keeps a long-running connection open. To summarize our conversation last night in Slack:

  • Heroku has a 55 second timeout, after which the client needs to reconnect (see the reconnect sketch after this list)
  • Lambda/API Gateway has a 30 second limit on HTTP connections, and even within that 30 second window the event emitter didn't respond to any clients
  • It would be nice if the service had a way to persist received payloads so the client could easily replay the last n events
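To make the reconnection point concrete, a minimal client-side sketch could look like the following. It assumes an SSE endpoint like the one smee exposes; the channel URL, backoff values, and the use of the `eventsource` package are illustrative, not part of the current implementation.

```typescript
// Minimal reconnect loop for the proxy's SSE stream (illustrative only).
// A platform timeout such as Heroku's 55s cut surfaces as an "error"
// event, so we close the source and reopen it with a capped backoff.
import EventSource from "eventsource"; // npm "eventsource"; browsers have EventSource built in

const CHANNEL_URL = "https://smee.example.com/abc123"; // hypothetical channel URL

function connect(retryMs = 1_000): void {
  const source = new EventSource(CHANNEL_URL);

  source.onmessage = (msg: { data: string }) => {
    const payload = JSON.parse(msg.data);
    console.log("received event", payload);
    retryMs = 1_000; // a successful message resets the backoff
  };

  source.onerror = () => {
    source.close();
    // Wait, then reconnect with exponential backoff capped at 30s.
    setTimeout(() => connect(Math.min(retryMs * 2, 30_000)), retryMs);
  };
}

connect();
```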

All this points to the need for a real server, but obviously we don't want to lose the benefit of easy deployment. I don't know what others have experience with, but a few options come to mind (with the full awareness that I may be prematurely optimizing here):

  • Dokku installed on a VM: https://github.com/dokku/dokku
    • Pros: Very Heroku-like experience, based on Docker under the hood
    • Cons: Not sure how well this scales across multiple machines
  • Set up a small Kubernetes cluster (maybe just minikube on a single VM)
    • Pros: Still package as a container, but potentially easier to manage and scale in the long term
    • Cons: More initial setup, but at least I think it would be fun :)
  • Others? Maybe just a simple script in CI is all we need initially

For persistence, we can probably just use an in-memory cache like node-cache-manager without a store initially. Maybe we add Redis if we feel it's necessary.
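As a rough illustration of what that replay persistence could look like, here is a hypothetical in-memory buffer; the `ReplayBuffer` class, its field names, and the per-channel limit are invented for the sketch, and node-cache-manager's memory store (or Redis later) would play the same role.

```typescript
// Hypothetical in-memory replay buffer: keeps the last `limit` payloads per
// channel so a reconnecting client can ask for recent events. A Redis-backed
// store could replace the Map later without changing the interface.
interface WebhookEvent {
  channel: string;
  headers: Record<string, string>;
  body: unknown;
  receivedAt: number;
}

class ReplayBuffer {
  private events = new Map<string, WebhookEvent[]>();

  constructor(private limit = 50) {}

  push(event: WebhookEvent): void {
    const list = this.events.get(event.channel) ?? [];
    list.push(event);
    // Drop the oldest entries once we exceed the per-channel limit.
    if (list.length > this.limit) list.splice(0, list.length - this.limit);
    this.events.set(event.channel, list);
  }

  // Return the last `n` events for a channel, oldest first.
  replay(channel: string, n = 10): WebhookEvent[] {
    return (this.events.get(channel) ?? []).slice(-n);
  }
}
```

A client reconnecting after a timeout could then call `replay(channel)` to catch up before resuming the live stream.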

I'm happy to work on any of these options and help maintain it. Just wanted to get my initial thoughts down after working with it.

I'm glad you're thinking about these things, @tcbyrd. Somebody should, and deployment is usually way over my head.

For persistence, we can probably just use an in-memory cache

As it is now, we've eliminated all of the persistence things, even in-memory. The thought is that your browser will be the temporary cache, and you can redeliver from there for as long as you have it open. However, we may well want to add channel-based persistence in the future.
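To make the no-persistence model concrete, here is a stripped-down sketch of the pass-through idea. It is not the actual smee source, just an illustration that a webhook POSTed to a channel is fanned out to whoever is connected to that channel's event stream at that moment and is stored nowhere on the server.

```typescript
// Sketch of the stateless pass-through idea (not the actual smee code).
import express from "express";
import { EventEmitter } from "events";

const app = express();
const bus = new EventEmitter();
bus.setMaxListeners(0); // many channels may be listening at once
app.use(express.json());

// Receive a webhook and immediately hand it to any connected listeners.
app.post("/:channel", (req, res) => {
  bus.emit(req.params.channel, { headers: req.headers, body: req.body });
  res.status(200).end();
});

// SSE stream: the browser (or smee-client) keeps this open, and once an
// event is written here it lives only on the client side.
app.get("/:channel", (req, res) => {
  res.set({ "Content-Type": "text/event-stream", "Cache-Control": "no-cache" });
  res.flushHeaders();
  const forward = (event: unknown) => res.write(`data: ${JSON.stringify(event)}\n\n`);
  bus.on(req.params.channel, forward);
  req.on("close", () => bus.removeListener(req.params.channel, forward));
});

app.listen(3000);
```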

@tcbyrd I'm 👍 for anything, as long as it supports CD, is stable, and I don't have to manage it. 😁

  • Set up a small Kubernetes cluster (maybe just minikube on a single VM)

Clustering could be challenging given the current architecture. Client connections and channel mappings are only stored in memory, so that would need to be replaced with a more robust routing layer that either sends all received events to all frontends or maintains a mapping between channels, frontends, and client connections.
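One way such a routing layer could work, purely as an assumption and not part of the current design, is a broadcast over Redis pub/sub: every frontend publishes what it receives, every frontend subscribes, and each node delivers only to the SSE connections it holds locally. The `ioredis` package, the "webhooks" topic name, and the `localConnections` map are all hypothetical.

```typescript
// One possible routing layer for multiple frontends (an assumption, not
// the current design): every node publishes received webhooks to a shared
// Redis topic and every node subscribes, so whichever node holds a given
// client's SSE connection can deliver the event locally.
import Redis from "ioredis";

const redisUrl = process.env.REDIS_URL ?? "redis://localhost:6379"; // illustrative
const pub = new Redis(redisUrl);
const sub = new Redis(redisUrl);

// Called by whichever node's HTTP handler receives the webhook.
export function publishEvent(channel: string, payload: unknown): void {
  pub.publish("webhooks", JSON.stringify({ channel, payload }));
}

// Hypothetical map of channel -> local SSE delivery callback on this node.
const localConnections = new Map<string, (payload: unknown) => void>();

sub.subscribe("webhooks");
sub.on("message", (_topic: string, message: string) => {
  const { channel, payload } = JSON.parse(message);
  // Deliver only if this node currently holds a connection for the channel.
  localConnections.get(channel)?.(payload);
});
```

Broadcasting everything to every node keeps the routing simple at the cost of extra traffic; a directed channel-to-frontend mapping would cut the traffic but is more state to keep consistent.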

Others? Maybe just a simple script in CI is all we need initially

If it were left up to me, I would probably look into Digital Ocean once the 55 second timeout on Heroku becomes too annoying, but that's because it's the only other service I have much experience with.

@bkeepers Given our newly found workaround with Heroku, I'll close this.

ref: probot/smee-client#23