badges / shields

Concise, consistent, and legible badges in SVG and raster format

Home Page: https://shields.io

Issues deploying to staging

chris48s opened this issue · comments

📋 Description

Basically, what happened here was:

  • I reviewed #10068 using a review app - worked fine
  • I merged it
  • I tried to deploy sha256:e77392c2f4cfa1a9da6a7754fd26911f560bc522e3fc0d56ee7386b910b0c5b1 to staging. Staging fails to serve the app with could not find a good candidate within 90 attempts at load balancing
  • I checked the app locally - worked fine
  • I tried deploying some previous versions to staging. They were all fine
  • I tried sha256:e77392c2f4cfa1a9da6a7754fd26911f560bc522e3fc0d56ee7386b910b0c5b1 again. Back to could not find a good candidate within 90 attempts at load balancing
  • I reverted the upgrade in #10075
  • Deployed fine to staging
  • Deployed to production

I don't really have time to look into it right now, but we've got staging set to run a 256MB VM. Prod uses 512MB VMs. Review apps use whatever the default is (probably more than 256MB).

My theory is that this update might now be using all the memory we have available on staging 😮? That would explain why it worked locally but not on staging. I'll circle back to this when I have a chance.
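
For reference, checking and bumping the staging VM memory with flyctl looks roughly like the sketch below (the shields-io-staging app name comes from the staging URL further down this thread, and the sizes are the ones mentioned above; the exact commands used may have differed):

```bash
# Rough sketch: inspect and change VM memory for the staging app.
# Assumes the app is called shields-io-staging (per the staging URL in this thread).
flyctl scale show -a shields-io-staging         # current machine count / CPU / memory
flyctl scale memory 512 -a shields-io-staging   # bump staging from 256MB to 512MB
flyctl scale memory 256 -a shields-io-staging   # revert to the usual 256MB afterwards
```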

Hmm. Not sure this is related to this package specifically.

Just merged caea759 and went to deploy it to staging

could not find a good candidate within 90 attempts at load balancing again

tried bumping staging to 512MB RAM

still failing

rolled back to the previous version. All is well.

Currently stumped :/

Error is could not find a good candidate within 90 attempts at load balancing

  • Only some docker image digests seem to trigger this behaviour. The known bad ones (I don't know of any others) are:
    • sha256:e77392c2f4cfa1a9da6a7754fd26911f560bc522e3fc0d56ee7386b910b0c5b1
    • sha256:42974bb5b7da023a8277fb7da86db1f884f5f8177d95f3ba8d14dd97184c9d35
  • SSH into an "unhealthy" instance and install curl: curl http://127.0.0.1 and curl http://0.0.0.0 both work fine
  • Tried giving it MOAR memory, no difference
  • Tried fiddling around with the tcp_checks settings (matched staging to prod, matched staging to review apps). No difference. Massively increased the timeout and grace_period. No difference.
  • Crazy thing is, if I deploy one of the "bad" images (to staging) and flyctl scale count 2, the app suddenly becomes accessible. flyctl scale count 1 and it is back to could not find a good candidate within 90 attempts at load balancing
  • Sometimes I can get a machine deployed from one of the "bad" images to start serving traffic by restarting it, but not consistently.
  • Deploying one of the "bad" images, running flyctl scale count 2 and then deleting the first machine that was deployed (keeping the second) consistently results in one working machine serving traffic (see the sketch after this list).
  • Disabling REQUIRE_CLOUDFLARE has no bearing on it. Disable that and hit https://shields-io-staging.fly.dev/ - same behaviour
  • This feels like fly.io being weird and flaky, but why do some specific images trigger this behaviour while others work perfectly?
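
For the record, the scale-up-then-delete workaround from the list above looks roughly like this (a sketch; the image path is a placeholder for wherever the "bad" digest actually lives, and the machine ID comes from the list output):

```bash
# Rough sketch of the workaround described above.
# <registry>/<repo> is a placeholder - substitute the real image location.
flyctl deploy -a shields-io-staging \
  --image <registry>/<repo>@sha256:e77392c2f4cfa1a9da6a7754fd26911f560bc522e3fc0d56ee7386b910b0c5b1
# At this point the app is unreachable:
#   "could not find a good candidate within 90 attempts at load balancing"

flyctl scale count 2 -a shields-io-staging   # add a second machine - app becomes reachable
flyctl machine list -a shields-io-staging    # note the ID of the machine from the original deploy
flyctl machine destroy <first-machine-id> -a shields-io-staging --force
# One machine left, and it keeps serving traffic.
```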

The one thing I haven't tried is just yeeting one of the "bad" images to production. My instinct is they'd probably work because we're running multiple instances, but I'm reluctant to just try that without understanding wtf is going on first.

Utterly baffling.

Some things I haven't tried yet:

  • Deploy from GHCR, or build a completely new image from the same commits and push it to the fly registry, to eliminate DockerHub as a factor (see the sketch after this list)
  • Make a completely clean app on fly using the staging settings - does this reproduce?
  • Move the staging app to a different region
  • Ritual sacrifice 🐐
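
If I do get around to the "eliminate DockerHub" option, pushing an image to the fly registry instead would look roughly like this (a sketch; the local image name and tag are placeholders):

```bash
# Rough sketch: push a locally built image to the Fly registry and deploy from there,
# bypassing Docker Hub entirely. Image name and tag are placeholders.
flyctl auth docker                             # let docker push to registry.fly.io
docker tag <locally-built-image> registry.fly.io/shields-io-staging:<tag>
docker push registry.fly.io/shields-io-staging:<tag>
flyctl deploy -a shields-io-staging --image registry.fly.io/shields-io-staging:<tag>
```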

Ritual sacrifice 🐐

I think I can pitch in on this one 👍

Thanks. When I circle back to this I will check that out.

If I end up against a brick wall again, we do have email support with fly so raising a support ticket is another option once I have time to follow up.

OK. So I just tried deploying sha256:42974bb5b7da023a8277fb7da86db1f884f5f8177d95f3ba8d14dd97184c9d35 to staging again. Worked fine.

I think I am going to assume that there is nothing wrong with any of the images and we were triggering some kind of issue on fly's side.

If I don't run into this again when deploying later, I'll assume this is fixed and close it.

I've now successfully run staging and production deploys.
Going to close this.
Glad I didn't spend more time banging my head against this brick wall when I first hit it.

Indeed! Glad it's sorted