cloudbase / garm

GitHub Actions Runner Manager

Pools of the same image and flavor cannot exist on the same provider

Zappycobra opened this issue

To provide some context:

We have several runner groups that are public to all repositories, but each group is restricted to specific reusable workflows so that only those workflows can run on that group's runners. We could provide just one pool and consolidate usage (min, max, idle), but we want to control the runners available to each of those runner groups separately for prioritization reasons. We are currently forced to tag the same image several times in all of our providers, and we would like this not to be the case.

We need to have multiple pools with the same image and flavor on the same provider. Because of the check below, we cannot create a second pool with the same image and flavor combination. Is the else branch in this check actually needed?

if _, err := s.getEntityPoolByUniqueFields(tx, entity, newPool.ProviderName, newPool.Image, newPool.Flavor); err != nil {
    if !errors.Is(err, runnerErrors.ErrNotFound) {
        return errors.Wrap(err, "checking pool existence")
    }
} else {
    return runnerErrors.NewConflictError("pool with the same image and flavor already exists on this provider")
}

Please advise on a workaround. We have a fork of this repo; would it be safe to remove the check?

Hi @Zappycobra,

The reasoning was that it would be easier to scale an existing pool up than to create an identical one, but that was before the addition of runner groups.

It should be safe to remove that check, or expand it to include the runner group as well.
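
For illustration only, here is a minimal, runnable Go sketch of what expanding the uniqueness key would mean. The poolKey and poolStore types and their field names are hypothetical and are not garm's actual code; the point is just that once the runner group is part of the key, two pools that differ only by runner group no longer conflict, while an exact duplicate still does.

package main

import (
    "errors"
    "fmt"
)

// errConflict mirrors the conflict error in the snippet above.
var errConflict = errors.New("pool with the same image, flavor and runner group already exists on this provider")

// poolKey is a hypothetical uniqueness key. The current check only covers
// provider, image and flavor; adding the runner group is the expansion
// suggested above.
type poolKey struct {
    Provider string
    Image    string
    Flavor   string
    Group    string // runner group; not part of the current check
}

// poolStore is a toy in-memory stand-in for the database layer.
type poolStore struct {
    pools map[poolKey]struct{}
}

func (s *poolStore) createPool(k poolKey) error {
    if _, exists := s.pools[k]; exists {
        return errConflict
    }
    s.pools[k] = struct{}{}
    return nil
}

func main() {
    s := &poolStore{pools: map[poolKey]struct{}{}}

    a := poolKey{Provider: "openstack", Image: "ubuntu-22.04", Flavor: "m1.large", Group: "group-a"}
    b := poolKey{Provider: "openstack", Image: "ubuntu-22.04", Flavor: "m1.large", Group: "group-b"}

    fmt.Println(s.createPool(a)) // <nil>: first pool is created
    fmt.Println(s.createPool(b)) // <nil>: same image/flavor, different runner group, no conflict
    fmt.Println(s.createPool(b)) // conflict: exact duplicate key
}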

I can create a PR soon to address this.

@Zappycobra

I opened a PR here:

Could you give it a shot and let me know if it fixes the issue for you? Please note that the latest main branch also includes a big change to garm's GitHub credentials management. If you don't want to deal with that now (it should migrate credentials to the DB automatically), you will have to cherry-pick this change into the release/v0.1 branch and build that.

Feel free to reopen this if it's still not fixed by the PR I mentioned.