spack / spack-infrastructure

Spack Kubernetes instance and services running there (GitLab, CDash, spack.io)

staging gitlab needs more runners

scottwittenburg opened this issue

Pushing a develop-ish spack to staging a bit ago reminded me that we don't have the same runners available in both; you can see the child pipelines that are stuck because of it here.

I don't think we're going to be able to get complete parity in runners between production and staging, so @danlamanna, @kwryankrattiger, and I have been trying to think of ways to gracefully handle when staging doesn't have the same runners. Some points to consider:

  • we shouldn't need to change the spack codebase to push to staging and test there
  • it's not clear whether the "stand-in" jobs (run when we don't have the real runners) should pass or fail
    • on one hand, we may want to pass so subsequent jobs run (exercise the pipeline functionality as far as possible)
    • on the other hand, subsequent jobs may rely on artifacts/results of previous jobs

One possibility is to create a runner type that advertises the tags we need to emulate but cannot actually support (A100/M100/-gpu/-cray/etc.) and inject a pre-build script that either makes CI look like the job passed or, possibly, always generates nothing to rebuild.
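A minimal sketch of what that stand-in runner's `config.toml` entry could look like, assuming the staging GitLab URL and runner name shown here (both hypothetical). `pre_build_script` is the hook where the "make CI look like it passed" logic would be injected; the extraneous tags themselves are attached at registration time (e.g. `gitlab-runner register --tag-list "..."`), not in this file.

```toml
# Hypothetical stand-in runner entry; URL, name, and token are placeholders.
[[runners]]
  name = "stand-in-gpu"
  url = "https://gitlab.staging.spack.io/"
  token = "REDACTED"
  executor = "shell"
  # Runs before the job's before_script/script sections.
  pre_build_script = """
  echo 'stand-in runner: no real hardware behind these tags'
  """
```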

This could mean setting the prune depth to prune everything, or having `ci rebuild` just exit early with status 0 and a message like "Skipped", or something else along those lines.
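The early-exit variant could be as simple as a wrapper that the stand-in runner substitutes for the real rebuild step. A rough sketch (the function name and message are made up for illustration):

```shell
# Hypothetical stand-in for the rebuild step on emulated runners:
# print a skip message and return success so downstream jobs still run.
skip_rebuild() {
  echo "Skipped: stand-in runner cannot rebuild $1"
  return 0
}

skip_rebuild "my-spec"
```

The open question from the bullet list above still applies: downstream jobs that expect artifacts from this step would get a green job but no artifacts.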

I am unsure whether the pre-build script persists its environment to the before_script/script sections, so possibly that is not an option.
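For what it's worth, this is the failure mode to watch for: if the pre-build script runs in its own shell process, anything it exports is lost before the job's script runs. A quick illustration (variable name is hypothetical):

```shell
# Exporting in a child shell does not affect the parent shell,
# which is why a pre-build script's exports may never reach `script:`.
sh -c 'export SPACK_STANDIN=1'     # export happens in a separate shell
echo "${SPACK_STANDIN:-unset}"     # parent shell still prints "unset"
```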

We could also just set special environment variables on the project, but I think that may be too heavy-handed for the level of control we would want for testing.