Downloaded Go modules are not being picked up when bom generate runs
sandipanpanda opened this issue
What happened:
bom does not leverage the local Go module cache for dependency data when generating an SBOM in Cilium's image build actions. Generating an SBOM describing the source of the Cilium repository with bom takes, on average, 10 minutes. As a result, CI build time increases by 30 minutes when we generate an SBOM describing the source for all three CI images in Image CI Build, and the CI ultimately fails with an error that no space is left on the runner.
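For context, the CI step boils down to an invocation along these lines. This is a sketch only: the -o output flag and the positional source directory are assumptions based on bom's documented usage, and the actual flags in the Cilium workflow may differ:

```sh
# Generate an SPDX SBOM describing the checked-out source tree.
# Illustrative invocation, not the exact Cilium workflow step.
bom generate -o cilium-sbom.spdx .
```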
In theory, if you run bom generate in the same environment where you are building (especially after building), all modules should already be downloaded, and bom can reuse them. But this does not happen. One thing bom will not do is download anything into your Go directory: if a module is missing, bom downloads it to /tmp/spdx/gomod-scanner/, inspects it there, and removes it. Even after performing a go mod download before running bom generate, the downloaded modules are not picked up when bom runs.
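A minimal sketch of the reproduction, assuming a checked-out Go module and that bom generate accepts the source directory as shown:

```sh
# Pre-populate the local module cache, as the workflow does before building.
go mod download

# Confirm where the cache lives; the downloaded modules sit under this path.
go env GOMODCACHE

# bom nevertheless re-downloads missing modules to /tmp/spdx/gomod-scanner/
# instead of reusing the cached copies.
bom generate .
```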
The downloaded modules are not being picked up when bom runs: https://github.com/cilium/cilium/actions/runs/3490449396/jobs/5841895937#step:23:1755, for this workflow file.
What you expected to happen:
If bom generate is run in the same environment where you are building (especially after building), all modules should already be downloaded, and bom should be able to reuse them.
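That premise is straightforward to check after a build; a sketch using the standard go env GOMODCACHE lookup:

```sh
# After building, every dependency is already in the local module cache...
go build ./...

# ...as listing the cache directory shows.
ls "$(go env GOMODCACHE)"
```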
Anything else we need to know?:
Discussion on this in the Kubernetes Slack is linked here.
Attaching the log archive for the workflow run, as it will eventually expire and become unavailable from GHA: logs_877851.zip
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale