kubesphere fluent* images should be published to multiple registries
applike-ss opened this issue
Is your feature request related to a problem? Please describe.
Docker Hub has quite low pull-rate limits, which can become a problem when many cloud instances spin up inside one virtual private network.
Describe the solution you'd like
At the same time, hosting these images on the hyperscalers' own container registries gives fewer network hops and faster throughput, for example:
- AWS ECR
- Google Container Registry
- Azure Container Registry
In addition, it would be nice to see them on:
- GitHub Container Registry (ghcr.io)
- quay.io
Additional context
No response
@patrick-stephens is it possible to create a GitHub Actions secret for ghcr for fluent-operator?
We can add GitHub Actions to push to ghcr as well.
There is no need; use ephemeral tokens to push to ghcr.io.
That's the easiest to do: https://github.com/fluent/fluent-bit/blob/202da1374dffd98984b2aa56dd444d49bef1dbd5/.github/workflows/call-build-images.yaml#L74
All the others probably need a bit more setup and infra management on the OSS side, so whilst they are nice to have, I would suggest doing just ghcr.io initially, as it is well integrated into workflows.
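For reference, pushing to ghcr.io from a workflow needs no long-lived secret; the ephemeral `GITHUB_TOKEN` that Actions issues for each run is enough. A minimal sketch of the login step (the permissions block and step layout here are illustrative, not copied from the linked workflow):

```yaml
permissions:
  packages: write   # lets the ephemeral GITHUB_TOKEN push packages

steps:
  - name: Log in to ghcr.io
    uses: docker/login-action@v3
    with:
      registry: ghcr.io
      username: ${{ github.actor }}
      password: ${{ secrets.GITHUB_TOKEN }}
```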
Fluent Bit is only published to DockerHub and ghcr.io currently as well.
Hi!
I would like to work on this issue.
The current system uses a Makefile to build and push images to Docker Hub. Is the recommendation to use call-build-images.yaml#L74 in our workflows to solve this issue? With this we can have one step that pushes to ghcr.io and another step that pushes to Docker Hub. Do we also need Trivy + Dockle image scans?
@sarathchandra24 Thanks for taking this up. It would be great to have the Trivy + Dockle image scans too.
Personally I would build once to ghcr.io then use skopeo to copy to all the relevant repos in a workflow.
This is what we do for staging -> release for Fluent Bit. It's a lot quicker and simpler than rebuilding multiple times. Plus you then have a single image copied verbatim rather than any weirdness from deltas between separate builds.
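A minimal sketch of the copy step being described, as a follow-up workflow job after the ghcr.io build (the repository paths and tag expression are illustrative, not the project's actual ones):

```yaml
- name: Copy release image from ghcr.io to Docker Hub
  run: |
    # --all copies every architecture in a manifest list, not just one platform
    skopeo copy --all \
      docker://ghcr.io/fluent/fluent-operator:${{ github.ref_name }} \
      docker://docker.io/kubesphere/fluent-operator:${{ github.ref_name }}
```

The same `skopeo copy` step can be repeated per target registry (ECR, quay.io, etc.) once credentials for each are in place.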
We are currently building a debug image too (https://github.com/fluent/fluent-operator/blob/master/.github/workflows/build-fb-image.yaml#L53); do we also need to build these images and push them to the new registries? I don't know the difference between these images. Can you please help me determine whether this is required?
If you use skopeo you can do all of them with no change to the current process. Just have a separate job/workflow that does the skopeo copy after those are done.
I would say debug image is always useful.
@patrick-stephens I created #1071, which addresses this issue. Can you please review?
Created the following PR: #1079 for the requested changes in old PR too.
I believe we might need to discuss the implementation for https://github.com/fluent/fluent-operator/blob/master/.github/workflows/build-fd-image.yaml; we are currently building two images, one for arm64 and the other for amd64. Do I have to take the same approach? I saw that the two files, the arm image and the amd image, are different.
The changes that I want to perform:
FROM kubesphere/fluentd:v1.15.3-arm64-base -> FROM kubesphere/fluentd:${BASE_IMAGE_TAG}
where BASE_IMAGE_TAG is the version taken from Dockerfile.arm64.base.
I believe this would be a big change, and I need to know if my approach is correct.
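A minimal sketch of the parameterised Dockerfile change being proposed (the default value shown is illustrative):

```dockerfile
# ARG must be declared before FROM to be usable in it; override at build time
# with: docker build --build-arg BASE_IMAGE_TAG=<tag> ...
ARG BASE_IMAGE_TAG=v1.15.3-arm64-base
FROM kubesphere/fluentd:${BASE_IMAGE_TAG}
```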
So we build two separate images rather than a single multi-arch one? That is sub-optimal, I agree; we can still keep the two tags and just create a new multi-arch manifest from them.
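A sketch of stitching the two existing per-arch tags into one multi-arch tag, as a workflow step using `docker buildx imagetools` (the tag names here are illustrative):

```yaml
- name: Create multi-arch manifest from the per-arch tags
  run: |
    # Points one manifest list at the already-pushed amd64 and arm64 images,
    # so no rebuild is needed
    docker buildx imagetools create \
      -t kubesphere/fluentd:v1.15.3 \
      kubesphere/fluentd:v1.15.3-amd64 \
      kubesphere/fluentd:v1.15.3-arm64
```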
looking forward to the next tagged releases 😉
thanks for implementing this so quickly! 👍