[Bug]: Unhandled exception. DotNet.Testcontainers.Containers.ResourceReaperException: Initialization has been cancelled
muhaook opened this issue
Testcontainers version
3.8.0
Using the latest Testcontainers version?
Yes
Host OS
Linux
Host arch
x86
.NET version
8.0.204
Docker version
$ podman --version
podman version 4.9.4-rhel
[mikmu@apmp012 net6.0]$ podman version
Client: Podman Engine
Version: 4.9.4-rhel
API Version: 4.9.4-rhel
Go Version: go1.21.9 (Red Hat 1.21.9-2.el9_4)
Built: Fri May 3 09:46:34 2024
OS/Arch: linux/amd64
Docker info
$ podman info
host:
arch: amd64
buildahVersion: 1.33.7
cgroupControllers:
- memory
- pids
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: conmon-2.1.7-1.el9_2.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.1.7, commit: ee2f8dd0a09933610c92940874094961cd55a4bf'
cpuUtilization:
idlePercent: 98.56
systemPercent: 0.4
userPercent: 1.04
cpus: 4
databaseBackend: boltdb
distribution:
distribution: ol
variant: server
version: "9.2"
eventLogger: file
freeLocks: 2048
hostname: apmp012
idMappings:
gidmap:
- container_id: 0
host_id: 8500
size: 1
- container_id: 1
host_id: 10000
size: 65536
uidmap:
- container_id: 0
host_id: 1634312
size: 1
- container_id: 1
host_id: 10000
size: 65536
kernel: 5.15.0-202.135.2.el9uek.x86_64
linkmode: dynamic
logDriver: k8s-file
memFree: 6718337024
memTotal: 33173327872
networkBackend: netavark
networkBackendInfo:
backend: netavark
dns:
package: aardvark-dns-1.5.0-2.el9.x86_64
path: /usr/libexec/podman/aardvark-dns
version: aardvark-dns 1.5.0
package: netavark-1.5.0-3.el9_2.x86_64
path: /usr/libexec/podman/netavark
version: netavark 1.5.0
ociRuntime:
name: crun
package: crun-1.8.4-1.el9_2.x86_64
path: /usr/bin/crun
version: |-
crun version 1.8.4
commit: 5a8fa99a5e41facba2eda4af12fa26313918805b
rundir: /run/user/1634312/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
os: linux
pasta:
executable: ""
package: ""
version: ""
remoteSocket:
exists: false
path: /run/user/1634312/podman/podman.sock
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: true
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: false
serviceIsRemote: false
slirp4netns:
executable: /bin/slirp4netns
package: slirp4netns-1.2.0-3.el9.x86_64
version: |-
slirp4netns version 1.2.0
commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
libslirp: 4.4.0
SLIRP_CONFIG_VERSION_MAX: 3
libseccomp: 2.5.2
swapFree: 18973184000
swapTotal: 18974998528
uptime: 168h 24m 26.00s (Approximately 7.00 days)
variant: ""
plugins:
authorization: null
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
- ipvlan
volume:
- local
registries:
search:
- container-registry.oracle.com
- docker.io
store:
configFile: /home/mikmu/.config/containers/storage.conf
containerStore:
number: 0
paused: 0
running: 0
stopped: 0
graphDriverName: overlay
graphOptions: {}
graphRoot: /home/mikmu/.local/share/containers/storage
graphRootAllocated: 421604532224
graphRootUsed: 21684621312
graphStatus:
Backing Filesystem: extfs
Native Overlay Diff: "true"
Supports d_type: "true"
Supports shifting: "false"
Supports volatile: "true"
Using metacopy: "false"
imageCopyTmpDir: /var/tmp
imageStore:
number: 11
runRoot: /run/user/1634312/containers
transientStore: false
volumePath: /home/mikmu/.local/share/containers/storage/volumes
version:
APIVersion: 4.9.4-rhel
Built: 1714729594
BuiltTime: Fri May 3 09:46:34 2024
GitCommit: ""
GoVersion: go1.21.9 (Red Hat 1.21.9-2.el9_4)
Os: linux
OsArch: linux/amd64
Version: 4.9.4-rhel
What happened?
Container testcontainers/ryuk:0.6.0 failed to start with exception:
Unhandled exception. DotNet.Testcontainers.Containers.ResourceReaperException: Initialization has been cancelled
Relevant log output
output of testcontainers:
[testcontainers.org 00:00:00.28] Connected to Docker:
Host: tcp://127.0.0.1:8080/
Server Version: 4.9.4-rhel
Kernel Version: 5.15.0-202.135.2.el9uek.x86_64
API Version: 1.41
Operating System: ol
Total Memory: 30.90 GB
[testcontainers.org 00:00:00.48] Docker container 7d9e658d6a55 created
[testcontainers.org 00:00:00.53] Start Docker container 7d9e658d6a55
[testcontainers.org 00:00:00.75] Wait for Docker container 7d9e658d6a55 to complete readiness checks
[testcontainers.org 00:00:00.75] Docker container 7d9e658d6a55 ready
[testcontainers.org 00:01:00.78] Stop Docker container 7d9e658d6a55
Unhandled exception. DotNet.Testcontainers.Containers.ResourceReaperException: Initialization has been cancelled.
at DotNet.Testcontainers.Containers.ResourceReaper.GetAndStartNewAsync(Guid sessionId, IDockerEndpointAuthenticationConfiguration dockerEndpointAuthConfig, IImage resourceReaperImage, IMount dockerSocket, ILogger logger, Boolean requiresPrivilegedMode, TimeSpan initTimeout, CancellationToken ct) in /_/src/Testcontainers/Containers/ResourceReaper.cs:line 245
at DotNet.Testcontainers.Containers.ResourceReaper.GetAndStartNewAsync(Guid sessionId, IDockerEndpointAuthenticationConfiguration dockerEndpointAuthConfig, IImage resourceReaperImage, IMount dockerSocket, ILogger logger, Boolean requiresPrivilegedMode, TimeSpan initTimeout, CancellationToken ct) in /_/src/Testcontainers/Containers/ResourceReaper.cs:line 257
at DotNet.Testcontainers.Containers.ResourceReaper.GetAndStartDefaultAsync(IDockerEndpointAuthenticationConfiguration dockerEndpointAuthConfig, ILogger logger, Boolean isWindowsEngineEnabled, CancellationToken ct) in /_/src/Testcontainers/Containers/ResourceReaper.cs:line 149
at DotNet.Testcontainers.Clients.TestcontainersClient.RunAsync(IContainerConfiguration configuration, CancellationToken ct) in /_/src/Testcontainers/Clients/TestcontainersClient.cs:line 294
at DotNet.Testcontainers.Containers.DockerContainer.UnsafeCreateAsync(CancellationToken ct) in /_/src/Testcontainers/Containers/DockerContainer.cs:line 424
at DotNet.Testcontainers.Containers.DockerContainer.StartAsync(CancellationToken ct) in /_/src/Testcontainers/Containers/DockerContainer.cs:line 288
at Program.<Main>$(String[] args) in /scratch/gitrepo/apm-agent-dotnet-sample/MyPrototype/MyPrototype/Program.cs:line 27
at Program.<Main>(String[] args)
Aborted
Additional information
system messages about podman:
[root@apmp012 log]# tail -f messages
May 15 07:53:38 apmp012 podman[342275]: time="2024-05-15T07:53:38Z" level=debug msg="IdleTracker:closed 1m+0h/38t connection(s)" X-Reference-Id=0xc00041a128
May 15 07:53:38 apmp012 podman[342275]: time="2024-05-15T07:53:38Z" level=debug msg="IdleTracker:new 0m+0h/38t connection(s)" X-Reference-Id=0xc00092a638
May 15 07:53:38 apmp012 podman[342275]: time="2024-05-15T07:53:38Z" level=debug msg="IdleTracker:active 0m+0h/39t connection(s)" X-Reference-Id=0xc00092a638
May 15 07:53:38 apmp012 podman[342275]: time="2024-05-15T07:53:38Z" level=debug msg="Looking up image "71009a3edde7c6988eb3a3b51c9dc3cacc154494ca7e5e33ed567a38bf75c130" in local containers storage"
May 15 07:53:38 apmp012 podman[342275]: time="2024-05-15T07:53:38Z" level=debug msg="Trying "71009a3edde7c6988eb3a3b51c9dc3cacc154494ca7e5e33ed567a38bf75c130" ..."
May 15 07:53:38 apmp012 podman[342275]: time="2024-05-15T07:53:38Z" level=debug msg="parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@71009a3edde7c6988eb3a3b51c9dc3cacc154494ca7e5e33ed567a38bf75c130""
May 15 07:53:38 apmp012 podman[342275]: time="2024-05-15T07:53:38Z" level=debug msg="Found image "71009a3edde7c6988eb3a3b51c9dc3cacc154494ca7e5e33ed567a38bf75c130" as "71009a3edde7c6988eb3a3b51c9dc3cacc154494ca7e5e33ed567a38bf75c130" in local containers storage"
May 15 07:53:38 apmp012 podman[342275]: time="2024-05-15T07:53:38Z" level=debug msg="Found image "71009a3edde7c6988eb3a3b51c9dc3cacc154494ca7e5e33ed567a38bf75c130" as "71009a3edde7c6988eb3a3b51c9dc3cacc154494ca7e5e33ed567a38bf75c130" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@71009a3edde7c6988eb3a3b51c9dc3cacc154494ca7e5e33ed567a38bf75c130)"
May 15 07:53:38 apmp012 podman[342275]: 127.0.0.1 - - [15/May/2024:07:53:38 +0000] "GET /containers/7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa/json HTTP/1.1" 200 5061 "" "Docker.DotNet tc-dotnet/3.8.0+14f9ad92ece7b5539aed0c8f2f7c5415ab97aab8"
May 15 07:53:38 apmp012 podman[342275]: time="2024-05-15T07:53:38Z" level=debug msg="IdleTracker:closed 1m+0h/39t connection(s)" X-Reference-Id=0xc00092a638
May 15 07:54:38 apmp012 podman[342275]: time="2024-05-15T07:54:38Z" level=debug msg="IdleTracker:new 0m+0h/39t connection(s)" X-Reference-Id=0xc00041a138
May 15 07:54:38 apmp012 podman[342275]: time="2024-05-15T07:54:38Z" level=debug msg="IdleTracker:active 0m+0h/40t connection(s)" X-Reference-Id=0xc00041a138
May 15 07:54:38 apmp012 podman[342275]: time="2024-05-15T07:54:38Z" level=debug msg="Looking up image "71009a3edde7c6988eb3a3b51c9dc3cacc154494ca7e5e33ed567a38bf75c130" in local containers storage"
May 15 07:54:38 apmp012 podman[342275]: time="2024-05-15T07:54:38Z" level=debug msg="Trying "71009a3edde7c6988eb3a3b51c9dc3cacc154494ca7e5e33ed567a38bf75c130" ..."
May 15 07:54:38 apmp012 podman[342275]: time="2024-05-15T07:54:38Z" level=debug msg="parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@71009a3edde7c6988eb3a3b51c9dc3cacc154494ca7e5e33ed567a38bf75c130""
May 15 07:54:38 apmp012 podman[342275]: time="2024-05-15T07:54:38Z" level=debug msg="Found image "71009a3edde7c6988eb3a3b51c9dc3cacc154494ca7e5e33ed567a38bf75c130" as "71009a3edde7c6988eb3a3b51c9dc3cacc154494ca7e5e33ed567a38bf75c130" in local containers storage"
May 15 07:54:38 apmp012 podman[342275]: time="2024-05-15T07:54:38Z" level=debug msg="Found image "71009a3edde7c6988eb3a3b51c9dc3cacc154494ca7e5e33ed567a38bf75c130" as "71009a3edde7c6988eb3a3b51c9dc3cacc154494ca7e5e33ed567a38bf75c130" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@71009a3edde7c6988eb3a3b51c9dc3cacc154494ca7e5e33ed567a38bf75c130)"
May 15 07:54:38 apmp012 podman[342275]: 127.0.0.1 - - [15/May/2024:07:54:38 +0000] "GET /containers/7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa/json HTTP/1.1" 200 5061 "" "Docker.DotNet tc-dotnet/3.8.0+14f9ad92ece7b5539aed0c8f2f7c5415ab97aab8"
May 15 07:54:38 apmp012 podman[342275]: time="2024-05-15T07:54:38Z" level=debug msg="IdleTracker:closed 1m+0h/40t connection(s)" X-Reference-Id=0xc00041a138
May 15 07:54:38 apmp012 podman[342275]: time="2024-05-15T07:54:38Z" level=debug msg="IdleTracker:new 0m+0h/40t connection(s)" X-Reference-Id=0xc00092a648
May 15 07:54:38 apmp012 podman[342275]: time="2024-05-15T07:54:38Z" level=debug msg="IdleTracker:active 0m+0h/41t connection(s)" X-Reference-Id=0xc00092a648
May 15 07:54:38 apmp012 podman[342275]: time="2024-05-15T07:54:38Z" level=debug msg="Starting parallel job on container 7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa"
May 15 07:54:38 apmp012 podman[342275]: time="2024-05-15T07:54:38Z" level=debug msg="Stopping ctr 7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa (timeout 15)"
May 15 07:54:38 apmp012 podman[342275]: time="2024-05-15T07:54:38Z" level=debug msg="Stopping container 7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa (PID 342819)"
May 15 07:54:38 apmp012 podman[342275]: time="2024-05-15T07:54:38Z" level=debug msg="Sending signal 15 to container 7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa"
May 15 07:54:38 apmp012 conmon[342816]: conmon 7d9e658d6a55f3c2bdbc : container 342819 exited with status 2
May 15 07:54:38 apmp012 systemd[1]: libpod-7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa.scope: Deactivated successfully.
May 15 07:54:38 apmp012 podman[342275]: 2024-05-15 07:54:38.798371604 +0000 UTC m=+1122.356596389 container died 7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa (image=ghcr.io/muhaook/testcontainers/ryuk:0.6.0, name=testcontainers-ryuk-a95fc84f-cb82-4b4e-83cd-e4d68016574f, org.testcontainers.lang=dotnet, org.testcontainers.resource-reaper-session=00000000-0000-0000-0000-000000000000, org.testcontainers.ryuk=true, org.testcontainers.session-id=a95fc84f-cb82-4b4e-83cd-e4d68016574f, org.testcontainers.version=3.8.0+14f9ad92ece7b5539aed0c8f2f7c5415ab97aab8, org.testcontainers=true)
May 15 07:54:38 apmp012 podman[342275]: time="2024-05-15T07:54:38Z" level=debug msg="Container "7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa" state changed from "stopping" to "stopped" while waiting for it to be stopped: discontinuing stop procedure as another process interfered"
May 15 07:54:38 apmp012 podman[342275]: time="2024-05-15T07:54:38Z" level=debug msg="Cleaning up container 7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa"
May 15 07:54:38 apmp012 podman[342275]: time="2024-05-15T07:54:38Z" level=debug msg="Tearing down network namespace at /run/netns/netns-9d522106-0cc2-5b90-0ffc-94142349f98c for container 7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa"
May 15 07:54:38 apmp012 podman[342848]: [DEBUG netavark::commands::teardown] "Tearing down.."
May 15 07:54:38 apmp012 podman[342848]: [INFO netavark::firewall] Using iptables firewall driver
May 15 07:54:38 apmp012 kernel: podman0: port 2(veth1) entered disabled state
May 15 07:54:38 apmp012 /usr/bin/podman[342842]: time="2024-05-15T07:54:38Z" level=info msg="Using boltdb as database backend"
May 15 07:54:38 apmp012 /usr/bin/podman[342842]: time="2024-05-15T07:54:38Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
May 15 07:54:38 apmp012 /usr/bin/podman[342842]: time="2024-05-15T07:54:38Z" level=info msg="Setting parallel job count to 13"
May 15 07:54:38 apmp012 kernel: device veth1 left promiscuous mode
May 15 07:54:38 apmp012 kernel: podman0: port 2(veth1) entered disabled state
May 15 07:54:39 apmp012 podman[342848]: [DEBUG netavark::commands::teardown] "Teardown complete"
May 15 07:54:39 apmp012 systemd[1]: run-netns-netns\x2d9d522106\x2d0cc2\x2d5b90\x2d0ffc\x2d94142349f98c.mount: Deactivated successfully.
May 15 07:54:39 apmp012 podman[342275]: time="2024-05-15T07:54:39Z" level=debug msg="Successfully cleaned up container 7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa"
May 15 07:54:39 apmp012 systemd[1]: scratch-podman-storage-overlay\x2dcontainers-7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa-userdata-shm.mount: Deactivated successfully.
May 15 07:54:39 apmp012 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa-userdata-shm.mount: Deactivated successfully.
May 15 07:54:39 apmp012 podman[342275]: time="2024-05-15T07:54:39Z" level=debug msg="Unmounted container "7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa""
May 15 07:54:39 apmp012 systemd[1]: var-lib-containers-storage-overlay-aaaaa75a8aecb3cf9beb64ab88feb44c708c1b49fae912c6a5d5accf084d092f-merged.mount: Deactivated successfully.
May 15 07:54:39 apmp012 podman[342275]: 2024-05-15 07:54:39.040109191 +0000 UTC m=+1122.598333957 container cleanup 7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa (image=ghcr.io/muhaook/testcontainers/ryuk:0.6.0, name=testcontainers-ryuk-a95fc84f-cb82-4b4e-83cd-e4d68016574f, org.testcontainers.version=3.8.0+14f9ad92ece7b5539aed0c8f2f7c5415ab97aab8, org.testcontainers=true, org.testcontainers.lang=dotnet, org.testcontainers.resource-reaper-session=00000000-0000-0000-0000-000000000000, org.testcontainers.ryuk=true, org.testcontainers.session-id=a95fc84f-cb82-4b4e-83cd-e4d68016574f)
May 15 07:54:39 apmp012 podman[342275]: 127.0.0.1 - - [15/May/2024:07:54:38 +0000] "POST /containers/7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa/stop?t=15 HTTP/1.1" 204 0 "" "Docker.DotNet tc-dotnet/3.8.0+14f9ad92ece7b5539aed0c8f2f7c5415ab97aab8"
May 15 07:54:39 apmp012 podman[342275]: time="2024-05-15T07:54:39Z" level=debug msg="IdleTracker:closed 1m+0h/41t connection(s)" X-Reference-Id=0xc00092a648
May 15 07:54:39 apmp012 podman[342275]: time="2024-05-15T07:54:39Z" level=debug msg="IdleTracker:new 0m+0h/41t connection(s)" X-Reference-Id=0xc00041a6a8
May 15 07:54:39 apmp012 podman[342275]: time="2024-05-15T07:54:39Z" level=debug msg="IdleTracker:active 0m+0h/42t connection(s)" X-Reference-Id=0xc00041a6a8
May 15 07:54:39 apmp012 podman[342842]: 2024-05-15 07:54:39.049876853 +0000 UTC m=+0.251262571 container remove 7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa (image=ghcr.io/muhaook/testcontainers/ryuk:0.6.0, name=testcontainers-ryuk-a95fc84f-cb82-4b4e-83cd-e4d68016574f, org.testcontainers.version=3.8.0+14f9ad92ece7b5539aed0c8f2f7c5415ab97aab8, org.testcontainers=true, org.testcontainers.lang=dotnet, org.testcontainers.resource-reaper-session=00000000-0000-0000-0000-000000000000, org.testcontainers.ryuk=true, org.testcontainers.session-id=a95fc84f-cb82-4b4e-83cd-e4d68016574f)
May 15 07:54:39 apmp012 podman[342275]: time="2024-05-15T07:54:39Z" level=info msg="Request Failed(Internal Server Error): container 7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa does not exist in database: no such container"
May 15 07:54:39 apmp012 podman[342275]: 127.0.0.1 - - [15/May/2024:07:54:39 +0000] "GET /containers/7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa/json HTTP/1.1" 500 178 "" "Docker.DotNet tc-dotnet/3.8.0+14f9ad92ece7b5539aed0c8f2f7c5415ab97aab8"
May 15 07:54:39 apmp012 podman[342275]: time="2024-05-15T07:54:39Z" level=debug msg="IdleTracker:closed 1m+0h/42t connection(s)" X-Reference-Id=0xc00041a6a8
May 15 07:54:39 apmp012 systemd[1]: libpod-conmon-7d9e658d6a55f3c2bdbcf6e03ba861cebbddec047d0cebca98f9a4af0a070bfa.scope: Deactivated successfully.
May 15 07:54:55 apmp012 systemd[1]: ade_autofs-ade_infra.mount: Deactivated successfully.
Running Testcontainers together with Podman usually requires additional configuration. It does not run out of the box due to differences in Podman's implementation and behavior. It has been a while since I worked on improving support for other container runtimes, and I am not sure what changes are necessary today, but maybe this issue helps: #876. However, if you make any progress setting up Podman together with Testcontainers (including running Ryuk), some documentation about the necessary configuration would be awesome and would certainly help other developers too.
@cristianrgreco did a great job a while ago and spent a lot of time and effort adding much better support to the Node.js implementation ❤️. Perhaps his initial PR (testcontainers/testcontainers-node#494) helps to figure out the necessary configuration too (and maybe even improves the .NET out-of-the-box support).
Edit:
This is the branch I started a while ago to run the tests against other container runtimes too. The idea was to run jobs for Docker, Colima, Podman, etc. The install-podman/install.sh action contains the necessary steps from back then to run it on the pipeline.
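For reference, the configuration that usually matters for rootless Podman is pointing Testcontainers at the user's API socket and, if Ryuk cannot start, adjusting its settings. A hedged sketch (the environment variable names are the ones documented by Testcontainers; the socket path depends on your UID):

```shell
# Point Testcontainers (and other Docker API clients) at the rootless Podman socket.
# The path matches the remoteSocket path reported by `podman info` above.
export DOCKER_HOST="unix:///run/user/$(id -u)/podman/podman.sock"

# Ryuk mounts the API socket; on some Podman setups it only works privileged.
export TESTCONTAINERS_RYUK_CONTAINER_PRIVILEGED=true

# Last resort: disable Ryuk entirely (containers must then be cleaned up manually).
# export TESTCONTAINERS_RYUK_DISABLED=true
```

Whether the privileged flag is needed depends on the host's security settings (SELinux, seccomp); treat it as something to try, not a definitive requirement.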
Thanks @HofmeisterAn, it started working after running:
systemctl enable --now --user podman.socket
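For anyone hitting the same issue: after enabling the socket, a quick way to confirm the rootless Podman API socket is actually answering is a Docker-API ping over the Unix socket (the UID in the path will differ per machine; requires curl with --unix-socket support):

```shell
# Enable and start the rootless Podman API socket for the current user
systemctl --user enable --now podman.socket

# Ping the Docker-compatible API over the Unix socket; Podman replies "OK"
curl -s --unix-socket "/run/user/$(id -u)/podman/podman.sock" http://localhost/_ping
```

If the ping fails, Testcontainers will not be able to reach the daemon either, regardless of how DOCKER_HOST is set.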