V0.13 is very laggy
mengru-lotusflare opened this issue · comments
Hey @mengru-lotusflare,
In that release we made a major change to how the UI client interacts with its backend: it is no longer done via grpc-web but with a simple short-polling scheme. I believe that's why you might be experiencing this.
Could you please try this version? Let's see if that improves your situation.
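For context, "short polling" just means the client repeatedly issues plain HTTP requests on a timer instead of holding a streaming connection open. A minimal sketch of the idea (the endpoint URL and interval are made up for illustration; this is not the actual Hubble UI client code):

```shell
# Sketch of a short-polling loop. poll_once stands in for the HTTP request
# the UI client would make, e.g. curl -s "http://localhost:8081/api/flows"
# (hypothetical endpoint). Here it is stubbed so the sketch runs anywhere.
poll_once() {
  echo "flows batch"
}

# The real client would loop indefinitely with a delay between requests;
# three iterations are used here so the sketch terminates.
i=0
while [ "$i" -lt 3 ]; do
  poll_once
  i=$((i + 1))
  # sleep 1   # polling interval in the real loop
done
```

Each poll returns whatever accumulated since the last request, which trades the persistent-stream complexity of grpc-web for a small, tunable amount of latency.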
let me have a try
@yandzee I have tried the hubble-ui frontend image quay.io/cilium/hubble-ui-ci:f41966374314aa145dfb8fcf78e4f80a45461fb3@sha256:4db5ed2dc6a1eee84235dd66c0f106b25717ee4be5ecab2b94b776eba7722a51
and the hubble-ui backend image quay.io/cilium/hubble-ui-backend-ci:f41966374314aa145dfb8fcf78e4f80a45461fb3@sha256:35459ec5a39e09854a9ab2fb46d18580b151f65c0b2fa5350a5710bce2e6a861
with cilium helm v0.13.12, but the hubble-ui pod cannot start:
```
backend:
  Port:          8090/TCP
  Host Port:     0/TCP
  State:         Waiting
    Reason:      CrashLoopBackOff
  Last State:    Terminated
    Reason:      Error
    Message:     exec /usr/bin/backend: exec format error
```
@mengru-lotusflare The `exec format error` means that the backend binary isn't compatible with your CPU architecture. Please try not specifying the `sha256:*` parts of the image references.
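Dropping the digest lets the container runtime resolve the multi-arch manifest to the right image for the node's CPU architecture, whereas a `@sha256:*` digest pins one specific (possibly wrong-arch) image. A sketch of the Helm values override, assuming the `image.override` fields from the cilium chart:

```yaml
# Hypothetical Helm values fragment (field names assumed from the cilium chart):
# pin the images by tag only, without the @sha256 digest.
hubble:
  ui:
    frontend:
      image:
        override: "quay.io/cilium/hubble-ui-ci:f41966374314aa145dfb8fcf78e4f80a45461fb3"
    backend:
      image:
        override: "quay.io/cilium/hubble-ui-backend-ci:f41966374314aa145dfb8fcf78e4f80a45461fb3"
```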
Hi @geakstr, the `exec format error` problem has been fixed. The UI sometimes renders faster than before, but sometimes it can't load data for a long time, and sometimes it only loads part of the data.
It would be much better if the flow buffer size and duration were configurable.
Pushed another commit to that PR; now you can try to tune it manually by setting the `FLOWS_THROTTLE_DELAY` and `FLOWS_THROTTLE_SIZE` environment variables.
Btw, could you please share the logs from the UI backend container? I don't really see how you might be getting that red "stream reconnecting" notification.