Stream latency pressure not getting fired
abdulsiddiqi opened this issue
I've run into a scenario where we are consistently publishing frames successfully to a stream, but the buffer overflow and dropped frame callbacks are getting fired. I looked at the exact moment when the content view duration went past the max_latency duration, and no stream latency pressure callback was fired around that time segment. This is for a REAL_TIME configured stream.
Would resetting the connection in the buffer overflow callback be the right move in this scenario? Will the stream recover in such a way that it publishes at a faster rate than the one defined in StreamInfo, to make up for the lost time and to keep some extra headroom in the buffer for future hiccups?
Additionally, I was wondering what the difference in behavior is between NON_REALTIME and REALTIME?
There are multiple questions here.
- I am not sure why the latency pressure callback is not firing. Check that real-time mode is specified, that maxLatency is non-zero and smaller than the overall buffer duration, and that the callback is non-NULL. In the Java code it shouldn't be NULL, but I am not sure whether there have been any changes on your side.
- Buffer overflow fires at a static 85% of the buffer, so you could indeed use it to trigger a connection reset. Make sure you add some logic to "ignore" the callback while the reset is in progress, as the act of resetting the connection will itself trigger the callbacks again. The C SDK Continuous Retry policy callbacks do just that.
- The recovered stream will stream faster than real time, if the bandwidth allows, to catch up with the head.
- The rate defined in StreamInfo is used to optimize the content view (or temporal view) allocations. The streaming itself is always geared towards maximizing bandwidth utilization.
- There are three modes defined currently, but only two are used: real-time and offline. In offline mode, the media thread producing the frames will block awaiting buffer availability (or the putFrame call will time out).