raspberrypi / picamera2

New libcamera based python library

[HOW-TO] Delay preview by specified number of seconds

chrizbee opened this issue

Question / Problem
I am trying to delay the preview by a specified number of seconds or frames (roughly 20 s). I don't care about recording.
Is this possible with picamera2 (maybe using CircularOutput)?

Specs

  • Raspberry Pi 4 (4 and 8GB models)
  • Raspberry Pi OS Lite 64bit
  • DRM preview

At 1920 × 1080 × 3 bytes × 30 fps × 20 s ≈ 3.7 GB, the buffers should fit in RAM (without blanking).
I'd rather not encode the data just to write it to a file on the SD card...
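That estimate can be sanity-checked with shell arithmetic (assuming tightly packed RGB888 frames with no stride padding):

```shell
# 1920x1080, 3 bytes per pixel (RGB888), 30 fps, 20 s of delay
frame=$(( 1920 * 1080 * 3 ))     # 6220800 bytes per frame
total=$(( frame * 30 * 20 ))     # 3732480000 bytes, ~3.7 GB
echo "$frame bytes/frame, $total bytes total"
```

In practice the camera stack may pad each row to an aligned stride, so the real figure can be slightly higher.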

Alternatives

  1. I've already written and built rpicam-delay, based on rpicam-hello, which puts incoming requests into a queue and only starts processing them once the configured delay has passed. This gets me a little over a second of delay (after increasing the buffer count) before running into all kinds of errors.
  2. Using GStreamer with libcamerasrc -> queue min-threshold-time=x ns -> kmssink I can achieve almost a second of delay. Increasing the threshold time further leaves the pipeline stuck, showing only the first, undelayed frame.
  3. So picamera2 is currently my best bet before using libcamera directly...

Hi, interesting problem. Here are a few random thoughts off the top of my head.

  1. You're not going to be able to allocate enough camera buffers to get anywhere near 20s. The camera buffers are contiguous dmabufs, so there's a pretty limited supply of them.
  2. This means that to store 20s worth of buffers you're going to have to copy them. If you don't want to encode them, using "YUV420" will be much better because that's half the data. Easier to store, and easier to copy at the rate that you need. ("YUV420" should be more efficient even if you do end up encoding them too.)
  3. You'd have to write your own display code, I think, or use someone else's. The code in Picamera2 displays camera dmabufs, and is not expecting copies of buffers in user memory. Both DRM and GLES support YUV420 on the Pi, in case that helps.
  4. I do wonder about having two processes on either end of a fifo, one encoding, one decoding and displaying. Note that you can write fifos and files like this to /dev/shm which is a memory file system (so no thrashing the SD card, and much faster too). The encode half could possibly be rpicam-vid, and the decode/display end perhaps a standard application like ffplay or vlc.
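Point 4 above could be sketched roughly like this (untested; it assumes rpicam-vid's default H.264 output, and uses a growing file on the memory-backed /dev/shm filesystem rather than a fifo, since a plain fifo only buffers around 64 KiB by default and the writer would block long before 20 s of data queued up):

```shell
#!/bin/bash
# Encode into a file on /dev/shm (in RAM, so no SD card wear),
# then start playback 20 s later. The file keeps growing while
# ffplay reads it from the beginning, giving a fixed delay.
rpicam-vid -n -t 0 --width 1920 --height 1080 -o /dev/shm/delayed.data &
sleep 20
ffplay -fs -i /dev/shm/delayed.data
# Stop the recorder once playback ends, and clean up
pkill rpicam-vid
rm -f /dev/shm/delayed.data
```

One thing to watch with /dev/shm: the encoded file grows without bound while the pipeline runs, so a long session will eventually eat into RAM.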
  1. Makes sense. That explains the limits I hit with both rpicam-delay and the GStreamer queue.
  2. I was planning to use either YUV422 or YUV420 if the RGB888 data does not fit.
  3. Do you have a link for docs on how to write a DRM preview? I can't seem to find anything... I've also used libcamera + Qt + OpenGL on a Pi 4 before but I thought there is a "lighter" solution.
  4. This seems to be the quickest method to achieve what I want - thank you!
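For what it's worth, the per-frame sizes behind point 2 (3 bytes per pixel for RGB888, 2 for YUV422, 1.5 for YUV420, again assuming packed frames with no stride padding) work out as:

```shell
# Bytes per 1920x1080 frame for each candidate format
rgb888=$(( 1920 * 1080 * 3 ))        # 6220800
yuv422=$(( 1920 * 1080 * 2 ))        # 4147200
yuv420=$(( 1920 * 1080 * 3 / 2 ))    # 3110400
# Totals for 20 s at 30 fps (600 frames)
echo "RGB888: $(( rgb888 * 600 )) bytes"   # ~3.73 GB
echo "YUV420: $(( yuv420 * 600 )) bytes"   # ~1.87 GB
```

So YUV420 halves both the storage and the per-frame copy bandwidth relative to RGB888, which is the point David makes in (2).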

YUV422 may have some benefits if you can avoid h264 compression. If you do need that, then it would get converted to YUV420 anyway.

I'm afraid I don't know a lot about DRM. (One of my regular complaints is that you can usually find some Linux kernel API documentation, but the "yeah, now how do I actually use it in an application" bit just doesn't exist). There are some Raspberry Pi graphics related forums, I think, so you may find more knowledgeable folk there.

I ended up using rpicam-vid and ffplay as suggested (4).

#!/bin/bash
# Record continuously in the background, wait 20 seconds, then start
# playing the (still growing) file from the beginning.
rpicam-vid -n -t 0 --width 1920 --height 1080 -o /tmp/delayed.data &
sleep 20
ffplay -fs -i /tmp/delayed.data
# Stop the recorder once playback ends
pkill rpicam-vid

I just used /tmp/ to store the data since that was fast enough and I didn't have to worry about RAM size.

Thanks a bunch @davidplowman 😄