ossrs / srs

SRS is a simple, high-efficiency, real-time video server supporting RTMP, WebRTC, HLS, HTTP-FLV, SRT, MPEG-DASH, and GB28181.

Home Page: https://ossrs.io

Support ffmpeg png output as stream for thumbnail creation?

tmacneil opened this issue

I was experimenting with the ffmpeg transcode engine in SRS and noticed that only the libx264 vcodec was supported for output. I was trying to set up a transcode configuration to create a thumbnail every 60 seconds, but the only file output I could create was FLV. Do you have any plans to allow PNG output as well, so that web-browser-compatible images could be created during the stream?
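
For context, a hypothetical transcode engine along the lines of this request might look like the sketch below. The directive names follow SRS's existing transcode config, but the png vcodec, the fps vfilter, the disabled audio, and the image-file output are assumptions; they are exactly the parts that are not accepted today:

vhost dev {
    transcode {
        enabled     on;
        ffmpeg      ./objs/ffmpeg/bin/ffmpeg;
        engine thumb {
            enabled     on;
            vfilter {
                # one frame every 60 seconds
                vf      fps=1/60;
            }
            vcodec      png;        # hypothetical: the engine currently accepts only libx264 here
            acodec      an;         # hypothetical: drop audio for image output
            output      /tmp/thumb/[app]/[stream]_%06d.png;    # hypothetical: file output instead of an RTMP URL
        }
    }
}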

Yep, transcode should support PNG output.
Can you tell me the ffmpeg command to transcode RTMP to PNG?

I use the following ffmpeg command to create single-frame PNG thumbnails from an RTMP stream, one frame every 6 seconds (fps=1/6). It creates a file for every frame:

/usr/bin/ffmpeg -i rtmp://127.0.0.1:1935/live?vhost=dev/stream -vf fps=1/6 -vcodec png -an -f image2 -y /tmp/thumb/live/stream_%06d.png

tmacneil@ubuntu-VirtualBox:/tmp/thumb/live$ ll *.png
-rw-rw-r-- 1 tmacneil tmacneil 131554 Oct 16 09:01 stream_000001.png
-rw-rw-r-- 1 tmacneil tmacneil 129136 Oct 16 09:01 stream_000002.png
-rw-rw-r-- 1 tmacneil tmacneil 124447 Oct 16 09:01 stream_000003.png

tmacneil@ubuntu-VirtualBox:/tmp/thumb/live$ file stream_000001.png
stream_000001.png: PNG image data, 320 x 240, 8-bit/color RGB, non-interlaced

Here's the command for creating jpg thumbnails instead of png:

/usr/bin/ffmpeg -i rtmp://127.0.0.1:1935/live?vhost=dev/stream -vf fps=1/6 -vcodec mjpeg -an -f image2 -y /tmp/thumb/live/stream_%06d.jpg

tmacneil@ubuntu-VirtualBox:/tmp/thumb/live$ ll *.jpg
-rw-rw-r-- 1 tmacneil tmacneil 7866 Oct 16 09:02 stream_000001.jpg
-rw-rw-r-- 1 tmacneil tmacneil 12129 Oct 16 09:02 stream_000002.jpg
-rw-rw-r-- 1 tmacneil tmacneil 12772 Oct 16 09:02 stream_000003.jpg

tmacneil@ubuntu-VirtualBox:/tmp/thumb/live$ file stream_000001.jpg
stream_000001.jpg: JPEG image data, JFIF standard 1.02, comment: "Lavc56.1.100"

I use the -ss and -vframes options to pull just a single frame at a certain point in the stream and then exit. For example:

/usr/bin/ffmpeg -i rtmp://127.0.0.1:1935/live?vhost=dev/stream -vf fps=1/6 -vcodec png -f image2 -an -y -ss 00:00:05.000 -vframes 1 /tmp/thumb/live/single_frame.png

tmacneil@ubuntu-VirtualBox:/tmp/thumb/live$ ll single*
-rw-rw-r-- 1 tmacneil tmacneil 130080 Oct 16 09:04 single_frame.png

tmacneil@ubuntu-VirtualBox:/tmp/thumb/live$ file single_frame.png
single_frame.png: PNG image data, 320 x 240, 8-bit/color RGB, non-interlaced

I haven't tested how SRS handles a graceful exit from ffmpeg. I'll test that shortly, but if it treats a graceful exit as success and doesn't restart ffmpeg, you'd be able to get just the single thumbnail if desired.

I see, what about the following workflow for snapshots?

  1. An encoder, for instance FMLE or FFmpeg, starts publishing an RTMP stream to SRS.
  2. SRS starts a transcoder specified by the transcode section of the config, where the transcoder can be a bash script.
  3. The bash script execs ffmpeg to take a snapshot, using your single-frame command: /usr/bin/ffmpeg -i rtmp://127.0.0.1:1935/live?vhost=dev/stream -vf fps=1/6 -vcodec png -f image2 -an -y -vframes 1 /tmp/thumb/live/single_frame.png
  4. The bash script sleeps for a while, for instance 10s, after the ffmpeg process exits gracefully.
  5. The bash script wakes up and takes a snapshot again.

The bash script could also exec ffmpeg with your multiple-frame command: /usr/bin/ffmpeg -i rtmp://127.0.0.1:1935/live?vhost=dev/stream -vf fps=1/6 -vcodec png -an -f image2 -y /tmp/thumb/live/stream_%06d.png. A minimal sketch of such a script follows below.
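
A minimal sketch of that wrapper, assuming the same example stream URL and output path as above (SRS would start this script as the transcoder when publishing begins and kill it when publishing stops):

#!/bin/bash
# Snapshot wrapper sketch: take one PNG frame, sleep, repeat.
# The stream URL and output directory are examples only.
STREAM="rtmp://127.0.0.1:1935/live?vhost=dev/stream"
OUTDIR="/tmp/thumb/live"
mkdir -p "$OUTDIR"

while true; do
    # -vframes 1 makes ffmpeg exit on its own after one frame.
    /usr/bin/ffmpeg -i "$STREAM" -vf fps=1/6 -vcodec png -f image2 -an -y \
        -vframes 1 "$OUTDIR/single_frame.png"
    # Wait a while before the next snapshot, e.g. 10s.
    sleep 10
done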

Is this ok for you?

SRS will start the transcoder when the encoder starts publishing the stream, and terminate it when the encoder stops publishing.
I think it's better for SRS to fork a bash script (or python, or another process) to manage the ffmpeg, and to restart or gracefully kill ffmpeg.
What's your advice?

That's not too different from the workaround I came up with this weekend. I use the on_publish HTTP callback to a golang app, which kicks off a background task that runs ffmpeg and manages the output. This actually works out quite well for me because I can then do any image manipulation I want with the image. I think I will stick with this approach because of how much added functionality it gives me. Thank you!
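
For reference, the SRS side of that callback approach is just the http_hooks config; a sketch with an example callback URL (the golang app behind it then forks and manages ffmpeg):

vhost dev {
    http_hooks {
        enabled         on;
        # POSTed when an encoder starts or stops publishing, so the external
        # app can start or stop its background ffmpeg snapshot job.
        on_publish      http://127.0.0.1:8085/api/v1/streams;
        on_unpublish    http://127.0.0.1:8085/api/v1/streams;
    }
}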

So, for the snapshot feature, there are two ways:

  1. Use the HTTP callback, like what you do.
  2. Use the transcoder, which should support the workflow defined in #502 (comment).

Agree?

Agreed

For the api-server that handles the snapshot callback, it will generate several thumbnails and choose the best one; see 1aa4502.
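
A rough sketch of that idea, assuming "best" simply means the largest file, which tends to be the least blank or blurry frame (the actual selection logic in 1aa4502 may differ; the stream URL and paths are examples):

#!/bin/bash
# Grab a short burst of candidate frames, then keep the largest one as "best".
STREAM="rtmp://127.0.0.1:1935/live?vhost=dev/stream"
OUTDIR="/tmp/thumb/live"
mkdir -p "$OUTDIR"

# One frame per second, five candidates in total.
/usr/bin/ffmpeg -i "$STREAM" -vf fps=1 -vcodec png -f image2 -an -y \
    -vframes 5 "$OUTDIR/candidate_%03d.png"

# A larger PNG usually encodes more detail, so take the biggest candidate.
best=$(ls -S "$OUTDIR"/candidate_*.png | head -n 1)
cp "$best" "$OUTDIR/stream-best.png"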

FIXED.