FFmpeg GPU Demo

An FFmpeg-based demo showing the GPU's all-round capability in video processing

This demo shows an FFmpeg-based full-GPU rendering and inference pipeline. The code is based on FFmpeg release 4.4. The project consists of several new FFmpeg filters; clips rendered in real time by these filters are shown below.

(rio demo GIF)

[Updates]

  • 2022/05: 3DDFA filter added.

Features

We are still actively developing this project and will keep this list up to date. Please refer to each feature's document for details, including how to build and run it.

Note that the purpose of this project is demonstration. As the name FFmpeg GPU Demo indicates, we would like to show you how to build such a pipeline rather than deliver a product or turnkey solution.

Getting started

The project has complex dependencies, so we offer a Dockerfile to deploy the environment quickly. We assume that you have already installed the NVIDIA GPU driver and nvidia-docker. You can enable all the features by following the commands below:

git clone --recursive https://github.com/NVIDIA/FFmpeg-GPU-Demo.git
docker pull nvcr.io/nvidia/pytorch:22.03-py3
cd FFmpeg-GPU-Demo
docker build -t ffmpeg-gpu-demo:22.03-py3 --build-arg TAG=22.03-py3 .
docker run --gpus all -it --rm -e NVIDIA_DRIVER_CAPABILITIES=all -v $(pwd):/workspace/ffmpeg-gpu-demo ffmpeg-gpu-demo:22.03-py3
cd ffmpeg-gpu-demo/ffmpeg-gpu/
bash config_ffmpeg_libtorch.sh
make -j10
make install
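
Once make install finishes, you can sanity-check the build by listing the newly registered GPU filters. The filter names below (pose, format_cuda, tensorrt) are taken from the sample command later in this README; this is only a quick check, assuming ffmpeg was installed onto the container's PATH.

# Each of the demo's filters should appear in the filter list if the build succeeded
ffmpeg -hide_banner -filters | grep -E "pose|format_cuda|tensorrt"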

If you only want a specific feature, please refer to that feature's doc for a simplified build. We will provide a complete Docker image in the future so that you can pull and run it directly.

Our project provides an AI + graphics pipeline in FFmpeg, as shown in the GIF above. Sample command:

ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i <input> -vf scale_npp=1280:720,pose="./img2pose_v1_ft_300w_lp_static_nopost.onnx":8,format_cuda=rgbpf32,tensorrt="./ESRGAN_x4_dynamic.trt",format_cuda=nv12 -c:v h264_nvenc <output>

Please refer to the pose filter doc for how to run the pipeline.
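
For reference, here is the same pipeline written out with hypothetical input/output file names (input.mp4 and output.mp4 are placeholders); the per-stage summary in the comments is our reading of the filter names and is not authoritative.

# Decode on GPU -> scale to 720p (scale_npp) -> pose inference from the ONNX model ->
# convert to planar float RGB -> ESRGAN x4 super-resolution via TensorRT ->
# convert back to NV12 -> encode with h264_nvenc
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \
    -vf "scale_npp=1280:720,pose=./img2pose_v1_ft_300w_lp_static_nopost.onnx:8,format_cuda=rgbpf32,tensorrt=./ESRGAN_x4_dynamic.trt,format_cuda=nv12" \
    -c:v h264_nvenc output.mp4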

Additional Resources

If you are interested in the technical details of our project, check out our GTC 2022 talk: AI-based Cloud Rendering: Full-GPU Pipeline in FFmpeg.

FFmpeg GPU Demo was first developed by the NVIDIA DevTech & SA team and is currently maintained by Xiaowei Wang. Authors include Yiming Liu, Jinzhong (Thor) Wu, and Xiaowei Wang.

FFmpeg GPU Demo is under the MIT license; check out the LICENSE file for details.
