
A video clipping tool based on FunASR's high-accuracy open-source speech recognition models and Gradio.


FunClip🎥

简体中文 | English

FunClip is a fully open-source, locally deployed automated video editing tool. It leverages Alibaba DAMO Academy's open-source FunASR Paraformer series models to perform speech recognition on videos. Then, users can freely choose text segments or speakers from the recognition results and click the trim button to obtain the video corresponding to the selected segments (Quick Experience).

On top of the basic features mentioned above, FunClip has the following highlights:

  • FunClip integrates Alibaba's open-source industrial-grade model Paraformer-Large, one of the best-performing open-source Chinese ASR models available, with over 13 million downloads on Modelscope. It also predicts timestamps accurately as part of recognition.
  • FunClip incorporates the hotword customization feature of SeACo-Paraformer, allowing users to specify entity words, names, etc. as hotwords during ASR to improve the recognition results (see the sketch after this list).
  • FunClip integrates the CAM++ speaker recognition model, so users can pick an automatically recognized speaker ID as the trimming target and clip the segments of a specific speaker.
  • The functionalities are realized through Gradio interaction, offering simple installation and ease of use. It can also be deployed on a server and accessed via a browser.
  • FunClip supports multi-segment free clipping and automatically returns full video SRT subtitles and target segment SRT subtitles, offering a simple and convenient user experience.
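
For readers who want to see what these pieces look like at the toolkit level, below is a minimal sketch using FunASR 1.0's AutoModel with Paraformer, hotword customization, and the CAM++ speaker model. The model aliases and options follow the public FunASR documentation; the input path is a placeholder, and this illustrates the underlying toolkit rather than FunClip's internal code.

# Minimal FunASR 1.0 sketch (illustration only, not FunClip's own code).
from funasr import AutoModel

model = AutoModel(
    model="paraformer-zh",   # Paraformer/SeACo ASR with timestamp prediction
    vad_model="fsmn-vad",    # voice activity detection for long audio
    punc_model="ct-punc",    # punctuation restoration
    spk_model="cam++",       # CAM++ speaker diarization
)

# Hotword customization: bias recognition towards entity words or names.
# "audio_from_video.wav" is a placeholder for audio extracted from your video.
res = model.generate(input="audio_from_video.wav", hotword="云栖大会 阿里巴巴")

# The result carries the recognized text plus timestamps (and speaker ids when
# spk_model is set), which is what lets selected text map back to video segments.
print(res[0]["text"])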

You're welcome to try it out, and we look forward to any requests and valuable suggestions you may have about subtitle generation or speech recognition~

Support Us🌟

Star History Chart

What's New🚀

  • 2024/03/06 Fixed bugs when using FunClip from the command line.
  • 2024/02/28 FunASR has been updated to version 1.0; FunClip now uses FunASR 1.0 and SeACo-Paraformer for ASR with hotword customization.
  • 2023/10/17 Fixed a bug where selecting multiple segments returned a video with the wrong length.
  • 2023/10/10 FunClip now supports speaker diarization: choose 'yes' under 'Recognize Speakers' to get recognition results with a speaker id for each sentence, then clip out the segments of one or more speakers (e.g. 'spk0' or 'spk0#spk3').

Install🔨

Python env install

# clone funclip repo
git clone https://github.com/alibaba-damo-academy/FunClip.git
cd FunClip
# install Python requirements
pip install -r ./requirements.txt
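
As an optional sanity check after installation, you can confirm that the core Python dependencies import cleanly; this assumes funasr and gradio expose __version__, which recent releases do.

# Optional check (not part of FunClip): verify the core dependencies installed.
import funasr
import gradio

print("funasr:", funasr.__version__)
print("gradio:", gradio.__version__)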

imagemagick install (Optional)

If you want to clip video files with embedded subtitles:

  1. ffmpeg and imagemagick are required
  • On Ubuntu
apt-get -y update && apt-get -y install ffmpeg imagemagick
sed -i 's/none/read,write/g' /etc/ImageMagick-6/policy.xml
  • On macOS
brew install imagemagick
sed -i '' 's/none/read,write/g' /usr/local/Cellar/imagemagick/7.1.1-8_1/etc/ImageMagick-7/policy.xml
  2. Download the font file to funclip/font
wget https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ClipVideo/STHeitiMedium.ttc -O font/STHeitiMedium.ttc
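
Optionally, you can verify that ImageMagick and the downloaded font work together before clipping. The sketch below assumes subtitles are rendered via moviepy's TextClip (the usual reason ImageMagick is required here) and follows the moviepy 1.x API; it is only a sanity check, not part of FunClip.

# Optional check: render one subtitle frame with the downloaded font.
# If ImageMagick's policy.xml still blocks text rendering, this raises an error.
from moviepy.editor import TextClip

clip = TextClip("subtitle test 字幕测试",
                font="font/STHeitiMedium.ttc",  # font downloaded above
                fontsize=48,
                color="white")
clip.save_frame("subtitle_check.png")  # writes a single rendered frame to disk
print("ImageMagick and font look OK")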

Use FunClip

A. Use FunClip as a local Gradio service

You can launch your own FunClip service, the same as the Modelscope Space, as follows:

python funclip/launch.py

Then visit localhost:7860 to open a Gradio service like the one below, and use FunClip following these steps:

  • Step 1: Upload your video file (or try the example videos below)
  • Step 2: Copy the text segments you need to 'Text to Clip'
  • Step 3: Adjust subtitle settings (if needed)
  • Step 4: Click 'Clip' or 'Clip and Generate Subtitles'
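
If you run FunClip on a remote server rather than your own machine, the Gradio service must listen on an externally reachable address, not just localhost. The snippet below is a generic Gradio illustration of that idea; it is not FunClip's launch.py, whose exact options may differ, so treat the binding arguments as an example only.

# Generic Gradio sketch (not FunClip's launch.py): bind to 0.0.0.0 so a browser
# on another machine can reach the app at http://<server-ip>:7860.
import gradio as gr

def echo(text):
    # placeholder standing in for the real FunClip interface
    return text

demo = gr.Interface(fn=echo, inputs="text", outputs="text")
demo.launch(server_name="0.0.0.0", server_port=7860)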

B. Experience FunClip in Modelscope

You can try FunClip in the ModelScope Space: link.

C. Use FunClip in command line

FunClip also supports recognition and clipping from the command line:

# step1: Recognize
python funclip/videoclipper.py --stage 1 \
                       --file examples/2022云栖大会_片段.mp4 \
                       --output_dir ./output
# now you can find recognition results and entire SRT file in ./output/
# step2: Clip
python funclip/videoclipper.py --stage 2 \
                       --file examples/2022云栖大会_片段.mp4 \
                       --output_dir ./output \
                       --dest_text '我们把它跟乡村振兴去结合起来,利用我们的设计的能力' \
                       --start_ost 0 \
                       --end_ost 100 \
                       --output_file './output/res.mp4'
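
To make the clip stage less of a black box, the sketch below shows what the final cutting step boils down to: extracting one time range from the source video with plain ffmpeg (already required above). The segment boundaries here are hypothetical; in practice they come from the recognition results and SRT files written to ./output/, and this is an illustration rather than FunClip's videoclipper.py.

# Illustration only (not FunClip's code): cut one [start, end] segment with ffmpeg.
import subprocess

def cut_segment(src, start, end, dst):
    # placing -ss/-to after -i decodes up to the cut points: slower, but frame-accurate
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-ss", f"{start:.3f}", "-to", f"{end:.3f}", dst],
        check=True,
    )

# hypothetical boundaries; real ones come from the timestamps/SRT in ./output/
cut_segment("examples/2022云栖大会_片段.mp4", 12.50, 30.00, "./output/segment_manual.mp4")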

Ongoing🌵

  • FunClip will support the Whisper model for English users soon.

Community Communication🍟

FunClip was first open-sourced by the FunASR team; any useful PR is welcome.

You can also scan one of the following DingTalk or WeChat group QR codes to join our community for communication.

DingTalk group WeChat group

Find Speech Models in FunASR

FunASR aims to build a bridge between academic research and industrial applications of speech recognition. By supporting the training and fine-tuning of the industrial-grade speech recognition models released on ModelScope, researchers and developers can carry out research and production of speech recognition models more conveniently and promote the development of the speech recognition ecosystem. ASR for Fun!

📚FunASR Paper:

📚SeACo-Paraformer Paper:

🌟Support FunASR:

About


License: MIT License

