A script pipeline that processes reaction videos to a particular song so that all the reactions can be combined into a single video (called a reaction concert).
Watch some Reaction Concerts on the Resound YouTube channel.
Read some background on the overall project motivation.
The main parts are:
- **Alignment**: Identifying when each reactor first encounters each part of the song. This enables creating a stripped-down version of the reaction that is perfectly aligned with the song.
- **Facial recognition and gaze tracking**: Identifying unique faces in the reaction video so we can crop to the reactors. This stage also hypothesizes the dominant gaze direction, for use later in the compositor.
- **Backchannel isolation**: Identifying when a reactor is saying something (or hooting!), and isolating just that sound.
- **Composing and audio mixing**: Creating a hexagonal grid, placing the song video, assigning each reactor a cell in the grid based on dominant gaze, mixing and mastering the audio (including stereo panning based on grid position), and outputting the final video.
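To illustrate the last step, here is a minimal sketch of how stereo panning from grid position might work. The function name, the column-based mapping, and the constant-power panning law are illustrative assumptions, not the project's actual implementation:

```python
import math

def pan_gains(col: int, n_cols: int) -> tuple[float, float]:
    """Hypothetical helper: constant-power stereo gains for a reactor
    placed in column `col` (0-indexed) of an `n_cols`-wide hex grid.
    The leftmost column pans hard left, the rightmost hard right."""
    # Map the column index to a pan position in [-1.0, 1.0].
    pan = 2 * col / (n_cols - 1) - 1 if n_cols > 1 else 0.0
    # Constant-power law keeps perceived loudness even across the field.
    angle = (pan + 1) * math.pi / 4  # 0 = full left .. pi/2 = full right
    return math.cos(angle), math.sin(angle)

# A reactor in the center column is split equally between channels.
left, right = pan_gains(2, 5)
```

Each reactor's isolated backchannel audio would then be scaled by these two gains before being summed into the final stereo mix.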
Some incomplete installation notes.
Created by Travis Kriplean.