Automatic Video Lecture Summarization with Injection of Multimodal Information: Two Novel Datasets and a New Approach
Enrico Castelli's Master's Thesis
Note: this repository is a placeholder. The actual contents (code, datasets, and models) will be published here in the future.
With the growing popularity of online courses with video lectures, offered both by universities such as PoliTo and by MOOC platforms, the ability to distill key information is becoming ever more essential to students. Video lectures deliver their content in a multimodal way: not only through the speaker's voice, which can be transcribed, but also through visual information such as writing on a blackboard or projected slides. The aim of this work is to offer learners and teachers a new tool that lets them supply one of the proposed models with the transcript of a video lecture and obtain a short summary in return, fully automatically. To train our Transformer-based models, we build two datasets from scratch: OpenULTD, a dataset of university lecture and public talk transcripts, and UniSum, a transcript-summary dataset of university lectures from sixty-seven courses offered at MIT and Yale, which we also extend by leveraging the lectures' visual information.
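Until the code and models are published here, the intended transcript-in, summary-out workflow can be illustrated with a minimal extractive baseline. Note that this is only a hypothetical frequency-based sketch for illustration; the thesis models are abstractive Transformer-based summarizers, and the function name `summarize_transcript` is an assumption, not part of the released code.

```python
import re
from collections import Counter

def summarize_transcript(transcript: str, num_sentences: int = 2) -> str:
    """Toy extractive baseline: score sentences by word frequency
    and return the top-scoring ones in their original order.
    (Illustrative only -- NOT the thesis's Transformer-based models.)"""
    # Split the transcript into sentences on terminal punctuation.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", transcript) if s.strip()]
    # Count word frequencies over the whole transcript.
    freq = Counter(re.findall(r"[a-z']+", transcript.lower()))

    def score(sentence: str) -> float:
        # Average frequency of the sentence's words.
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    # Preserve the original lecture order of the selected sentences.
    return " ".join(s for s in sentences if s in top)
```

A real use of the published models would replace this scoring logic with a call to a trained abstractive summarizer, but the interface (plain transcript string in, short summary string out) is the same one described above.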
Find the PDF on PoliTo's website: http://webthesis.biblio.polito.it/id/eprint/26717.