google-research-datasets / recognizing-multimodal-entailment

The dataset consists of public social media URL pairs and the corresponding entailment labels, compiled for an external conference (ACL 2021). Each URL contains a post with both linguistic (text) and visual (image) content. Entailment labels are human-annotated through Google Crowdsource.
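
Because the posts themselves are referenced by URL, the distributed data is essentially a table of URL pairs and labels. Below is a minimal sketch for inspecting it; the file name `dataset.csv` and the column names `url_1`, `url_2`, and `label` are placeholders, so check the repository for the actual file and schema.

```python
import pandas as pd

# Load the URL pairs and their human-annotated entailment labels.
# NOTE: "dataset.csv" and the column names below are illustrative placeholders.
df = pd.read_csv("dataset.csv")

# Basic sanity checks: dataset size and label distribution.
print(f"{len(df)} URL pairs")
print(df["label"].value_counts())

# Each row pairs two public posts (referenced by URL) with an entailment label.
print(df[["url_1", "url_2", "label"]].head())
```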

Recognizing Multimodal Entailment (ACL 2021)

The Recognizing Multimodal Entailment tutorial was held virtually at ACL-IJCNLP 2021 on August 1st.

It gives an overview of multimodal learning, introduces a multimodal entailment dataset, and encourages future research on the topic. For more information, see https://multimodal-entailment.github.io/

A baseline model for this dataset, authored by Sayak Paul, is available on Keras.io, along with an accompanying repository.
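
As a rough illustration of what such a baseline involves, the sketch below builds a two-tower pair classifier in Keras: each post's text and image are encoded and fused, the two post representations are concatenated, and a softmax head predicts the entailment label. This is not the published Keras.io baseline; the encoders, layer sizes, 128x128 image resolution, and three-way label set are assumptions for illustration only.

```python
import tensorflow as tf
from tensorflow.keras import layers


def make_post_encoder(vocab_size=20000, seq_len=64, image_size=128, dim=256):
    """Encodes one post (tokenized text + image) into a single vector."""
    text_in = layers.Input(shape=(seq_len,), dtype="int32")
    image_in = layers.Input(shape=(image_size, image_size, 3))

    # Text branch: token embeddings averaged over the sequence.
    t = layers.Embedding(vocab_size, 128)(text_in)
    t = layers.GlobalAveragePooling1D()(t)

    # Image branch: a small convolutional stack with global pooling.
    x = layers.Conv2D(32, 3, strides=2, activation="relu")(image_in)
    x = layers.Conv2D(64, 3, strides=2, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)

    fused = layers.Dense(dim, activation="relu")(layers.Concatenate()([t, x]))
    return tf.keras.Model([text_in, image_in], fused)


def make_entailment_model(seq_len=64, image_size=128, num_classes=3):
    """Classifies a pair of posts with a shared post encoder."""
    encoder = make_post_encoder(seq_len=seq_len, image_size=image_size)

    text_1 = layers.Input(shape=(seq_len,), dtype="int32", name="text_1")
    image_1 = layers.Input(shape=(image_size, image_size, 3), name="image_1")
    text_2 = layers.Input(shape=(seq_len,), dtype="int32", name="text_2")
    image_2 = layers.Input(shape=(image_size, image_size, 3), name="image_2")

    # The same encoder is applied to both posts.
    post_1 = encoder([text_1, image_1])
    post_2 = encoder([text_2, image_2])

    # Fuse the pair and classify (e.g. implies / contradicts / no entailment).
    pair = layers.Dense(256, activation="relu")(
        layers.Concatenate()([post_1, post_2]))
    output = layers.Dense(num_classes, activation="softmax")(pair)

    model = tf.keras.Model([text_1, image_1, text_2, image_2], output)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


model = make_entailment_model()
model.summary()
```

Training would additionally require fetching each post's text and image from its URL and tokenizing/resizing them into the shapes the model expects.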

Example of Multimodal Entailment

An example of multimodal entailment where text or images alone would not suffice for semantic understanding or pairwise classification.
