AlainConnor / Depression-Detection-Through-Multi-Modal-Data

Conventionally, depression detection has relied on extensive clinical interviews, in which a psychologist studies the subject's responses to assess their mental state. Our model mirrors this approach by fusing three modalities (word context, audio, and video) to predict the mental health of the patient. The output is binary: a yes/no denoting whether the patient shows symptoms of depression. We have built a deep learning model that fuses these three modalities, assigns each an appropriate weight, and produces this prediction.
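The weighted fusion of the three modalities can be sketched as follows. This is a minimal illustrative sketch, not the repository's actual code: the per-modality scorers are stubbed as fixed linear projections, and the class name, embedding dimensions, and fusion mechanism (softmax-normalized modality weights over per-modality scores) are all assumptions for the example.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LateFusionClassifier:
    """Hypothetical sketch of weighted late fusion over three modalities:
    text, audio, and video embeddings each get a linear score, and learned
    fusion weights (softmax-normalized) combine the scores into one logit."""

    def __init__(self, dims, seed=0):
        rng = np.random.default_rng(seed)
        # one linear scorer per modality (stand-ins for trained encoders)
        self.scorers = [rng.normal(size=d) for d in dims]
        # raw (pre-softmax) fusion weights, one per modality
        self.fusion_logits = np.zeros(3)

    def predict_proba(self, text_vec, audio_vec, video_vec):
        # per-modality scalar scores
        scores = np.array([
            w @ v for w, v in zip(self.scorers, (text_vec, audio_vec, video_vec))
        ])
        alpha = softmax(self.fusion_logits)  # modality weights, sum to 1
        return sigmoid(alpha @ scores)       # probability of depression symptoms

# illustrative dimensions: 300-d word vectors, 64-d audio, 128-d video features
clf = LateFusionClassifier(dims=(300, 64, 128))
p = clf.predict_proba(np.zeros(300), np.zeros(64), np.zeros(128))
label = "yes" if p >= 0.5 else "no"
```

In practice the fusion weights and encoders would be trained jointly with a binary cross-entropy loss, so that the model learns how much to trust each modality.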
