jerryji1993 / DNABERT

DNABERT: pre-trained Bidirectional Encoder Representations from Transformers model for DNA-language in genome

Home Page: https://doi.org/10.1093/bioinformatics/btab083


attention maps generated in pre-training stage or fine-tuning stage

Shicheng-Guo opened this issue

Dear Jerry,

Thank you so much for your great contribution to the field in developing this awesome pre-trained model. The manuscript doesn't explicitly state whether the attention maps (DNABERT-viz) are generated during the pre-training stage or the fine-tuning stage. Could you please explain this more explicitly?
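
For concreteness, here is how I would pull attention maps out of a BERT-style checkpoint with the HuggingFace Transformers library. This is only a minimal sketch, not the DNABERT-viz code itself; the checkpoint path, the 6-mer tokenization, and the recent Transformers API (`output_attentions=True`, `outputs.attentions`) are my assumptions. My understanding is that whichever weights are loaded (pre-trained or fine-tuned) determines which stage the maps reflect:

```python
# Sketch: extract attention maps from a BERT-style checkpoint.
# NOT the authors' DNABERT-viz pipeline; paths and k-mer size are placeholders.
import torch
from transformers import BertModel, BertTokenizer

model_path = "path/to/dnabert-checkpoint"  # hypothetical: pre-trained OR fine-tuned weights
tokenizer = BertTokenizer.from_pretrained(model_path)
model = BertModel.from_pretrained(model_path, output_attentions=True)
model.eval()

# DNABERT represents a sequence as overlapping k-mers (6-mers here),
# joined by spaces so the tokenizer treats each k-mer as one token.
seq = "ATGGCGTACGTTAGC"
kmers = " ".join(seq[i:i + 6] for i in range(len(seq) - 5))
inputs = tokenizer(kmers, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
attn = torch.stack(outputs.attentions)  # (layers, batch, heads, len, len)
avg_map = attn.mean(dim=(0, 2))[0]      # average over layers and heads -> (len, len)
print(avg_map.shape)
```

So my question reduces to: which checkpoint does DNABERT-viz load at this step in the paper's figures?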

Thanks.

Shicheng