allenai / allennlp

An open-source NLP research library, built on PyTorch.

Home Page: http://www.allennlp.org


Details and results of implementing BERT and SpanBERT for coreference resolution?

gleb-skobinsky opened this issue · comments

Hi!
What are the results of the BERT-based implementations of the coref model on the CoNLL-2012 shared task?
The .jsonnet config file of the coreference resolution model specifies the "Higher-Order Coreference Resolution with Coarse-to-Fine Inference" model. However, that paper only published results for the model with ELMo embeddings. The coref-spanbert config seems close to the 2019 model by Joshi (https://github.com/mandarjoshi90/coref), but is its performance the same as in his implementation?
Finally, the coref-bert-lstm config uses a recurrent LSTM layer over BERT embeddings, which differs from Joshi's approach. Have any results been published for this particular implementation?
I'm using these coref configs to train a coreference model for another language (not English), since the code is very robust and clean. But choosing it over later models (like Joshi's or CorefQA) needs substantiation, especially for academia. I urgently need that substantiation, please help :)
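For context, adapting these configs to another language mostly comes down to swapping the pretrained transformer in the jsonnet file. A minimal sketch of the relevant fragment, assuming the key layout of the AllenNLP coref configs of that era (`bert-base-multilingual-cased` is just an illustrative model choice, not a recommendation; verify the exact keys against your allennlp-models version):

```jsonnet
// Hedged sketch: override the pretrained transformer for a non-English
// coref model. Field names follow the coref-spanbert-style configs;
// check them against the actual .jsonnet file in your installed version.
local transformer_model = "bert-base-multilingual-cased";  // illustrative

{
  "dataset_reader": {
    "type": "coref",
    "token_indexers": {
      "tokens": {
        "type": "pretrained_transformer_mismatched",
        "model_name": transformer_model,
        "max_length": 512
      }
    },
    "max_span_width": 30
  },
  // ... model, data paths, and trainer sections as in the original config
}
```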

@kot-ne-kod-77 This PR will add coref results on CoNLL-2012 using the SpanBERT-large model, which will be seen here.

Thanks so much!