w2v-cif-bert
Code for the paper: Efficiently Fusing Pretrained Acoustic and Linguistic Encoders for Low-resource Speech Recognition
We provide only the key files of our model, w2v-cif-bert, which can be reimplemented on top of fairseq. If you have any questions about the reimplementation, please contact yicheng2016@ia.ac.cn.