will's repositories
xvshiting.github.io
homePage
whisper
Robust Speech Recognition via Large-Scale Weak Supervision
FasterTransformer
Transformer related optimization, including BERT, GPT
Algorithm_Design_and_Analysis_Course_HomeWork
Algorithm Design and Analysis homework submission repo
pdfminer.six
Community-maintained fork of pdfminer - we fathom PDF
yolov5
YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
uvadlc_notebooks
Repository of Jupyter notebook tutorials for teaching the Deep Learning Course at the University of Amsterdam (MSc AI), Fall 2021/Spring 2022
graphqa
Protein quality assessment using Graph Convolutional Networks
adapter-transformers
Huggingface Transformers + Adapters = ❤️
Pretrained-Language-Model
Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab.
Greedy-snake
A greedy snake game in shell, written for fun.
doccano
Open source annotation tool for machine learning practitioners.
rebiber
A simple tool to update bib entries with their official information (e.g., DBLP or the ACL anthology).
DeBERTa
The implementation of DeBERTa
neuspell
NeuSpell: A Neural Spelling Correction Toolkit
tokenizers
💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
errant
ERRor ANnotation Toolkit: Automatically extract and classify grammatical errors in parallel original and corrected sentences.
gector
Official implementation of the paper "GECToR – Grammatical Error Correction: Tag, Not Rewrite", published at the BEA Workshop (co-located with ACL 2020): https://www.aclweb.org/anthology/2020.bea-1.16.pdf
ReCO
ReCO: A Large Scale Chinese Reading Comprehension Dataset on Opinion
PIE
Fast + Non-Autoregressive Grammatical Error Correction using BERT. Code and Pre-trained models for paper "Parallel Iterative Edit Models for Local Sequence Transduction": www.aclweb.org/anthology/D19-1435.pdf (EMNLP-IJCNLP 2019)
c3
Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension
bert
TensorFlow code and pre-trained models for BERT
pytorch-transformers
👾 A library of state-of-the-art pretrained models for Natural Language Processing (NLP)