BrianTin's repositories
allennlp
An open-source NLP research library, built on PyTorch.
bert
Adds text-similarity data preprocessing. TensorFlow code and pre-trained models for BERT.
bi-att-flow
The Bi-directional Attention Flow (BiDAF) network is a multi-stage hierarchical process that represents context at different levels of granularity and uses a bi-directional attention flow mechanism to achieve a query-aware context representation without early summarization.
CppPrimer
:books: Solutions for C++ Primer 5th exercises.
CSrankings
A web app for ranking computer science departments according to their research output in selective venues.
DeepLearning-500-questions
Deep Learning 500 Questions: a Q&A-style treatment of common topics in probability, linear algebra, machine learning, deep learning, computer vision, and other active areas, written to help the author and interested readers. The book comprises 18 chapters and over 500,000 characters. Given the author's limited expertise, readers are kindly asked to point out any errors. Work in progress... For collaboration inquiries, contact scutjy2015@163.com. All rights reserved; infringement will be pursued. Tan 2018.06
JAPE
Joint Attribute-Preserving Embedding
knu_ci
Code for character identification with knu_ci
learning_to_retrieve_reasoning_paths
The official implementation of ICLR 2020, "Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering".
LLMDataHub
A quick guide to trending instruction fine-tuning datasets
ltp
Language Technology Platform
MTransE
Code and data for the IJCAI-17 paper "Multilingual Knowledge Graph Embeddings for Cross-lingual Knowledge Alignment"
Multimodal-Toolkit
A multimodal model for text and tabular data, using HuggingFace Transformers as the building block for the text modality
nlp-beginner
A hands-on NLP tutorial for beginners
NLP-progress
Repository to track the progress in Natural Language Processing (NLP), including the datasets and the current state-of-the-art for the most common NLP tasks.
NTU_ML2017_Hung-yi-Lee_HW
Homework for Prof. Hung-yi Lee's NTU ML2017 (Spring and Fall) machine learning course
OpenNMT-py
Open Source Neural Machine Translation in PyTorch
OpenNMT-tf
Neural machine translation and sequence learning using TensorFlow
tensorflow-handbook
简单粗暴 TensorFlow 2.0 | A Concise Handbook of TensorFlow 2.0
transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
tutorials
Machine-learning tutorials
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs