L.Ma's repositories
996.ICU
Repo for counting stars and contributing. Press F to pay respect to glorious developers.
Chinese-BERT-wwm
Pre-Training with Whole Word Masking for Chinese BERT (Chinese BERT-wwm series models)
CLIP
Contrastive Language-Image Pretraining
colight
CoLight: Learning Network-level Cooperation for Traffic Signal Control
Commercial-98K
The Commercial-98K contains video, audio, text and emotion features of 98,071 advertisements.
CS-Xmind-Note
Mind maps and notes for the computer science postgraduate entrance exam (408) courses: Computer Organization (5th ed., Wang Aiying), Data Structures (Wangdao), Computer Networks (7th ed., Xie Xiren), and Operating Systems (4th ed., Tang Xiaodan)
eat_pyspark_in_10_days
pyspark 🍒🥭 is delicious, just eat it! 😋😋
eat_tensorflow2_in_30_days
TensorFlow 2.0 🍎🍊 is delicious, just eat it! 😋😋
EEG-Transformer-seq2seq
Modified transformer network utilizing the attention mechanism for time series or any other numerical data. 6.100 project at MIT Media Lab.
fucking-algorithm
Cracking algorithm problems is all about patterns — stick with labuladong and that's enough! English version supported! Crack LeetCode, not only how, but also why.
google-research
Google Research
incubator-tvm
Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators
LeetCode-Go
✅ Solutions to LeetCode in Go, 100% test coverage, runtime beats 100% / LeetCode solutions
lifeRestart
Start over. And this time, do it right.
MachineLearning_Python
Python implementations of machine learning algorithms
MIL-NCE_HowTo100M
PyTorch GPU distributed training code for MIL-NCE HowTo100M
pyGAT
PyTorch implementation of the Graph Attention Network model by Veličković et al. (2017, https://arxiv.org/abs/1710.10903)
PyTorch-VAE
A Collection of Variational Autoencoders (VAE) in PyTorch.
X-VLM
X-VLM: Multi-Grained Vision Language Pre-Training