qianrenjian / papernotclear

蜻蜓点论文 Think不Clear: paper-explanation videos uploaded to Bilibili, YouTube, and Xigua Video (synced to Douyin)

Deep Learning Papers

蜻蜓点论文 (paperskim), deep learning papers: the table of contents of all my videos.

  • For method papers, I do not guarantee that my understanding of the method is correct or precise
  • For architecture and method papers, I do not dig into the underlying mathematics
  • For experiment sections, or experiment-driven papers, I mostly just translate them

If you have questions, feel free to ask; if you have doubts, feel free to use the button at the upper right (on mobile it seems to be at the lower left and upper left?)

Bilibili: Think不Clear / YouTube: PaperThinkNotClear

Xigua Video: because of its title length limit, I changed most titles to Chinese names, so even I cannot search for a given video there.

Slides on OneDrive are not kept in sync, since they would need to be stored on both Baidu Pan and OneDrive at the same time; in the end I just stopped bothering.

Baidu Pan (I made a mistake with this one)

Link: https://pan.baidu.com/s/1e3lh08SE6mKg3E7loktQUA Extraction code: e0gn

Also available. Link: https://pan.baidu.com/s/1fTQnIGhQ3hcvjlDrM4NNFA Extraction code: ks3c

Paper Title Bilibili Youtube Arxiv Blog No.
StyleGAN2: Analyzing and Improving the Image Quality of StyleGAN 236
DatasetGAN: Efficient Labeled Data Factory with Minimal Human Effort 235
StyleGAN: A Style-Based Generator Architecture for GAN 234
AdaIN:Arbitrary Style Transfer in Real-time with Adaptive Instance Norm 233
A learned representation for artistic style 232
Cold Diffusion Inverting Arbitrary Image Transforms Without Noise 231
EnergyMatch: Energy-based Pseudo-Labeling for Semi-Supervised 230
Knowledge distillation A good teacher is patient and consistent 229
Understanding Attention for Vision-and-Language Tasks 228
Efficient Training of Visual Transformers with Small Datasets 227
Convolutional Knowledge Tracing Individualization in Learning Process 226
Educational Question Mining At Scale Prediction, Analysis and Personal 225
EDDI Dynamic Discovery of High-Value Information with Partial VAE 224
Diffusion Models Beat GANs on Image Synthesis 223
Accelerating Diffusion Models via Early Stop of the Diffusion Process 222
Improved Denoising Diffusion Probabilistic Models 221
UserBERT contrastive and self-supervised 220
An Empirical Study of Train End-to-End Vision-and-Language Transformer 219
Multimodal pre-training: VLMo and VL-BEIT 218
Unsupervised Vision-and-Language Pre-train wo Parallel Images captions 217
ViLT Vision-and-Language Transformer Without Convolution or Region 216
Parameter-Efficient Transfer Learning for NLP 215
Sharpness-Aware Training for Free 214
When Vision Transformers Outperform ResNets wo Pre-training Strong DA 213
SLViT: Vision Transformer for Small-Size Datasets 212
CCT: Escaping the Big Data Paradigm with Compact Transformers 211
Self-supervised Graph Learning for Recommendation 210
LightGCN Simplifying Graph Convolution Network for Recommendation 209
MiniRocket: Very Fast Deterministic Time Series Classification 208
Simplifying Graph Convolutional Networks 207
Deep Generative prior for Image Restoration and Manipulation 206
GCN: Semi-Supervised Classification with Graph Convolutional Networks 205
Improved Trainable Calibration Method on Medical Imaging Classification 204
Graph Attention Networks 203
Anomaly Transformer: Time Series Anomaly Detection 202
[RandAugment and Cutout](https://www.bilibili.com/video/BV1V5411d7D3 "201 RandAugment and Cutout") 201
Crafting Better Contrastive Views for Siamese Representation Learning 200
Improved Contrastive Divergence Training of Energy-Based Model 199
Efficient Sharpness-aware Minimization for Improved Training of Neural Networks 198
The Effects of Regularization and Data Augmentation are Class Dependent 197
Tradeoffs in Data Augmentation An Empirical Study 196
Visual Prompting: Modifying Pixel Space to Adapt Pre-trained Models 195
SPICE Semantic Pseudo-Labeling for Image Clustering 194
Sharpness-Aware Minimization for Efficiently Improving Generalization 193
DeCLUTR Contrastive Learning for Unsupervised Textual Representation 192
Revisiting the Transferability of Supervised Pretraining via an MLP 191
CoMatch: Semi-supervised Learning with Contrastive Graph Regularization 190
Event Extraction by Answering (Almost) Natural Questions 189
You Never Cluster Alone 1.1 188
Complement Objective Training 187
Well-classified Examples are Underestimated in Classification with DNN 186
When Does Label Smoothing Help 185
SEED: Self-supervised Distillation For Visual Representation 184
MixText: Hidden Space MixUp for Semi-Supervised Text Classification 183
Nearest Neighbor Matching for Deep Clustering 182
Conditional Self-Supervised Learning for Few-Shot Classification 181
Active Learning at the ImageNet Scale 180
Towards Understand Generative Capability Adversarial Robust Classifier 179
FlexMatch Semi-Supervised Learning with Curriculum Pseudo Labeling 178
Learning Energy-Based Models by Diffusion Recovery Likelihood 177
Self-Knowledge Distillation with Progressive Refinement of Targets 176
AEDA: An Easier Data Augmentation Technique for Text Classification 175
VAEBM: Variational Autoencoders and Energy-based Models 174
On Separability of Self-Supervised Representations 173
Revisiting Knowledge Distillation via Label Smoothing Regularization 172
Regularizing Class-wise Predictions via Self-knowledge Distillation 171
Be Your Own Teacher: Improve CNN via Self Distillation 170
Some of the entries below have been deleted at some point
Bayesian Deep Learning and a Probabilistic Perspective of Generalization 169
Rethink Image Mixture for Unsupervised Visual Representation Learning 168
FixMatch: Semi-Supervised Learning with Consistency and Confidence 167
ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring 166
MiCE: Mixture of Contrastive Experts for Unsupervised Image Clustering 165
Learn Representation via Information Maximizing Self-Augmented Training 164
Supporting Clustering with Contrastive Learning 163
Unsupervised Multi-hop Question Answering by Question Generation 162
Perceiver: General Perception with Iterative Attention 161
Joint EBM Training for Better Calibrated NLU Models 160
A Unified Energy-Based Framework for Unsupervised Learning 159
Energy-Based Models for Deep Probabilistic Regression 158
Contrastive Learning Inverts the Data Generating Process 157
Asymmetric Loss For Multi-Label Classification 156
Computation-Efficient Knowledge Distillation by Uncertainty-Aware Mixup 155
Knowledge Distillation Meets Self-Supervision 154
Feature Projection for Improved Text Classification 153
Improve Joint Train of Inference Net and Structure Predict EnergyNet 152
BertGCN: Transductive Text Classification by Combining GCN and Bert 151
The Authors Matter Understand Mitigate Implicit Bias in text classification 150
Learning Approximate Inference Networks for Structured Prediction 149
End-to-End Learning for Structured Prediction Energy Networks 148
Revisiting Unsupervised Relation Extraction 147
Sentence Meta-Embeddings for Unsupervised Semantic Textual Similarity 146
X-Class: Text Classification with Extremely Weak Supervision 145
Paint by Word 144
Shape-Texture Debiased Neural Network Training 143
Contrastive Learning through Alignment and Uniformity on the Hypersphere 142
Deep INFOMAX representation mutual information estimation maximization 141
SimCSE: Simple Contrastive Learning of Sentence Embeddings 140
IMOJIE Iterative Memory-Based Joint Open Information Extraction 139
Trash is Treasure Resisting Adversarial Examples by Adversarial Examples 138
Enhancing Adversarial Defense by k-Winners-Take-All 137
On Adaptive Attacks to Adversarial Example Defenses 136
Knowledge distillation via softmax regression representation learning 135
Revisiting Locally Supervised Learning Alternative to End-to-end Training 134
Putting An End to End-to-End Gradient-Isolated Learning of Representations 133
Defense with Variational Autoencoders 132
Triple Wins Accuracy Robustness Efficiency by Input-adaptive Inference 131
Using latent space regression to analyze leverage compositionality in GANs 130
Theoretically Principled Trade-off between Robustness and Accuracy (theory part not covered) 129
Representation learning with contrastive predictive coding 128
Learning Representations for Time Series Clustering 127
Stochastic Security: Adversarial Defense Using Long-Run Dynamics of EBM 126
Improving Adversarial Robustness via Channel-wise Activation Suppressing 125
Likelihood Landscapes: A Unifying Principle Behind Adversarial Defenses 124
Barlow Twins: Self-Supervised Learning via Redundancy Reduction 123
Geometry-Aware Instance-Reweighted Adversarial Training 122
A Closer Look at Accuracy vs Robustness 121
Unsupervised Clustering of Seismic Signals Using Autoencoders 120
Towards the first adversarially robust neural network model on MNIST 119
PGD adversarial training: Towards Deep Learning Models Resistant to Adversarial Attacks 118
Denoising Diffusion Probabilistic Models 117
Deep Unsupervised Learning using Nonequilibrium Thermodynamics 116
Variational Inference with Normalizing Flows 115
CutMix Regularization Strategy with Localizable Features 114
Clustering-friendly Representation Learning Feature Decorrelate 113
Energy-based Out-of-distribution Detection 112
High-Performance Large-Scale Image Recognition Without Normalization 111
Characterizing signal propagation in unnormalized ResNets 110
Concept Learners for Few-Shot Learning 109
Image Generation by Minimize Frechet Distance in Discriminator feature space 108
Learning Non-Convergent Non-Persistent Short-Run MCMC to EBM 107
Concept Whitening for Interpretable Image Recognition 106
Loss Landscape Sightseeing with Multi-Point Optimization 105
Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs 104
Essentially No Barriers in Neural Network Energy Landscape 103
Visualizing the Loss Landscape of Neural Nets 102
Self-training for Few-shot Transfer Across Extreme Task Differences 101
Darts: Differentiable architecture search 100
Architecture Search Space in Neural Architecture Search(NAS) 99
Free Lunch for Few-shot Learning: Distribution Calibration 98
Online Deep Clustering for Unsupervised Representation Learning 97
Coarse-to-Fine Pre-training for Named Entity Recognition 96
Unsupervised Domain Adaptation with Variational Information Bottleneck 95
A Unified MRC Framework for Named Entity Recognition 94
Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness 93
Super-Convergence: Very Fast Training of NN use large LR 92
Contrastive Clustering 91
Graph Contrastive Learning with Augmentations NIPS 2020 / Graph Contrastive Learning with Adaptive Augmentation WWW 2021 GCC: Graph Contrastive Coding for Graph Neural Network ... KDD 2021 GNN and Contrastive Learning 90
Contrastive Representation Distillation 89
Spectral Norm Regularization for Improving the Generalizability of NN 88
UDA: Unsupervised Data Augmentation for Consistency Training 87
Simplify the Usage of Lexicon in Chinese NER 86
Chinese NER Using Lattice LSTM 85
Uncertainty-aware Self-training for Few-shot Text Classification 84
Concept Learning with Energy-Based Models 83
Training data-efficient image transformers distillation through attention 82
Adversarial Training Methods for Semi-Supervised Text Classification 81
Delta-training Semi-Supervised Text Classification with word embedding 80
On the Anatomy of MCMC-Based Maximum Likelihood Learning of EBMs 79
Unsupervised Deep Embedding for Clustering Analysis 78
Relation of Relation Learning Network for Sentence Semantic Matching 77
Contextual Parameter Generation for Universal Neural Machine Translation 76
Exploring Simple Siamese Representation Learning 75
Contextual Parameter Generation for Knowledge Graph Link Prediction 74
Robustness May Be at Odds with Accuracy 73
Learning with Multiplicative Perturbations 72
When Do Curricula Work? 71
Self-Supervised Contrastive Learning with Adversarial Examples 70
Supervised Contrastive Learning 69
A Note on the Inception Score and FID 68
Hierarchical Semantic Aggregation for Contrastive Representation Learning 67
Syntactic and Semantic-driven Learning for Open Information Extraction 66
Text Classification with Negative Supervision 65
CESI Canonicalizing Open Knowledge Bases by Embeddings and Side Information 64
CaRe: Open Knowledge Graph Embedding 63
No MCMC for me, Amortized sampling for fast and stable training of EBMs 62
Knowledge Graph Embedding Based Question Answering 61
VAT Virtual Adversarial Training for regularization semi-supervised learn 60
CNN-Generated Images Are Surprisingly Easy to Spot... For Now 59
Graph Agreement Models for Semi-Supervised Learning 58
Be More with Less: Hypergraph Attention Networks for Inductive Text Classification 57
Text Level Graph Neural Network for Text Classification 56
Graph Convolutional Networks for Text Classification 55
Learning sparse neural networks through L0 regularization 54
BigGAN: Large Scale GAN Training for High Fidelity Natural Image Synthesis 53
On the steerability of generative adversarial networks 52
What Makes for Good Views for Contrastive Learning 51
Viewmaker Networks Learning Views for Unsupervised Representation Learning 50
Auto-Encoding Variational Bayes 49
Adversarial Examples Improve Image Recognition 48
Stochastic Weight Averaging for Generalization 47
There Are Many Consistent Explanations Of Unlabeled Data 46
Interpretable Convolutional Neural Networks 45
Understanding Black-box Predictions via Influence Functions 44
Adversarial Examples Are Not Bugs, They Are Features 43
You Only Propagate Once Accelerating AT via Maximal Principle 42
Text Classification Using Label Names Only A LM self-training way 41
A collection of 10 papers on alternatives to softmax and cross-entropy loss 40
Cyclical Stochastic Gradient MCMC and snapshot ensemble 39
Unsupervised Feature Learning via Non-Parametric Instance Discrimination 38
An Image is Worth 16x16 Words Transformers for Image Recognition at Scale 37
Training independent subnetworks for robust prediction 36
Active Learning for CNNs: A Core-Set Approach 35
SimCLR: A Simple Framework for Contrastive Learning of Visual Representations 34
UNITER: UNiversal Image-TExt Representation Learning 33
Image Synthesis with a Single (Robust) Classifier 32
Set Transformer A Framework for Attention-based Permutation-Invariant NN 31
Consistency Regularization in Semi-Supervised Learning 30
Did the model understand the question 29
Rethinking Feature Distribution for Loss Functions in Image Classification 28
Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning 27
Hybrid Discriminative-Generative Training via Contrastive Learning(EBMs) 26
A Multimodal Translation-Based Approach for Knowledge Graph Representation (ACL 2018) 25
Deep Bayesian Active Learning with Image Data (ICML 2017) and The power of ensembles for active learning in image classification (CVPR 2018) 24
SCAN: Learning to Classify Images without Labels (ECCV 2020) 23
Unsupervised Question Answering by Cloze Translation (ACL 2019) 22
Phrase-Based & Neural Unsupervised Machine Translation (EMNLP 2018) 21
MixUp as Locally Linear Out-Of-Manifold Regularization (AAAI 2019) 20
Manifold Mixup: Better Representations by Interpolating Hidden States ICML2019 19
Bag of Tricks for Image Classification with CNN 18
On Mixup Training Improved Calibration for DNN 17
BERT: Pre-training of Deep Bidirectional Transformers 16
Rationalizing Neural Predictions (EMNLP2016) 15
Attention is all you need, Transformer (NIPS 2017) 14
Learn To Pay Attention (ICLR 2018) 13
A Self-Training Method for MRC with Soft Evidence Extraction(ACL 2019) 12
Deep Fool (CVPR 2016) and Deep Defense (NIPS 2018) 11
R-Trans: RNN Transformer Network for Chinese Machine Reading Comprehension (IEEE Access) 10
Brief introductions to a series of energy-based model (EBM) paper abstracts 9
Implicit Generation and Modeling with EBM(NIPS 2019) 8
MixMatch A Holistic Approach to Semi-supervised Learning(NIPS 2019) 7
Obfuscated Gradients Give a False Sense of Security (ICML 2018 best paper award) 6
Explaining and Harnessing Adversarial Examples(ICLR 2015) 5
ImageNet-Trained CNNs Are Biased Towards Texture (ICLR 2019) 4
Momentum Contrast for Unsupervised Visual Representation Learning(CVPR2020) 3
Mixup: Beyond Empirical Risk Minimization(ICLR2018) 2
Your Classifier is Secretly an Energy Based Model (ICLR 2020) 1 (2020-08-19)

Code

Video Title Bilibili Youtube GitHub No.
[Code walkthrough] FlexMatch Semi-Supervised Learning with Curriculum Pseudo Labeling 3
FlexMatch Semi-Supervised Learning with Curriculum Pseudo Labeling 2
[Code walkthrough] SCAN: Learning to Classify Images without Labels 1

Paper List

Interpretability

  • Interpretable Convolutional Neural Networks
  • Understanding Black-box Predictions via Influence Functions

Semi-Supervised Learning

  • Regularization With Stochastic Transformations and Perturbations NIPS 2016
  • Temporal Ensembling ICLR 2017
  • Virtual Adversarial Training ICLR 2016
  • Mean teachers are better role models: weight-averaged consistency targets NIPS 2017
  • Realistic Evaluation of Deep Semi-SL NIPS 2018
  • Deep co-training, ECCV 2018
  • There Are Many Consistent Explanations Of Unlabeled Data why you should average ICLR 2019
  • MixMatch A Holistic Approach to Semi-supervised Learning (NIPS 2019)

Softmax / Cross-Entropy Variants

Active Learning

  • Active Learning for CNNs: A Core-Set Approach
  • Deep Bayesian Active Learning with Image Data (ICML 2017)
  • The power of ensembles for active learning in image classification (CVPR 2018)

Multimodal

  • UNITER: UNiversal Image-TExt Representation Learning
  • A Multimodal Translation-Based Approach for Knowledge Graph Representation (ACL 2018)

Transformer

  • Attention is All you need (NIPS 2017)
  • BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

Robustness

  • Explaining and Harnessing Adversarial Examples (ICLR 2015)
  • Deep Fool (CVPR2016)
  • Deep Defense (NIPS 2018)
  • Obfuscated Gradients Give a False Sense of Security (ICML 2018 best paper award)
  • Adversarial Examples Are Not Bugs, They Are Features
  • Image Synthesis with a Single (Robust) Classifier
  • Adversarial Examples Improve Image Recognition
  • You Only Propagate Once Accelerating AT via Maximal Principle

QA/MRC (Machine Reading Comprehension and Question Answering)

  • R-Trans: RNN Transformer Network for Chinese Machine Reading Comprehension (IEEE Access)
  • A Self-Training Method for MRC with Soft Evidence Extraction (ACL 2019)
  • Did the model understand the question
  • Rationalizing Neural Predictions (EMNLP2016)

Graph Neural Networks

  • Graph Agreement Models for Semi-Supervised Learning
  • Be More with Less: Hypergraph Attention Networks for Inductive Text Classification
  • Text Level Graph Neural Network for Text Classification
  • Graph Convolutional Networks for Text Classification

Generative Models

GAN (Generative Adversarial Networks)

  • BigGAN: Large Scale GAN Training for High Fidelity Natural Image Synthesis
  • On the steerability of generative adversarial networks

Autoencoders

  • Auto-Encoding Variational Bayes

Analysis

  • CNN-Generated Images Are Surprisingly Easy to Spot... For Now

Neural Network Sparsification

  • Learning sparse neural networks through L0 regularization

Text Classification

  • Be More with Less: Hypergraph Attention Networks for Inductive Text Classification
  • Text Level Graph Neural Network for Text Classification
  • Graph Convolutional Networks for Text Classification

Unsupervised Learning

  • Unsupervised Question Answering by Cloze Translation (ACL 2019)
  • Phrase-Based & Neural Unsupervised Machine Translation (EMNLP 2018)
  • SCAN: Learning to Classify Images without Labels (ECCV 2020)
  • Text Classification Using Label Names Only A LM self-training way

Contrastive Learning

  • Unsupervised Feature Learning via Non-Parametric Instance Discrimination
  • Momentum Contrast for Unsupervised Visual Representation Learning (CVPR2020)
  • SimCLR A Simple Framework for Contrastive Learning of Visual Representation
  • Bootstrap your own latent: A new way to self supervised learning
  • Hybrid Discriminative-Generative Training via Contrastive Learning(EBMs)
  • What Makes for Good Views for Contrastive Learning
  • Viewmaker Networks Learning Views for Unsupervised Representation Learning

EBM (Energy-Based Models)

  • Implicit Generation and Modeling with EBM (NIPS 2019)
  • Your Classifier is Secretly an Energy Based Model (ICLR 2020)
  • Hybrid Discriminative-Generative Training via Contrastive Learning(EBMs)

Tricks

  • Bag of Tricks for Image Classification with CNN

MixUp

  • Mixup: Beyond Empirical Risk Minimization (ICLR2018)
  • MixUp as Locally Linear Out-Of-Manifold Regularization (AAAI 2019)
  • Manifold Mixup: Better Representations by Interpolating Hidden States ICML2019
  • On Mixup Training Improved Calibration

Ensemble Learning

  • Cyclical Stochastic Gradient MCMC
  • snapshot ensemble
  • Training independent subnetworks for robust prediction
  • Averaging Weights Leads to Wider Optima and Better Generalization Arxiv

Other

  • Set Transformer: A Framework for Attention-based Permutation-Invariant NN
  • ImageNet-Trained CNNs Are Biased Towards Texture (ICLR 2019)

Data Augmentation
