
Suggested Notation for Machine Learning

This document proposes a protocol of mathematical notation for machine learning.

The field of machine learning has been evolving rapidly in recent years, and communication between different researchers and research groups has become increasingly important. A key challenge for this communication is the inconsistent notation used across papers. This proposal suggests a standard for commonly used mathematical notation in machine learning. This first version covers only part of the notation; more will be added later. The proposal will be updated regularly based on the progress of the field, and we look forward to suggestions for improving it in future versions.

Table of Contents

  • Dataset
  • Function
  • Loss function
  • Activation function
  • Two-layer neural network
  • General deep neural network
  • Complexity
  • Training
  • Fourier Frequency
  • Convolution
  • Notation table
  • Acknowledgements

Dataset

Dataset $S=\{\bm{z}_i\}_{i=1}^n=\{(\bm{x}_i,\bm{y}_i)\}_{i=1}^n$ is sampled from a distribution $\mathcal{D}$ over a domain $\mathcal{Z}$.

  • $\mathcal{X}$ is the instances domain (a set)
  • $\mathcal{Y}$ is the label domain (a set)
  • $\mathcal{Z}=\mathcal{X}\times\mathcal{Y}$ is the example domain (a set)

Usually, $\mathcal{X}$ is a subset of $\mathbb{R}^d$ and $\mathcal{Y}$ is a subset of $\mathbb{R}^{d_{\rm o}}$, where $d$ is the input dimension and $d_{\rm o}$ is the output dimension.

$n=\#S$ is the number of samples. Without specification, $S$ and $n$ are for the training set.
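As a concrete (hypothetical) illustration of this notation, a sample set $S$ with $d=2$, $d_{\rm o}=1$, and $n=100$ might be generated as follows; the uniform distribution and the target $f^*(\bm{x})=x_1+x_2$ are arbitrary choices for the sketch, not part of the proposal:

```python
import random

d, d_o, n = 2, 1, 100  # input dimension, output dimension, number of samples

random.seed(0)
# Each example z_i = (x_i, y_i) lies in Z = X x Y, with X a subset of R^d
# and Y a subset of R^{d_o}.  Here D draws x uniformly from [-1, 1]^d
# (an arbitrary choice) and labels it with a hypothetical target f*.
S = []
for _ in range(n):
    x = [random.uniform(-1.0, 1.0) for _ in range(d)]
    y = [x[0] + x[1]]  # hypothetical target f*(x) = x_1 + x_2
    S.append((x, y))
```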

Function

Hypothesis space is denoted by $\mathcal{H}$. Hypothesis function is denoted by $f\in\mathcal{H}$ or $f_{\bm{\theta}}$ with $f_{\bm{\theta}}\in\mathcal{H}$.

$\bm{\theta}$ denotes the set of parameters of $f_{\bm{\theta}}$.

If there exists a target function, it is denoted by $f$ or $f^*$ satisfying $\bm{y}_i=f^*(\bm{x}_i)$ for $i=1,\dots,n$.

Loss function

Loss function, denoted by $\ell:\mathcal{H}\times\mathcal{Z}\to\mathbb{R}_+$, measures the difference between a predicted label and a true label, e.g.,

  • $L^2$ loss: $\ell(f_{\bm{\theta}},\bm{z})=\frac{1}{2}\|f_{\bm{\theta}}(\bm{x})-\bm{y}\|^2$, where $\bm{z}=(\bm{x},\bm{y})$. $\ell(f_{\bm{\theta}},\bm{z})$ can also be written as $\ell(f_{\bm{\theta}},\bm{y})$ for convenience.

Empirical risk or training loss for a set $S=\{(\bm{x}_i,\bm{y}_i)\}_{i=1}^n$ is denoted by $L_S(\bm{\theta})$ or $L_n(\bm{\theta})$ or $R_n(\bm{\theta})$ or $R_S(\bm{\theta})$,

$$L_S(\bm{\theta})=\frac{1}{n}\sum_{i=1}^{n}\ell(f_{\bm{\theta}}(\bm{x}_i),\bm{y}_i).$$

Without ambiguity, $L_n(\bm{\theta})$ is also used for $L_S(\bm{\theta})$.

The population risk or expected loss is denoted by $L_{\mathcal{D}}(\bm{\theta})$ or $R_{\mathcal{D}}(\bm{\theta})$,

$$L_{\mathcal{D}}(\bm{\theta})=\mathbb{E}_{\mathcal{D}}\,\ell(f_{\bm{\theta}}(\bm{x}),\bm{y}),$$

where $\bm{z}=(\bm{x},\bm{y})$ follows the distribution $\mathcal{D}$.
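The loss and risk definitions can be sketched in code; the one-dimensional linear model `f` and the three-point sample set below are hypothetical, chosen only to exercise the definitions:

```python
def l2_loss(pred, y):
    """L^2 loss: 0.5 * ||f_theta(x) - y||^2."""
    return 0.5 * sum((p - t) ** 2 for p, t in zip(pred, y))

def empirical_risk(f_theta, S):
    """Empirical risk L_S(theta) = (1/n) * sum_i loss(f_theta(x_i), y_i)."""
    return sum(l2_loss(f_theta(x), y) for x, y in S) / len(S)

# Hypothetical hypothesis f_theta(x) = theta * x with theta = 2
f = lambda x: [2.0 * x[0]]
S = [([1.0], [2.0]), ([2.0], [4.0]), ([3.0], [5.0])]
# per-example losses: 0, 0, 0.5*(6-5)^2 = 0.5, so L_S = 0.5/3
print(empirical_risk(f, S))
```

The population risk $L_{\mathcal{D}}$ replaces the average over $S$ with an expectation over $\mathcal{D}$, which in practice can only be estimated from samples.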

Activation function

Activation function is denoted by $\sigma(x)$.

Example 1. Some commonly used activation functions are

  • $\sigma(x)=\mathrm{ReLU}(x)=\max(0,x)$;
  • $\sigma(x)=\mathrm{sigmoid}(x)=\dfrac{1}{1+e^{-x}}$;
  • $\sigma(x)=\tanh(x)$.
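These activation functions are straightforward to express in code (a minimal sketch using only the standard library):

```python
import math

def relu(x):
    """ReLU(x) = max(0, x)."""
    return max(0.0, x)

def sigmoid(x):
    """sigmoid(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

# tanh is provided by the standard library as math.tanh
print(relu(-3.0), sigmoid(0.0), math.tanh(0.0))  # 0.0 0.5 0.0
```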

Two-layer neural network

The neuron number of the hidden layer is denoted by $m$. The two-layer neural network is

$$f_{\bm{\theta}}(\bm{x})=\sum_{j=1}^{m}a_j\sigma(\bm{w}_j\cdot\bm{x}+b_j),$$

where $\sigma$ is the activation function, $\bm{w}_j$ is the input weight, $a_j$ is the output weight, and $b_j$ is the bias term. We denote the set of parameters by

$$\bm{\theta}=(a_1,\dots,a_m,\bm{w}_1,\dots,\bm{w}_m,b_1,\dots,b_m).$$
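In code, the two-layer network can be sketched as follows (the choice of $\tanh$ as $\sigma$ and the toy parameters are hypothetical):

```python
import math

def two_layer_nn(x, a, W, b, sigma=math.tanh):
    """f_theta(x) = sum_j a_j * sigma(w_j . x + b_j) with m hidden neurons."""
    return sum(a_j * sigma(sum(w * xi for w, xi in zip(w_j, x)) + b_j)
               for a_j, w_j, b_j in zip(a, W, b))

# m = 2 hidden neurons, d = 2 inputs (arbitrary toy parameters)
a = [1.0, -1.0]               # output weights a_j
W = [[1.0, 0.0], [0.0, 1.0]]  # input weights w_j
b = [0.0, 0.0]                # bias terms b_j
print(two_layer_nn([0.5, 0.5], a, W, b))  # tanh(0.5) - tanh(0.5) = 0.0
```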

General deep neural network

The counting of the layer number excludes the input layer. An $L$-layer neural network is denoted by

$$f_{\bm{\theta}}(\bm{x})=\bm{W}^{[L-1]}\sigma\circ\big(\bm{W}^{[L-2]}\sigma\circ\big(\cdots\big(\bm{W}^{[1]}\sigma\circ(\bm{W}^{[0]}\bm{x}+\bm{b}^{[0]})+\bm{b}^{[1]}\big)\cdots\big)+\bm{b}^{[L-2]}\big)+\bm{b}^{[L-1]},$$

where $\bm{W}^{[l]}\in\mathbb{R}^{m_{l+1}\times m_l}$, $\bm{b}^{[l]}\in\mathbb{R}^{m_{l+1}}$, $m_0=d$, $m_L=d_{\rm o}$, $\sigma$ is a scalar function and "$\circ$" means entry-wise operation. We denote the set of parameters by

$$\bm{\theta}=\big(\bm{W}^{[0]},\bm{W}^{[1]},\dots,\bm{W}^{[L-1]},\bm{b}^{[0]},\bm{b}^{[1]},\dots,\bm{b}^{[L-1]}\big).$$

This can also be defined recursively,

$$f^{[0]}_{\bm{\theta}}(\bm{x})=\bm{x},$$

$$f^{[l]}_{\bm{\theta}}(\bm{x})=\sigma\circ\big(\bm{W}^{[l-1]}f^{[l-1]}_{\bm{\theta}}(\bm{x})+\bm{b}^{[l-1]}\big),\quad 1\le l\le L-1,$$

$$f_{\bm{\theta}}(\bm{x})=f^{[L]}_{\bm{\theta}}(\bm{x})=\bm{W}^{[L-1]}f^{[L-1]}_{\bm{\theta}}(\bm{x})+\bm{b}^{[L-1]}.$$
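The recursive definition translates directly into code; this is a minimal pure-Python sketch (no activation on the last layer, and the toy parameters at the end are arbitrary):

```python
import math

def affine(W, v, b):
    """W v + b for a weight matrix W (list of rows) and bias vector b."""
    return [sum(wij * vj for wij, vj in zip(row, v)) + bi
            for row, bi in zip(W, b)]

def deep_nn(x, Ws, bs, sigma=math.tanh):
    """f^[0] = x; f^[l] = sigma∘(W^[l-1] f^[l-1] + b^[l-1]) for l < L;
    f_theta(x) = W^[L-1] f^[L-1] + b^[L-1]."""
    f = list(x)
    for W, b in zip(Ws[:-1], bs[:-1]):  # hidden layers 1..L-1
        f = [sigma(u) for u in affine(W, f, b)]
    return affine(Ws[-1], f, bs[-1])    # final affine layer

# A toy L = 2 network: m_0 = 2, m_1 = 2, m_2 = 1
Ws = [[[1.0, 0.0], [0.0, 1.0]], [[1.0, 1.0]]]
bs = [[0.0, 0.0], [0.0]]
print(deep_nn([0.0, 0.0], Ws, bs))  # [0.0]
```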

Complexity

The VC-dimension of a hypothesis class $\mathcal{H}$ is denoted as $\mathrm{VCdim}(\mathcal{H})$.

The Rademacher complexity of a hypothesis space $\mathcal{H}$ on a sample set $S$ is denoted by $R(\mathcal{H}\circ S)$ or $\mathrm{Rad}_S(\mathcal{H})$. The complexity $\mathrm{Rad}_S(\mathcal{H})$ is random because of the randomness of $S$. The expectation of the empirical Rademacher complexity over all samples of size $n$ is denoted by

$$\mathrm{Rad}_n(\mathcal{H})=\mathbb{E}_{S\sim\mathcal{D}^n}\,\mathrm{Rad}_S(\mathcal{H}).$$
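For a finite hypothesis class, the empirical Rademacher complexity $\mathbb{E}_{\bm{\sigma}}\sup_{h\in\mathcal{H}}\frac{1}{n}\sum_i\sigma_i h(\bm{z}_i)$ can be estimated by Monte Carlo over the random signs $\sigma_i$; the two-hypothesis class at the end is a hypothetical example:

```python
import random

def empirical_rademacher(H_outputs, trials=2000, seed=0):
    """Monte Carlo estimate of Rad_S(H) = E_sigma sup_h (1/n) sum_i sigma_i h(z_i),
    where H_outputs lists the output vector [h(z_1), ..., h(z_n)] of each h."""
    rng = random.Random(seed)
    n = len(H_outputs[0])
    total = 0.0
    for _ in range(trials):
        signs = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        total += max(sum(s * o for s, o in zip(signs, outs)) / n
                     for outs in H_outputs)
    return total / trials

# Hypothetical class of two constant hypotheses on n = 4 sample points;
# the exact value here is E|sum_i sigma_i| / 4 = 0.375
est = empirical_rademacher([[1, 1, 1, 1], [-1, -1, -1, -1]])
```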

Training

Gradient descent is often denoted by GD. Stochastic gradient descent is often denoted by SGD.

A batch set is denoted by $B$ and the batch size is denoted by $b$.

The learning rate is denoted by $\eta$.
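Put together, one SGD step updates $\bm{\theta}\leftarrow\bm{\theta}-\eta\nabla_{\bm{\theta}}L_B(\bm{\theta})$ over a batch set $B$ of size $b$. A minimal sketch for a scalar parameter follows; the quadratic objective and its gradient are hypothetical:

```python
import random

def sgd(grad, theta, data, eta=0.1, b=2, epochs=50, seed=0):
    """SGD: repeatedly update theta <- theta - eta * grad(theta, B)
    over batch sets B of size b drawn from the shuffled data."""
    rng = random.Random(seed)
    data = list(data)
    for _ in range(epochs):
        rng.shuffle(data)
        for i in range(0, len(data), b):
            B = data[i:i + b]  # batch set B with batch size b
            theta = theta - eta * grad(theta, B)
    return theta

# Hypothetical objective L_S(theta) = mean_x 0.5 * (theta - x)^2,
# whose gradient on a batch B is mean_{x in B} (theta - x).
grad = lambda theta, B: sum(theta - x for x in B) / len(B)
theta_hat = sgd(grad, 0.0, [1.0, 2.0, 3.0, 4.0])
# theta_hat hovers near the minimizer, the sample mean 2.5
```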

Fourier Frequency

The discretized frequency is denoted by $\bm{k}$, and the continuous frequency is denoted by $\bm{\xi}$.

Convolution

The convolution operation is denoted by $*$.
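For discrete sequences, the convolution $(f*g)[k]=\sum_j f[j]\,g[k-j]$ can be sketched as:

```python
def convolve(f, g):
    """Discrete (full) convolution: (f * g)[k] = sum_j f[j] * g[k - j]."""
    out = [0.0] * (len(f) + len(g) - 1)
    for j, fj in enumerate(f):
        for i, gi in enumerate(g):
            out[j + i] += fj * gi
    return out

print(convolve([1, 2, 3], [1, 1]))  # [1.0, 3.0, 5.0, 3.0]
```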

Notation table

| symbol | meaning | Latex | simplified |
| --- | --- | --- | --- |
| $\bm{x}$ | input | `\bm{x}` | `\mathbf{x}` |
| $\bm{y}$ | output, label | `\bm{y}` | `\vy` |
| $d$ | input dimension | `d` | |
| $d_{\rm o}$ | output dimension | `d_{\rm o}` | |
| $n$ | number of samples | `n` | |
| $\mathcal{X}$ | instances domain (a set) | `\mathcal{X}` | `\fX` |
| $\mathcal{Y}$ | labels domain (a set) | `\mathcal{Y}` | `\fY` |
| $\mathcal{Z}$ | example domain | `\mathcal{Z}` | `\fZ` |
| $\mathcal{H}$ | hypothesis space (a set) | `\mathcal{H}` | `\mathcal{H}` |
| $\bm{\theta}$ | a set of parameters | `\bm{\theta}` | `\mathbf{\theta}` |
| $f_{\bm{\theta}}$ | hypothesis function | `f_{\bm{\theta}}` | `f_{\mathbf{\theta}}` |
| $f$ or $f^*$ | target function | `f,f^*` | |
| $\ell$ | loss function | `\ell` | |
| $\mathcal{D}$ | distribution of $\mathcal{Z}$ | `\mathcal{D}` | `\fD` |
| $S$ | sample set | `S` | |
| $L_S(\bm{\theta})$, $L_n(\bm{\theta})$, $R_n(\bm{\theta})$, $R_S(\bm{\theta})$ | empirical risk or training loss | | |
| $L_{\mathcal{D}}(\bm{\theta})$ | population risk or expected loss | | |
| $\sigma$ | activation function | `\sigma` | |
| $\bm{w}_j$ | input weight | `\bm{w}_j` | `\mathbf{w}_j` |
| $a_j$ | output weight | `a_j` | |
| $b_j$ | bias term | `b_j` | |
| $f_{\bm{\theta}}(\bm{x})$ or $f(\bm{x};\bm{\theta})$ | neural network | `f_{\bm{\theta}}` | `f_{\mathbf{\theta}}` |
| $\sum_{j=1}^{m}a_j\sigma(\bm{w}_j\cdot\bm{x}+b_j)$ | two-layer neural network | | |
| $\mathrm{VCdim}(\mathcal{H})$ | VC-dimension of $\mathcal{H}$ | | |
| $R(\mathcal{H}\circ S)$, $\mathrm{Rad}_S(\mathcal{H})$ | Rademacher complexity of $\mathcal{H}$ on $S$ | | |
| $\mathrm{Rad}_n(\mathcal{H})$ | Rademacher complexity over samples of size $n$ | | |
| GD | gradient descent | | |
| SGD | stochastic gradient descent | | |
| $B$ | a batch set | `B` | |
| $b$ | batch size | `b` | |
| $\eta$ | learning rate | `\eta` | |
| $\bm{k}$ | discretized frequency | `\bm{k}` | `\mathbf{k}` |
| $\bm{\xi}$ | continuous frequency | `\bm{\xi}` | `\mathbf{\xi}` |
| $*$ | convolution operation | `*` | |

L-layer neural network

| symbol | meaning | Latex | simplified |
| --- | --- | --- | --- |
| $d$ | input dimension | `d` | |
| $d_{\rm o}$ | output dimension | `d_{\rm o}` | |
| $m_l$ | the number of $l$-th layer neurons, $m_0=d$, $m_L=d_{\rm o}$ | `m_l` | |
| $\bm{W}^{[l]}$ | the $l$-th layer weight | `\bm{W}^{[l]}` | `\mathbf{W}^{[l]}` |
| $\bm{b}^{[l]}$ | the $l$-th layer bias term | `\bm{b}^{[l]}` | `\mathbf{b}^{[l]}` |
| $\circ$ | entry-wise operation | `\circ` | |
| $\sigma$ | activation function | `\sigma` | |
| $\bm{\theta}$ | parameters | `\bm{\theta}` | `\mathbf{\theta}` |
| $f^{[l]}_{\bm{\theta}}$ | the $l$-th layer output | | |
| $f_{\bm{\theta}}(\bm{x})$ | $L$-layer NN | | |

Acknowledgements

Chenglong Bao (Tsinghua), Zhengdao Chen (NYU), Bin Dong (Peking), Weinan E (Princeton), Quanquan Gu (UCLA), Kaizhu Huang (XJTLU), Shi Jin (SJTU), Jian Li (Tsinghua), Lei Li (SJTU), Tiejun Li (Peking), Zhenguo Li (Huawei), Zhemin Li (NUDT), Shaobo Lin (XJTU), Ziqi Liu (CSRC), Zichao Long (Peking), Chao Ma (Princeton), Chao Ma (SJTU), Yuheng Ma (WHU), Dengyu Meng (XJTU), Wang Miao (Peking), Pingbing Ming (CAS), Zuoqiang Shi (Tsinghua), Jihong Wang (CSRC), Liwei Wang (Peking), Bican Xia (Peking), Zhouwang Yang (USTC), Haijun Yu (CAS), Yang Yuan (Tsinghua), Cheng Zhang (Peking), Lulu Zhang (SJTU), Jiwei Zhang (WHU), Pingwen Zhang (Peking), Xiaoqun Zhang (SJTU), Chengchao Zhao (CSRC), Zhanxing Zhu (Peking), Chuan Zhou (CAS), Xiang Zhou (cityU).
