Individual implementations of various machine-learning algorithms, using no external libraries except NumPy.

You are free to use this code without any restrictions.

The API design follows Keras and TensorFlow. Networks can only be constructed with the Functional API:
```python
nn_input = Input(shape=(x_train.shape[1],))
x = Dense(1024, activation='relu')(nn_input)
x = BatchNormalization()(x)
x = Dense(512, activation='relu')(x)
x = BatchNormalization()(x)
nn_output = Dense(10, activation='softmax')(x)
```
- Activation
Abstract class, which is the base class of all activation classes
- Including Identity, Softmax, Sigmoid, tanh, ReLU, LeakyReLU
- Initializer
Abstract class, which is the base class of all initializer classes
- Including Zeros, Ones, RandomUniform, RandomNormal
- Layer
Abstract class, which is the base class of all layers
- Input
The input of the neural network
- Dense
A fully connected layer
- BatchNormalization
- Dropout (Not implemented)
- LossFunction
Abstract class, which is the base class of all loss functions
- CategoricalCrossentropy
- SparseCategoricalCrossentropy (Not implemented)
- MeanSquaredError (Not implemented)
- Metric
Abstract class, which is the base class of all metrics
- Accuracy
- Optimizer
Abstract class, which is the base class of all optimizers
- SGD
Stochastic Gradient Descent; momentum and Nesterov acceleration are not supported yet
- Model
Workflow: compile -> fit -> score / predict (not implemented yet)
The `batch_size` parameter is largely meaningless for CPU training, but it is kept just for fun :D
```python
model = Model(inputs=nn_input, outputs=nn_output)
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics='acc')
model.fit(x_train, y_train, validation_data=(x_valid, y_valid), batch_size=128)
```
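For reference, the update that plain SGD performs during `fit` (no momentum or Nesterov, as noted above) can be sketched in NumPy; `sgd_step` and its parameter names are illustrative, not this library's actual internals:

```python
import numpy as np

def sgd_step(params, grads, lr=0.01):
    # Vanilla SGD: theta <- theta - lr * dL/dtheta
    # (no momentum or Nesterov acceleration)
    return [p - lr * g for p, g in zip(params, grads)]

# Toy check: minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3)
w = np.array([0.0])
for _ in range(200):
    grad = 2 * (w - 3)
    (w,) = sgd_step([w], [grad], lr=0.1)
# w converges toward 3
```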
- to_one_hot function
Converts an integer-label numpy.ndarray to one-hot form
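A minimal NumPy sketch of such a helper (the signature and `num_classes` parameter here are assumptions; check the actual function for details):

```python
import numpy as np

def to_one_hot(labels, num_classes=None):
    # Convert an array of integer class labels to a one-hot matrix.
    labels = np.asarray(labels).ravel()
    if num_classes is None:
        # Infer the class count from the largest label seen
        num_classes = labels.max() + 1
    one_hot = np.zeros((labels.size, num_classes))
    # Set a single 1 per row via advanced indexing
    one_hot[np.arange(labels.size), labels] = 1.0
    return one_hot

# e.g. to_one_hot([0, 2, 1], 3)
# → [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```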