richardRadli / Multi_Phase_Deep_Random_Neural_Network

Implementation of the paper: Rádli, R., & Czúni, L. (2023). Deep Randomized Networks for Fast Learning.

Multi Phase Deep Random Neural Network Research Repository

Welcome to the Multi Phase Deep Random Neural Network Research Repository! This repository hosts code and resources related to two articles focusing on extreme learning machines, their training, and their modifications. Each article corresponds to a separate branch of this repository; both are briefly introduced below:

1. Deep Randomized Networks for Fast Learning

Abstract:

Deep neural networks show a significant improvement over shallow ones on complex problems. Their main disadvantages are their memory requirements, the vanishing gradient problem, and the time-consuming search for the best achievable weights and other parameters. Since many applications (such as continuous learning) need fast training, one possible solution is the use of sub-networks that can be trained very quickly. Randomized single-layer networks have become very popular due to their fast optimization, while their extensions to more complex structures can increase prediction accuracy. In our paper we present a new approach to building deep neural models for classification tasks with an iterative, pseudo-inverse optimization technique. We compare its performance with a state-of-the-art backpropagation method and the best-known randomized approach, the hierarchical extreme learning machine (H-ELM). Computation time and prediction accuracy are evaluated on 12 benchmark datasets, showing that our approach is competitive in many cases.

Branch: lion17
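
Since the implementation itself lives in the branches, the following is a minimal, self-contained sketch of the general principle the abstract describes: hidden layers are drawn at random and only the output weights are computed in closed form with the Moore-Penrose pseudo-inverse. The layer sizes, the tanh activation, the function names, and the toy data are illustrative assumptions, not the paper's exact multi-phase procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_random_network(X, Y, phases=3, n_hidden=256):
    """Stack `phases` random hidden layers and solve only the output
    weights in closed form (ELM-style); all hidden weights stay fixed.
    Illustrative sketch only, not the paper's algorithm."""
    layers = []
    H = X
    for _ in range(phases):
        W = rng.standard_normal((H.shape[1], n_hidden))  # fixed random weights
        b = rng.standard_normal(n_hidden)                # fixed random biases
        H = np.tanh(H @ W + b)                           # hidden representation
        layers.append((W, b))
    beta = np.linalg.pinv(H) @ Y                         # pseudo-inverse solution
    return layers, beta

def predict(X, layers, beta):
    H = X
    for W, b in layers:
        H = np.tanh(H @ W + b)
    return H @ beta

# Toy usage: random 3-class data standing in for one of the benchmark datasets.
X = rng.standard_normal((200, 20))
Y = np.eye(3)[rng.integers(0, 3, 200)]                   # one-hot targets
layers, beta = train_random_network(X, Y)
accuracy = (predict(X, layers, beta).argmax(1) == Y.argmax(1)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The appeal of this family of methods, as the abstract notes, is that each training phase reduces to a single linear-algebra call instead of many epochs of gradient descent, which is what makes the training fast.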

2.

Abstract:

Branch: journal
