Generalized Operational Neural Networks - Part 2
Dat Thanh Tran, Serkan Kiranyaz, Alexandros Iosifidis, Moncef Gabbouj
SPS
IEEE Members: $11.00
Non-members: $15.00
Length: 51:52
Recent advances in machine learning have led to remarkable solutions in many research problems, notably in the fields of computer vision, natural language processing, and games. This is due to new parallel computing capabilities for scientific computing (Graphics Processing Units (GPUs) and distributed computing), the availability of enormous sets of annotated data, and methodological contributions in a family of models called artificial neural networks, commonly referred to as Deep Learning. Such state-of-the-art models are typically formed by an enormous number (on the order of hundreds of thousands, or even millions) of parameters, which are jointly tuned in an end-to-end optimization process to fit the training data. However, most research and practical implementations of these powerful learning models exploit a simplistic model of the artificial neuron, while the topologies used for different tasks are manually optimized through laborious experimentation requiring expert knowledge (either human experts or expert systems), leading to extremely expensive computational optimization processes.
Very recently, the Generalized Operational Perceptron (GOP) was proposed, leading to a generalized family of artificial neural networks. GOPs are feedforward neural network architectures that exploit more sophisticated types of artificial neurons to efficiently capture the different types of data transformations and nonlinearities appearing in different problems. A number of methodologies have also been proposed for automatically determining the network's topology, as well as the type of the individual neurons forming each layer of the network. Extensions using skip (or memory) connections and weighted optimization schemes for handling imbalanced classes have also been proposed very recently. Convergence analysis of GOPs and empirical evidence on a wide range of problems indicate that GOPs can achieve competitive (or higher) performance compared to conventional feedforward neural networks, while being more compact and efficient. Finally, the most recent ANN model, Operational Neural Networks (ONNs), derived from GOPs, has achieved remarkable image processing capabilities, such as image denoising, synthesis, and transformation, that cannot otherwise be achieved by conventional Convolutional Neural Networks (CNNs) without a “deep” architecture.
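To make the neuron model concrete: the sketch below is a minimal NumPy illustration of the GOP idea, in which the multiply-accumulate of a classical perceptron is generalized to a configurable nodal operator, pooling operator, and activation. The specific operator choices (sine nodal, median pool, tanh activation) and all function names here are illustrative assumptions, not the exact operator library used in the GOP papers.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation) of a GOP neuron.
# A classical perceptron is the special case: nodal = multiplication,
# pool = summation, activation = tanh/sigmoid. Operator sets below are
# example choices only.

NODAL = {
    "mult": lambda x, w: w * x,           # classical linear neuron
    "sine": lambda x, w: np.sin(w * x),   # example nonlinear nodal operator
}
POOL = {
    "sum": lambda z: z.sum(axis=-1),      # classical summation
    "median": lambda z: np.median(z, axis=-1),
}
ACT = {
    "tanh": np.tanh,
    "relu": lambda a: np.maximum(a, 0.0),
}

def gop_neuron(x, w, b, nodal="sine", pool="median", act="tanh"):
    """One GOP neuron: y = act( pool( nodal(x_i, w_i) ) + b )."""
    z = NODAL[nodal](x, w)   # element-wise nodal transformation
    a = POOL[pool](z) + b    # pooling generalizes plain summation
    return ACT[act](a)

# Usage: a 4-input neuron.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
w = rng.normal(size=4)
print(gop_neuron(x, w, b=0.1))                              # GOP neuron
print(gop_neuron(x, w, b=0.1, nodal="mult", pool="sum"))    # classical perceptron
```

With nodal="mult" and pool="sum", the neuron reduces to an ordinary perceptron; ONNs apply the same generalization inside convolutional layers, replacing the multiply-and-sum of a CNN kernel with nodal and pool operators.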