Tutorial: Brain-Inspired Spiking Neural Network Architecture for Deep, Incremental Learning and Knowledge Representation
Nikola Kasabov
CIS
IEEE Members: Free
Non-members: Free
Length: 01:20:52
This 2-hour tutorial demonstrates that the third generation of artificial neural networks, spiking neural networks (SNN), are not only capable of deep, incremental learning of temporal or spatio-temporal data, but also enable the extraction of knowledge representations from the learned data and the tracing of how that knowledge evolves over time as new data arrive. Similar to how the brain learns, these SNN models need not be restricted to a fixed number of layers or a fixed number of neurons per layer, as they adopt the self-organising learning principles of the brain.

The tutorial consists of two parts: 1. Algorithms for deep, incremental learning in SNN. 2. Algorithms for knowledge representation and for tracing the evolution of knowledge in SNN over time from incoming data, including the representation of fuzzy spatio-temporal rules extracted from SNN.

The material is illustrated on an exemplar SNN architecture, NeuCube (free, open-source software, with a cloud-based version, available from www.kedri.aut.ac.nz/neucube). Case studies of brain and environmental data modelling and knowledge representation using incremental and transfer learning algorithms are presented. These include: predictive modelling of EEG and fMRI data measuring cognitive processes and response to treatment; prediction of Alzheimer's disease (AD); understanding depression; and predicting environmental hazards and extreme events.
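To make the two building blocks the tutorial refers to more concrete, the sketch below shows a generic leaky integrate-and-fire (LIF) spiking neuron and a pair-based spike-timing-dependent plasticity (STDP) weight update, the kind of incremental, unsupervised learning rule typical of SNN. This is a minimal illustration only, not the NeuCube implementation; the function names and all parameter values (tau, v_thresh, a_plus, a_minus) are assumptions chosen for readability.

import numpy as np

def lif_neuron(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron; return spike times (step indices)."""
    v = 0.0
    spikes = []
    for t, i_t in enumerate(input_current):
        v += dt * (-v / tau + i_t)      # leaky integration of the input current
        if v >= v_thresh:               # threshold crossing emits a spike
            spikes.append(t)
            v = v_reset                 # reset the membrane potential after the spike
    return spikes

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau_stdp=20.0):
    """Pair-based STDP: potentiate if pre fires before post, otherwise depress."""
    dt = t_post - t_pre
    if dt >= 0:
        w += a_plus * np.exp(-dt / tau_stdp)    # causal pairing -> strengthen synapse
    else:
        w -= a_minus * np.exp(dt / tau_stdp)    # anti-causal pairing -> weaken synapse
    return float(np.clip(w, 0.0, 1.0))          # keep the weight bounded

# Example: drive the neuron with a noisy step input and adapt one synapse incrementally.
rng = np.random.default_rng(0)
current = 0.08 + 0.02 * rng.standard_normal(200)
post_spikes = lif_neuron(current)
w = 0.5
for t_post in post_spikes:
    # Hypothetical presynaptic spike assumed 3 time steps before each output spike.
    w = stdp_update(w, t_pre=t_post - 3, t_post=t_post)
print(post_spikes[:5], round(w, 3))

Because the weight changes are applied one spike pair at a time, learning of this kind is inherently incremental: new spatio-temporal data can keep adapting the connections without retraining from scratch, which is the property the tutorial builds on.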