IEEE Computational Intelligence Society (CIS)
IEEE Members: Free
Non-members: Free
Length: 01:19:10
Paul Werbos
ABSTRACT: The neural network field has experienced three massive revolutions, starting in 1987, when IEEE held the first International Conference on Neural Networks (ICNN), leading NSF to create the Neuroengineering research program which I ran and expanded from 1988 to 2014. This first period of growth already saw a huge proliferation of important new applications in engineering, such as vehicle control, manufacturing, and partnerships with biology; see Werbos, “Computational intelligence from AI to BI to NI,” in Independent Component Analyses, Compressive Sampling, Large Data Analyses (LDA), Neural Networks, Biosystems, and Nanoengineering XIII, vol. 9496, pp. 149-157, SPIE, 2015.
IEEE conferences were the primary intellectual center of the new technology, relying most on generalized backpropagation (including backpropagation over time and backpropagation for deep learning) and on a ladder of neural network control designs, rising up to “reinforcement learning” (aka adaptive critics, or adaptive dynamic programming).
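As a rough illustration of the “adaptive critic” idea at the top of that ladder (not taken from the talk), the Python sketch below trains a critic J(x) so that its value estimates become consistent with the Bellman recursion J(x) ≈ U(x, u) + γ·J(x_next) on a toy linear plant. The plant, the quadratic utility, the feature choice, and all function names (features, critic, utility) are illustrative assumptions, not the specific designs Werbos describes.

# Minimal adaptive-critic (heuristic dynamic programming style) sketch.
# Everything here is a toy assumption chosen only to show the idea:
# a critic is nudged toward the Bellman target while a greedy controller
# acts against the current critic.

import numpy as np

rng = np.random.default_rng(0)

# Toy linear plant: x_next = a * x + b * u, scalar state and control.
a, b = 0.9, 0.5
gamma = 0.95                      # discount factor
w = np.zeros(2)                   # critic weights over features [x, x**2]
lr = 0.05                         # critic learning rate


def features(x):
    """Simple polynomial features of the scalar state."""
    return np.array([x, x * x])


def critic(x):
    """Approximate cost-to-go J(x) as a linear function of the features."""
    return w @ features(x)


def utility(x, u):
    """One-step cost U(x, u); quadratic regulation cost as an example."""
    return x * x + 0.1 * u * u


x = 1.0
for step in range(2000):
    # Greedy control over a small candidate set (stand-in for an actor network).
    candidates = np.linspace(-1.0, 1.0, 21)
    u = min(candidates, key=lambda c: utility(x, c) + gamma * critic(a * x + b * c))

    x_next = a * x + b * u
    target = utility(x, u) + gamma * critic(x_next)   # Bellman target
    td_error = target - critic(x)
    w += lr * td_error * features(x)                  # critic update

    # Restart the episode once the state has been regulated near zero.
    x = x_next if abs(x_next) > 1e-3 else rng.uniform(-1.0, 1.0)

print("learned critic weights:", w)

In the fuller designs the talk refers to, both the critic and the controller would be neural networks trained with generalized backpropagation rather than the hand-built features and grid search used in this sketch.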
The second great revolution resulted from a paradigm-shifting research program, COPN, which emerged from deep dialogue and voting across research program directors at NSF: National Science Foundation (2007), Emerging Frontiers in Research and Innovation 2008 (https://www.nsf.gov/pubs/2007/nsf07579/nsf07579.htm). In that program, I funded Andrew Ng and Yann LeCun to test neural networks on crucial benchmark challenges in AI and computer science. After they demonstrated to Google that neural networks could outperform classical methods in AI, Google announced a new product which set off a massive wave of “new AI” in industry and in computer science.

In computer science, this added momentum to the movement for Artificial General Intelligence (AGI), which our reinforcement learning designs already aimed at. However, there are levels and levels of generality, even in “general intelligence” (https://arxiv.org/pdf/q-bio/0311006.pdf). We now speak of Reinforcement Learning and Approximate Dynamic Programming (RLADP). At WCCI 2014, held in Beijing, I presented detailed roadmaps of how to rise from the most powerful methods popular in computer science even today up to intelligence as general as that of the basic mammal brain; see “From ADP to the brain: foundations, roadmap, challenges and research priorities,” in 2014 International Joint Conference on Neural Networks (IJCNN), pp. 107-111, IEEE (https://arxiv.org/pdf/1404.0554.pdf). This led to intense discussions at Tsinghua and at the NSF of China, which led to new research directions in China, where there have been massive new applications beyond what most researchers in the West consider possible. (See http://1dddas.org/ for the diverse and fragmented communities in the West. Also do a patent search on Werbos for details of how to implement higher-level classical AGI.) Werbos and Davis (2016) show how this view of intelligence fits real-time data from rat brains better than older paradigms for brain modeling.

This year, we have opened the door to a new revolution. Just as adaptive analog neural networks massively and provably open up capabilities beyond old sequential Turing machines, the quantum extension of RLADP offers power far beyond what the usual Quantum Turing Machines (invented by David Deutsch) can offer. It offers true Quantum AGI, which can multiply capabilities by orders of magnitude in any application domain which requires higher intelligence, such as observing the sky, management of complex power grids, and “quantum bromium” (hard cybersecurity). See “Quantum technology to expand soft computing,” Systems and Soft Computing 4: 200031 (https://www.sciencedirect.com/science/article/pii/S2772941922000011), and links on internet issues at build-a-world.org.