I am interested in human-like reasoning and the “third wave” of artificial intelligence. Humans are capable of finding patterns in remarkably small datasets, learning from just a handful of examples. We use a wide variety of strategies to solve a wide variety of problems. Truly intelligent systems should be able to do likewise. Third-wave AI focuses on systems that can not just make predictions, but form explanations. This goes far beyond first-wave AI (GOFAI, or good old-fashioned AI) and second-wave AI (deep learning). Furthermore, some speculate that third-wave AI might focus on teaching AIs how to learn, that is, meta-learning.
Simulating neuromorphic reservoir computing: Abstract feed-forward hardware models
Recent developments in unconventional hardware using memristors and atomic switch networks have led to renewed interest in neuromorphic hardware solutions. Most hardware models rely upon a reservoir neural network as the basis of any learning, but the distinct differences between software implementations and hardware reality mean that what we take for granted in traditional software reservoirs, such as cycles, loops, infinite energy, and discrete time, may be severely limited or unavailable in hardware. This raises questions about how a hardware implementation would perform, and how these limitations might be overcome. Proposed hardware additions, such as an echoer or an input delay mechanism, address some of them.
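One way to picture the proposed input delay mechanism is as a tapped delay line: a feed-forward substrate that cannot hold state can still see recent history if the last few inputs are presented alongside the current one. The sketch below is my illustration of that general idea, not the proposed hardware design; the function name `delayed_input` is hypothetical.

```python
import numpy as np

def delayed_input(u, taps):
    """Stack each sample of a 1-D signal u with its `taps` previous
    samples, zero-padding at the start. The result is a
    (len(u), taps + 1) matrix: column 0 is u(t), column k is u(t-k).
    Illustrative only: a stand-in for a hardware delay mechanism.
    """
    u = np.asarray(u, dtype=float)
    out = np.zeros((len(u), taps + 1))
    for k in range(taps + 1):
        out[k:, k] = u[:len(u) - k]
    return out

# A memoryless feed-forward map cannot recall u(t-2) on its own, but
# with the delay line that history is simply part of its input.
u = np.sin(np.linspace(0, 6 * np.pi, 200))
X = delayed_input(u, taps=3)  # each row: [u(t), u(t-1), u(t-2), u(t-3)]
```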
Restricted Echo State Networks
Echo state networks are a powerful type of reservoir neural network, but in the original formulation the reservoir is essentially unrestricted. Motivated by limitations in neuromorphic hardware, we remove combinations of the four sources of memory—leaking, loops, cycles, and discrete time—to determine how each influences the reservoir's suitability for learning. We show that loops and cycles can replicate each other, while discrete time is a necessity. The potential hardware limitation of energy conservation proves equivalent to limiting the spectral radius of the reservoir.
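For context, these restrictions start from the standard leaky, discrete-time ESN update, with the reservoir weights rescaled to a chosen spectral radius. The following is a generic textbook sketch in NumPy rather than code from the thesis; the parameter names (`leak`, `rho`) are mine, and I read "loops" as self-connections and "cycles" as longer recurrent paths.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

# Random reservoir, rescaled so its spectral radius is rho < 1.
W = rng.standard_normal((n_res, n_res))
rho = 0.9
W *= rho / max(abs(np.linalg.eigvals(W)))
W_in = rng.standard_normal((n_res, n_in))

leak = 0.3  # leak rate: one of the four sources of memory above

def step(x, u):
    """One discrete-time leaky update:
    x(t+1) = (1 - leak) * x(t) + leak * tanh(W x(t) + W_in u(t)).
    Setting leak = 1 removes leaking; zeroing the diagonal of W
    removes loops; keeping W strictly triangular removes cycles.
    """
    return (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u)

x = np.zeros(n_res)
for t in range(50):
    x = step(x, np.array([np.sin(0.1 * t)]))
```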
Neuromorphic Computing with Reservoir Neural Networks on Memristive Hardware
Building an artificial brain is a goal as old as computer science. Neuromorphic computing takes this in new directions by attempting to physically simulate the human brain. In 2008 this goal received renewed interest due to the memristor, a resistor that has state, and again in 2012 with the atomic switch, a related circuit component. This report details the construction of a simulator for large networks of these devices, including the underlying assumptions and how we model specific physical characteristics. Existing simulations of neuromorphic hardware range from detailed particle-level simulations through to high-level graph-theoretic representations. We develop a simulator that sits in the middle, removing expensive and unnecessary operations found in particle simulators while remaining more device-accurate than a wholly abstract representation. We achieve this with a statistical approach, describing distributions, governed by a small set of parameters, from which we draw idealised device values. This report also explores the applications of these memristive networks in machine learning using reservoir neural networks, and their performance in comparison to existing techniques such as echo state networks (ESNs). Neither memristor nor atomic switch networks prove capable of learning time-series sequences, and the underlying cause is found to be the restrictions that physical laws impose upon circuits. We present a series of restrictions upon an ESN, systematically removing loops, cycles, discrete time, and combinations of these three factors. From this we conclude that removing loops and cycles breaks the “infinite memory” of an ESN, and removing all three renders the reservoir totally incapable of learning.
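To make the statistical approach concrete: instead of simulating device physics directly, each device's characteristics can be drawn once from distributions controlled by a handful of parameters. The sketch below illustrates the idea; the log-normal choice, the function name `sample_devices`, and the specific resistance figures are my assumptions for illustration, not the report's actual model.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_devices(n, r_on_mean=100.0, r_off_mean=16e3, spread=0.2):
    """Draw per-device on/off resistances for n memristors from
    log-normal distributions controlled by a small set of parameters
    (two means and a relative spread). Illustrative only: the real
    simulator's distributions and parameters may differ.
    """
    r_on = rng.lognormal(np.log(r_on_mean), spread, size=n)
    r_off = rng.lognormal(np.log(r_off_mean), spread, size=n)
    return r_on, r_off

r_on, r_off = sample_devices(1000)
# Each simulated device now has its own idealised values, with no
# particle-level computation required.
```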
I infrequently put projects on GitHub, but you are welcome to view what is available there: Aaron Stockdill on GitHub.