Reconstructing Spiking Neural Networks Using a Single Neuron with Autapses
Abstract
Spiking neural networks (SNNs) are well known for their high energy efficiency and strong temporal processing capabilities. However, the multilayer architectures of SNNs often incur substantial communication, computation, and storage costs. Inspired by biological autapses, we develop a simple yet effective framework for reconstructing spiking neural networks using a single neuron with time-delayed autapses (TDA-SNN), together with a dedicated prototype-learning-based optimization method. This design allows a single spiking neuron to dynamically reconfigure its internal temporal states, effectively emulating large-scale architectures such as reservoirs, multilayer perceptrons, and convolutional layers while maintaining efficient learning. Extensive experiments on sequential, event-stream, and image datasets demonstrate that TDA-SNN achieves performance comparable to that of deep SNNs while significantly reducing computational overhead and enhancing internal information storage capacity. These results highlight the potential of single-neuron models as compact and efficient computational units, offering new insights into the development of biologically inspired neuromorphic systems.
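To make the core mechanism concrete, the following is a minimal illustrative sketch of a leaky integrate-and-fire neuron whose own past spikes feed back through several delayed self-connections (autapses). All names, delays, weights, and constants here are assumptions chosen for illustration; they do not reproduce the paper's actual TDA-SNN implementation or its prototype-learning optimization.

```python
import numpy as np

def run_tda_neuron(inputs, delays=(1, 3, 5), w_auto=(0.4, 0.3, 0.2),
                   tau=0.9, v_th=1.0):
    """Simulate one LIF neuron with time-delayed autaptic feedback.

    inputs : 1-D array of input currents per timestep
    delays : feedback delays (in timesteps) of each autapse (illustrative)
    w_auto : weight of each autaptic connection (illustrative)
    tau    : membrane leak factor
    v_th   : firing threshold
    """
    T = len(inputs)
    v = 0.0
    spikes = np.zeros(T)
    for t in range(T):
        # Delayed self-feedback: weighted sum of the neuron's own past spikes.
        feedback = sum(w * spikes[t - d]
                       for w, d in zip(w_auto, delays) if t - d >= 0)
        v = tau * v + inputs[t] + feedback   # leaky integration
        if v >= v_th:                        # threshold crossing -> spike
            spikes[t] = 1.0
            v = 0.0                          # hard reset
    return spikes

# Constant drive; the delayed self-loops shape the resulting spike pattern.
spike_train = run_tda_neuron(np.full(20, 0.3))
```

Because the autaptic delays span several timesteps, the neuron's state at time t depends on a window of its own history, which is what lets a single unit stand in for the temporal mixing normally provided by multiple layers or a reservoir.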