QianpengLi577 / SNN_arxiv_daily

This repository records my arXiv subscriptions on spiking neural networks, and [this](https://github.com/shenhaibo123/SNN_summaries) is my collection of summaries.


543. Making a Spiking Net Work: Robust brain-like unsupervised machine learning

  • Making a Spiking Net Work: Robust brain-like unsupervised machine learning. Date: 2022-09-01. First author: Peter G. Stratton. Link.
Comments: 12 pages (manuscript), 5 figures, 10 pages (appendix), 11 pages (extended data)
Mailing date: 2022-09-02

542. Bayesian Continual Learning via Spiking Neural Networks

  • Bayesian Continual Learning via Spiking Neural Networks. Date: 2022-08-29. First author: Nicolas Skatchkovsky. Link.

Abstract: Among the main features of biological intelligence are energy efficiency, capacity for continual adaptation, and risk management via uncertainty quantification. Neuromorphic engineering has been thus far mostly driven by the goal of implementing energy-efficient machines that take inspiration from the time-based computing paradigm of biological brains. In this paper, we take steps towards the design of neuromorphic systems that are capable of adaptation to changing learning tasks, while producing well-calibrated uncertainty quantification estimates. To this end, we derive online learning rules for spiking neural networks (SNNs) within a Bayesian continual learning framework. In it, each synaptic weight is represented by parameters that quantify the current epistemic uncertainty resulting from prior knowledge and observed data. The proposed online rules update the distribution parameters in a streaming fashion as data are observed. We instantiate the proposed approach for both real-valued and binary synaptic weights. Experimental results using Intel's Lava platform show the merits of Bayesian over frequentist learning in terms of capacity for adaptation and uncertainty quantification.
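The per-weight posterior update described in the abstract can be pictured as a streaming conjugate-Gaussian update, where the variance tracks epistemic uncertainty and shrinks as data arrive. A minimal sketch (the per-sample target and observation-noise terms are illustrative assumptions, not the paper's derived rules):

```python
import numpy as np

def bayes_stream_update(mu, var, target, obs_var=1.0):
    """One streaming conjugate-Gaussian update of a weight posterior.

    mu, var : current posterior mean/variance (epistemic uncertainty)
    target  : per-sample estimate of the weight (hypothetical signal)
    obs_var : assumed observation noise variance
    """
    precision = 1.0 / var + 1.0 / obs_var
    new_var = 1.0 / precision
    new_mu = new_var * (mu / var + target / obs_var)
    return new_mu, new_var

# Stream of noisy observations of a "true" weight of 0.5
rng = np.random.default_rng(0)
mu, var = 0.0, 10.0            # broad prior = high epistemic uncertainty
for t in rng.normal(0.5, 1.0, size=200):
    mu, var = bayes_stream_update(mu, var, t)

print(round(mu, 2), round(var, 4))  # posterior concentrates near 0.5
```

The same shrinking-variance behavior is what allows such a model to report calibrated uncertainty alongside each weight.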
Comments: Submitted for journal publication
Mailing date: 2022-08-30

541. Uncertainty Guided Depth Fusion for Spike Camera

  • Uncertainty Guided Depth Fusion for Spike Camera. Date: 2022-08-29. First author: Jianing Li. Link.
Comments: 18 pages, 11 figures, ACM-class: I.2.10
Mailing date: 2022-08-30

540. Scalable Nanophotonic-Electronic Spiking Neural Networks

  • Scalable Nanophotonic-Electronic Spiking Neural Networks. Date: 2022-08-28. First author: Luis El Srouji. Link.

Abstract: Spiking neural networks (SNN) provide a new computational paradigm capable of highly parallelized, real-time processing. Photonic devices are ideal for the design of high-bandwidth, parallel architectures matching the SNN computational paradigm. Co-integration of CMOS and photonic elements allows low-loss photonic devices to be combined with analog electronics for greater flexibility of nonlinear computational elements. As such, we designed and simulated an optoelectronic spiking neuron circuit on a monolithic silicon photonics (SiPh) process that replicates useful spiking behaviors beyond the leaky integrate-and-fire (LIF). Additionally, we explored two learning algorithms with the potential for on-chip learning using Mach-Zehnder Interferometric (MZI) meshes as synaptic interconnects. A variation of Random Backpropagation (RPB) was experimentally demonstrated on-chip and matched the performance of a standard linear regression on a simple classification task. Meanwhile, the Contrastive Hebbian Learning (CHL) rule was applied to a simulated neural network composed of MZI meshes for a random input-output mapping task. The CHL-trained MZI network performed better than random guessing but does not match the performance of the ideal neural network (without the constraints imposed by the MZI meshes). Through these efforts, we demonstrate that co-integrated CMOS and SiPh technologies are well-suited to the design of scalable SNN computing architectures.
Mailing date: 2022-08-30

539. Sub-mW Neuromorphic SNN audio processing applications with Rockpool and Xylo

  • Sub-mW Neuromorphic SNN audio processing applications with Rockpool and Xylo. Date: 2022-08-27. First author: Hannah Bos. Link.

Abstract: Spiking Neural Networks (SNNs) provide an efficient computational mechanism for temporal signal processing, especially when coupled with low-power SNN inference ASICs. SNNs have been historically difficult to configure, lacking a general method for finding solutions for arbitrary tasks. In recent years, gradient-descent optimization methods have been applied to SNNs with increasing ease. SNNs and SNN inference processors therefore offer a good platform for commercial low-power signal processing in energy constrained environments without cloud dependencies. However, to date these methods have not been accessible to ML engineers in industry, requiring graduate-level training to successfully configure a single SNN application. Here we demonstrate a convenient high-level pipeline to design, train and deploy arbitrary temporal signal processing applications to sub-mW SNN inference hardware. We apply a new straightforward SNN architecture designed for temporal signal processing, using a pyramid of synaptic time constants to extract signal features at a range of temporal scales. We demonstrate this architecture on an ambient audio classification task, deployed to the Xylo SNN inference processor in streaming mode. Our application achieves high accuracy (98%) and low latency (100ms) at low power (<4 $\mu$W inference power). Our approach makes training and deploying SNN applications available to ML engineers with general NN backgrounds, without requiring specific prior experience with spiking NNs. We intend for our approach to make Neuromorphic hardware and SNNs an attractive choice for commercial low-power and edge signal processing applications.
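The "pyramid of synaptic time constants" in the abstract can be pictured as a bank of leaky exponential filters applied to the same spike train, one filter per temporal scale. A generic NumPy sketch (not the actual Rockpool API; the time constants are illustrative):

```python
import numpy as np

def synaptic_filter_bank(spikes, taus, dt=1e-3):
    """Filter one input spike train with leaky synapses of different
    time constants, yielding one feature channel per temporal scale."""
    feats = np.zeros((len(taus), len(spikes)))
    for k, tau in enumerate(taus):
        decay = np.exp(-dt / tau)
        state = 0.0
        for t, s in enumerate(spikes):
            state = decay * state + s       # leaky integration
            feats[k, t] = state
    return feats

spikes = np.zeros(1000)
spikes[::100] = 1.0                          # 10 Hz input at dt = 1 ms
feats = synaptic_filter_bank(spikes, taus=[2e-3, 16e-3, 128e-3])
# Slow synapses retain more input history than fast ones
print(feats[:, -1])
```

Each row of `feats` summarizes the input at a different time scale, which is the feature-extraction role the pyramid plays before classification.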
Mailing date: 2022-08-30

538. Neuromorphic Visual Scene Understanding with Resonator Networks

  • Neuromorphic Visual Scene Understanding with Resonator Networks. Date: 2022-08-26. First author: Alpha Renner. Link.

Abstract: Inferring the position of objects and their rigid transformations is still an open problem in visual scene understanding. Here we propose a neuromorphic solution that utilizes an efficient factorization network which is based on three key concepts: (1) a computational framework based on Vector Symbolic Architectures (VSA) with complex-valued vectors; (2) the design of Hierarchical Resonator Networks (HRN) to deal with the non-commutative nature of translation and rotation in visual scenes, when both are used in combination; (3) the design of a multi-compartment spiking phasor neuron model for implementing complex-valued vector binding on neuromorphic hardware. The VSA framework uses vector binding operations to produce generative image models in which binding acts as the equivariant operation for geometric transformations. A scene can therefore be described as a sum of vector products, which in turn can be efficiently factorized by a resonator network to infer objects and their poses. The HRN enables the definition of a partitioned architecture in which vector binding is equivariant for horizontal and vertical translation within one partition, and for rotation and scaling within the other partition. The spiking neuron model allows to map the resonator network onto efficient and low-power neuromorphic hardware. In this work, we demonstrate our approach using synthetic scenes composed of simple 2D shapes undergoing rigid geometric transformations and color changes. A companion paper demonstrates this approach in real-world application scenarios for machine vision and robotics.
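Complex-valued VSA binding is typically the elementwise product of unit-magnitude phasor vectors, which is exactly invertible via the complex conjugate. A minimal sketch of that binding/unbinding step (a generic VSA illustration, not the paper's full hierarchical resonator network):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 1024

def random_phasor(d):
    """Unit-magnitude complex vector: one random phase per component."""
    return np.exp(1j * rng.uniform(0, 2 * np.pi, d))

shape, position = random_phasor(D), random_phasor(D)

# Binding = elementwise product; a scene is a sum of such bound pairs.
scene = shape * position

# Unbinding with the conjugate recovers the other factor.
recovered = scene * np.conj(position)
similarity = np.abs(np.vdot(recovered, shape)) / D
print(similarity)  # ~1.0: exact recovery for a single bound pair
```

A resonator network searches over candidate factors (which shape, which position) by iterating exactly this unbind-and-compare step until the factorization settles.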
Comments: 15 pages, 6 figures, ACM-class: I.4.8
Mailing date: 2022-08-30

537. Unsupervised Spike Depth Estimation via Cross-modality Cross-domain Knowledge Transfer

  • Unsupervised Spike Depth Estimation via Cross-modality Cross-domain Knowledge Transfer. Date: 2022-08-26. First author: Jiaming Liu. Link.

Abstract: The neuromorphic spike camera generates data streams with high temporal resolution in a bio-inspired way, which has vast potential in real-world applications such as autonomous driving. In contrast to RGB streams, spike streams have an inherent advantage to overcome motion blur, leading to more accurate depth estimation for high-velocity objects. However, training the spike depth estimation network in a supervised manner is almost impossible since it is extremely laborious and challenging to obtain paired depth labels for temporally intensive spike streams. In this paper, instead of building a spike stream dataset with full depth labels, we transfer knowledge from the open-source RGB datasets (e.g., KITTI) and estimate spike depth in an unsupervised manner. The key challenges for such a problem lie in the modality gap between RGB and spike modalities, and the domain gap between labeled source RGB and unlabeled target spike domains. To overcome these challenges, we introduce a cross-modality cross-domain (BiCross) framework for unsupervised spike depth estimation. Our method narrows the enormous gap between source RGB and target spike by introducing the mediate simulated source spike domain. To be specific, for the cross-modality phase, we propose a novel Coarse-to-Fine Knowledge Distillation (CFKD), which transfers the image and pixel level knowledge from source RGB to source spike. Such design leverages the abundant semantic and dense temporal information of RGB and spike modalities respectively. For the cross-domain phase, we introduce the Uncertainty Guided Mean-Teacher (UGMT) to generate reliable pseudo labels with uncertainty estimation, alleviating the shift between the source spike and target spike domains. Besides, we propose a Global-Level Feature Alignment method (GLFA) to align the feature between two domains and generate more reliable pseudo labels.
Mailing date: 2022-08-29

536. Uncertainty Guided Depth Fusion for Spike Camera

  • Uncertainty Guided Depth Fusion for Spike Camera. Date: 2022-08-26. First author: Jianing Li. Link.

Abstract: Depth estimation is essential for various important real-world applications such as autonomous driving. However, it suffers from severe performance degradation in high-velocity scenarios since traditional cameras can only capture blurred images. To deal with this problem, the spike camera is designed to capture the pixel-wise luminance intensity at high frame rate. However, depth estimation with spike camera remains very challenging using traditional monocular or stereo depth estimation algorithms, which are based on photometric consistency. In this paper, we propose a novel Uncertainty-Guided Depth Fusion (UGDF) framework to fuse the predictions of monocular and stereo depth estimation networks for spike camera. Our framework is motivated by the fact that stereo spike depth estimation achieves better results at close range while monocular spike depth estimation obtains better results at long range. Therefore, we introduce a dual-task depth estimation architecture with a joint training strategy and estimate the distributed uncertainty to fuse the monocular and stereo results. In order to demonstrate the advantage of spike depth estimation over traditional camera depth estimation, we contribute a spike-depth dataset named CitySpike20K, which contains 20K paired samples, for spike depth estimation. UGDF achieves state-of-the-art results on CitySpike20K, surpassing all monocular or stereo spike depth estimation baselines. We conduct extensive experiments to evaluate the effectiveness and generalization of our method on CitySpike20K. To the best of our knowledge, our framework is the first dual-task fusion framework for spike camera depth estimation. Code and dataset will be released.
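The fusion idea, trusting stereo up close and monocular far away, can be sketched as per-pixel inverse-uncertainty weighting (an assumption for illustration; UGDF's exact fusion rule may differ):

```python
import numpy as np

def fuse_depth(d_mono, d_stereo, u_mono, u_stereo):
    """Fuse two depth maps by inverse-uncertainty weighting:
    the prediction with lower uncertainty dominates per pixel."""
    w_mono = 1.0 / (u_mono + 1e-8)
    w_stereo = 1.0 / (u_stereo + 1e-8)
    return (w_mono * d_mono + w_stereo * d_stereo) / (w_mono + w_stereo)

# Toy 1x4 "image": stereo is confident near, monocular confident far
d_mono   = np.array([2.2, 5.0, 20.0, 40.0])
d_stereo = np.array([2.0, 5.2, 25.0, 55.0])
u_mono   = np.array([0.9, 0.5, 0.1, 0.1])   # low uncertainty far away
u_stereo = np.array([0.1, 0.1, 0.8, 0.9])   # low uncertainty up close
fused = fuse_depth(d_mono, d_stereo, u_mono, u_stereo)
print(fused)  # near pixels track stereo, far pixels track monocular
```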
Comments: 18 pages, 11 figures, submitted to AAAI 2023, ACM-class: I.2.10
Mailing date: 2022-08-29

535. CMOS-based area-and-power-efficient neuron and synapse circuits for time-domain analog spiking neural networks

  • CMOS-based area-and-power-efficient neuron and synapse circuits for time-domain analog spiking neural networks. Date: 2022-08-25. First author: Xiangyu Chen. Link.

Abstract: Conventional neural structures tend to communicate through analog quantities such as currents or voltages, however, as CMOS devices shrink and supply voltages decrease, the dynamic range of voltage/current-domain analog circuits becomes narrower, the available margin becomes smaller, and noise immunity decreases. More than that, the use of operational amplifiers (op-amps) and clocked or asynchronous comparators in conventional designs leads to high energy consumption and large chip area, which would be detrimental to building spiking neural networks. In view of this, we propose a neural structure for generating and transmitting time-domain signals, including a neuron module, a synapse module, and two weight modules. The proposed neural structure is driven by leakage currents in the transistor triode region and does not use op-amps and comparators, thus providing higher energy and area efficiency compared to conventional designs. In addition, the structure provides greater noise immunity due to internal communication via time-domain signals, which simplifies the wiring between the modules. The proposed neural structure is fabricated using TSMC 65 nm CMOS technology. The proposed neuron and synapse occupy an area of 127 um2 and 231 um2, respectively, while achieving millisecond time constants. Actual chip measurements show that the proposed structure successfully implements the temporal signal communication function with millisecond time constants, which is a critical step toward hardware reservoir computing for human-computer interaction.
Mailing date: 2022-08-26

534. Human-Level Control through Directly-Trained Deep Spiking Q-Networks

  • Human-Level Control through Directly-Trained Deep Spiking Q-Networks. Date: 2022-08-25. First author: Guisong Liu. Link.
Mailing date: 2022-08-26

533. ReckOn: A 28nm Sub-mm2 Task-Agnostic Spiking Recurrent Neural Network Processor Enabling On-Chip Learning over Second-Long Timescales

  • ReckOn: A 28nm Sub-mm2 Task-Agnostic Spiking Recurrent Neural Network Processor Enabling On-Chip Learning over Second-Long Timescales. Date: 2022-08-20. First author: Charlotte Frenkel. Link.

Abstract: A robust real-world deployment of autonomous edge devices requires on-chip adaptation to user-, environment- and task-induced variability. Due to on-chip memory constraints, prior learning devices were limited to static stimuli with no temporal contents. We propose a 0.45-mm$^2$ spiking RNN processor enabling task-agnostic online learning over seconds, which we demonstrate for navigation, gesture recognition, and keyword spotting within a 0.8% memory overhead and a <150-$\mu$W training power budget.
Comments: Published in the 2022 IEEE International Solid-State Circuits Conference (ISSCC), 2022. DOI: 10.1109/ISSCC42614.2022.9731734
Mailing date: 2022-08-23

532. Combinatorial optimization solving by coherent Ising machines based on spiking neural networks

  • Combinatorial optimization solving by coherent Ising machines based on spiking neural networks. Date: 2022-08-16. First author: Bo Lu. Link.

Abstract: Spiking neural networks are a kind of neuromorphic computing which is believed to improve the level of intelligence and provide advantages for quantum computing. In this work, we address this issue by designing an optical spiking neural network and prove that it can be used to accelerate the speed of computation, especially on combinatorial optimization problems. Here the spiking neural network is constructed by antisymmetrically coupled degenerate optical parametric oscillator pulses and dissipative pulses. A nonlinear transfer function is chosen to mitigate amplitude inhomogeneities and destabilize the resulting local minima according to the dynamical behavior of spiking neurons. It is numerically proved that the spiking neural network coherent Ising machine has excellent performance on combinatorial optimization problems, which is expected to offer new applications for neural computing and optical computing.
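The objective such Ising machines minimize is H = -1/2 Σ_ij J_ij s_i s_j over spins s_i ∈ {−1, +1}. A toy classical minimizer for that objective (greedy spin flips with random restarts; this illustrates the optimization target only, not the optical DOPO dynamics or the paper's nonlinear transfer function):

```python
import numpy as np

def ising_energy(J, s):
    """H = -1/2 * s^T J s, the objective a coherent Ising machine minimizes."""
    return -0.5 * s @ J @ s

def greedy_min(J, rng, sweeps=50):
    """Flip any spin whose flip lowers the energy until none does."""
    s = rng.choice([-1.0, 1.0], size=len(J))
    for _ in range(sweeps):
        improved = False
        for i in range(len(J)):
            if 2.0 * s[i] * (J[i] @ s) < 0:   # energy change of flipping i
                s[i] = -s[i]
                improved = True
        if not improved:
            break
    return s

# Two ferromagnetic 3-spin clusters with weak antiferromagnetic cross links
J = np.block([[np.ones((3, 3)), -0.5 * np.ones((3, 3))],
              [-0.5 * np.ones((3, 3)), np.ones((3, 3))]])
np.fill_diagonal(J, 0.0)

rng = np.random.default_rng(2)
best = min((greedy_min(J, rng) for _ in range(20)),
           key=lambda s: ising_energy(J, s))
# Ground state: each cluster aligned internally, clusters anti-aligned
print(ising_energy(J, best))
```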
Comments: 5 pages, 4 figures, comments are welcome
Mailing date: 2022-08-17

531. Scaling Up Dynamic Graph Representation Learning via Spiking Neural Networks

  • Scaling Up Dynamic Graph Representation Learning via Spiking Neural Networks. Date: 2022-08-15. First author: Jintang Li. Link.

Abstract: Recent years have seen a surge in research on dynamic graph representation learning, which aims to model temporal graphs that are dynamic and evolving constantly over time. However, current work typically models graph dynamics with recurrent neural networks (RNNs), making them suffer seriously from computation and memory overheads on large temporal graphs. So far, scalability of dynamic graph representation learning on large temporal graphs remains one of the major challenges. In this paper, we present a scalable framework, namely SpikeNet, to efficiently capture the temporal and structural patterns of temporal graphs. We explore a new direction in that we can capture the evolving dynamics of temporal graphs with spiking neural networks (SNNs) instead of RNNs. As a low-power alternative to RNNs, SNNs explicitly model graph dynamics as spike trains of neuron populations and enable spike-based propagation in an efficient way. Experiments on three large real-world temporal graph datasets demonstrate that SpikeNet outperforms strong baselines on the temporal node classification task with lower computational costs. Particularly, SpikeNet generalizes to a large temporal graph (2M nodes and 13M edges) with significantly fewer parameters and computation overheads. Our code is publicly available at https://github.com/EdisonLeeeee/SpikeNet
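The core substitution, a LIF neuron in place of an RNN cell, can be sketched as leaky integration of per-snapshot node features into a binary spike train (a generic illustration; SpikeNet's actual architecture is in the linked code):

```python
import numpy as np

def lif_sequence(x, w, tau=0.5, v_th=1.0):
    """Run a LIF neuron over a feature sequence (e.g. aggregated
    neighbor features of one node, one vector per graph snapshot).
    Returns a binary spike train instead of a dense RNN state."""
    v, spikes = 0.0, []
    for x_t in x:
        v = tau * v + w @ x_t          # leaky integration of input
        s = 1.0 if v >= v_th else 0.0  # fire when threshold crossed
        v = v * (1.0 - s)              # hard reset after a spike
        spikes.append(s)
    return np.array(spikes)

rng = np.random.default_rng(3)
x = rng.random((10, 4))                # 10 graph snapshots, 4 features
w = np.full(4, 0.4)
spikes = lif_sequence(x, w)
print(spikes)  # sparse binary events carry the temporal pattern
```

Because the state that propagates downstream is binary and sparse, the per-step cost is far below that of a dense recurrent state update.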
Comments: Preprint; code available at https://github.com/EdisonLeeeee/SpikeNet
Mailing date: 2022-08-23

530. Encoding Integers and Rationals on Neuromorphic Computers using Virtual Neuron

  • Encoding Integers and Rationals on Neuromorphic Computers using Virtual Neuron. Date: 2022-08-15. First author: Prasanna Date. Link.

英文摘要 Neuromorphic computers perform computations by emulating the human brain, and use extremely low power. They are expected to be indispensable for energy-efficient computing in the future. While they are primarily used in spiking neural network-based machine learning applications, neuromorphic computers are known to be Turing-complete, and thus, capable of general-purpose computation. However, to fully realize their potential for general-purpose, energy-efficient computing, it is important to devise efficient mechanisms for encoding numbers. Current encoding approaches have limited applicability and may not be suitable for general-purpose computation. In this paper, we present the virtual neuron as an encoding mechanism for integers and rational numbers. We evaluate the performance of the virtual neuron on physical and simulated neuromorphic hardware and show that it can perform an addition operation using 23 nJ of energy on average using a mixed-signal memristor-based neuromorphic processor. We also demonstrate its utility by using it in some of the mu-recursive functions, which are the building blocks of general-purpose computation.
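The abstract does not spell out the virtual-neuron mechanism itself, but the general idea of spreading a number across neurons with place values can be illustrated with a plain binary encoding (purely illustrative; not the paper's scheme):

```python
import numpy as np

def encode(n, bits=8):
    """Place-value encoding: neuron k carries bit k (weight 2**k)."""
    return np.array([(n >> k) & 1 for k in range(bits)])

def decode(spikes):
    return int(sum(int(s) << k for k, s in enumerate(spikes)))

a, b = encode(23), encode(42)
# Addition reduces to summing per-neuron activity with its place values:
total = int(sum((int(sa) + int(sb)) << k
                for k, (sa, sb) in enumerate(zip(a, b))))
print(total)  # 65
```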
Mailing date: 2022-08-17

529. Convolutional Spiking Neural Networks for Detecting Anticipatory Brain Potentials Using Electroencephalogram

  • Convolutional Spiking Neural Networks for Detecting Anticipatory Brain Potentials Using Electroencephalogram. Date: 2022-08-14. First author: Nathan Lutes. Link.

Abstract: Spiking neural networks (SNNs) are receiving increased attention as a means to develop "biologically plausible" machine learning models. These networks mimic synaptic connections in the human brain and produce spike trains, which can be approximated by binary values, precluding high computational cost with floating-point arithmetic circuits. Recently, the addition of convolutional layers to combine the feature extraction power of convolutional networks with the computational efficiency of SNNs has been introduced. In this paper, the feasibility of using a convolutional spiking neural network (CSNN) as a classifier to detect anticipatory slow cortical potentials related to braking intention in human participants using an electroencephalogram (EEG) was studied. The EEG data was collected during an experiment wherein participants operated a remote controlled vehicle on a testbed designed to simulate an urban environment. Participants were alerted to an incoming braking event via an audio countdown to elicit anticipatory potentials that were then measured using an EEG. The CSNN's performance was compared to a standard convolutional neural network (CNN) and three graph neural networks (GNNs) via 10-fold cross-validation. The results showed that the CSNN outperformed the other neural networks.
Comments: 10 pages, 5 figures, IEEE Transactions on Neural Networks submission
Mailing date: 2022-08-16

528. A Time-to-first-spike Coding and Conversion Aware Training for Energy-Efficient Deep Spiking Neural Network Processor Design

  • A Time-to-first-spike Coding and Conversion Aware Training for Energy-Efficient Deep Spiking Neural Network Processor Design. Date: 2022-08-09. First author: Dongwoo Lew. Link.

Abstract: In this paper, we present an energy-efficient SNN architecture, which can seamlessly run deep spiking neural networks (SNNs) with improved accuracy. First, we propose a conversion aware training (CAT) to reduce ANN-to-SNN conversion loss without hardware implementation overhead. In the proposed CAT, the activation function developed for simulating SNN during ANN training, is efficiently exploited to reduce the data representation error after conversion. Based on the CAT technique, we also present a time-to-first-spike coding that allows lightweight logarithmic computation by utilizing spike time information. The SNN processor design that supports the proposed techniques has been implemented using 28nm CMOS process. The processor achieves the top-1 accuracies of 91.7%, 67.9% and 57.4% with inference energy of 486.7uJ, 503.6uJ, and 1426uJ to process CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively, when running VGG-16 with 5bit logarithmic weights.
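Time-to-first-spike (TTFS) coding maps a larger activation to an earlier spike, so each neuron conveys its value through a single spike time. A minimal sketch with a linear time mapping (the paper pairs TTFS with logarithmic computation; this illustration omits that part):

```python
import numpy as np

def ttfs_encode(x, t_max=16):
    """Time-to-first-spike: larger activation -> earlier spike.
    Zero activations never fire (time = t_max sentinel)."""
    x = np.clip(x, 0.0, 1.0)
    t = np.where(x > 0, np.round((1.0 - x) * (t_max - 1)), t_max)
    return t.astype(int)

def ttfs_decode(t, t_max=16):
    return np.where(t < t_max, 1.0 - t / (t_max - 1), 0.0)

x = np.array([1.0, 0.5, 0.1, 0.0])
t = ttfs_encode(x)
print(t)                       # strongest input spikes first
print(np.round(ttfs_decode(t), 2))
```

Because each neuron fires at most once, the number of synaptic events (and hence energy) per inference is bounded by the neuron count, which is the efficiency argument behind TTFS processors.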
Comments: Accepted to Design Automation Conference 2022. DOI: 10.1145/3489517.3530457
Mailing date: 2022-08-10

527. Denoising Induction Motor Sounds Using an Autoencoder

  • Denoising Induction Motor Sounds Using an Autoencoder. Date: 2022-08-08. First author: Thanh Tran. Link.

Abstract: Denoising is the process of removing noise from sound signals while improving the quality and adequacy of the sound signals. Denoising sound has many applications in speech processing, sound events classification, and machine failure detection systems. This paper describes a method for creating an autoencoder to map noisy machine sounds to clean sounds for denoising purposes. There are several types of noise in sounds, for example, environmental noise and generated frequency-dependent noise from signal processing methods. Noise generated by environmental activities is environmental noise. In the factory, environmental noise can be created by vehicles, drilling, people working or talking in the survey area, wind, and flowing water. Those noises appear as spikes in the sound record. In the scope of this paper, we demonstrate the removal of generated noise with Gaussian distribution and the environmental noise with a specific example of the water sink faucet noise from the induction motor sounds. The proposed method was trained and verified on 49 normal function sounds and 197 horizontal misalignment fault sounds from the Machinery Fault Database (MAFAULDA). The mean square error (MSE) was used as the assessment criterion to evaluate the similarity between denoised sounds using the proposed autoencoder and the original sounds in the test set. The MSE is below or equal to 0.14 when denoising both types of noise on 15 test sounds of the normal function category. The MSE is below or equal to 0.15 when denoising 60 test sounds of the horizontal misalignment fault category. The low MSE shows that both the generated Gaussian noise and the environmental noise were almost removed from the original sounds with the proposed trained autoencoder.
Comments: 9 pages, 10 figures, conference
Mailing date: 2022-08-10

526. Spiking Neural Predictive Coding for Continual Learning from Data Streams

  • Spiking Neural Predictive Coding for Continual Learning from Data Streams. Date: 2022-08-08. First author: Alex. Link.
Comments: Newest revised version of the manuscript
Mailing date: 2022-08-09

525. Neuro-symbolic computing with spiking neural networks

  • Neuro-symbolic computing with spiking neural networks. Date: 2022-08-04. First author: Dominik Dold. Link.

Abstract: Knowledge graphs are an expressive and widely used data structure due to their ability to integrate data from different domains in a sensible and machine-readable way. Thus, they can be used to model a variety of systems such as molecules and social networks. However, it still remains an open question how symbolic reasoning could be realized in spiking systems and, therefore, how spiking neural networks could be applied to such graph data. Here, we extend previous work on spike-based graph algorithms by demonstrating how symbolic and multi-relational information can be encoded using spiking neurons, allowing reasoning over symbolic structures like knowledge graphs with spiking neural networks. The introduced framework is enabled by combining the graph embedding paradigm and the recent progress in training spiking neural networks using error backpropagation. The presented methods are applicable to a variety of spiking neuron models and can be trained end-to-end in combination with other differentiable network architectures, which we demonstrate by implementing a spiking relational graph neural network.
Comments: Accepted for publication at the International Conference on Neuromorphic Systems (ICONS) 2022
Mailing date: 2022-08-05

524. Style Transfer of Black and White Silhouette Images using CycleGAN and a Randomly Generated Dataset

  • Style Transfer of Black and White Silhouette Images using CycleGAN and a Randomly Generated Dataset. Date: 2022-08-03. First author: Worasait Suwannik. Link.

Abstract: CycleGAN can be used to transfer an artistic style to an image. It does not require pairs of source and stylized images to train a model. Taking this advantage, we propose using randomly generated data to train a machine learning model that can transfer traditional art style to a black and white silhouette image. The result is noticeably better than the previous neural style transfer methods. However, there are some areas for improvement, such as removing artifacts and spikes from the transformed image.
Mailing date: 2022-08-09

523. LaneSNNs: Spiking Neural Networks for Lane Detection on the Loihi Neuromorphic Processor

  • LaneSNNs: Spiking Neural Networks for Lane Detection on the Loihi Neuromorphic Processor. Date: 2022-08-03. First author: Alberto Viale. Link.

Abstract: Autonomous Driving (AD) related features represent important elements for the next generation of mobile robots and autonomous vehicles focused on increasingly intelligent, autonomous, and interconnected systems. The applications involving the use of these features must provide, by definition, real-time decisions, and this property is key to avoid catastrophic accidents. Moreover, all the decision processes must require low power consumption, to increase the lifetime and autonomy of battery-driven systems. These challenges can be addressed through efficient implementations of Spiking Neural Networks (SNNs) on Neuromorphic Chips and the use of event-based cameras instead of traditional frame-based cameras. In this paper, we present a new SNN-based approach, called LaneSNN, for detecting the lanes marked on the streets using the event-based camera input. We develop four novel SNN models characterized by low complexity and fast response, and train them using an offline supervised learning rule. Afterward, we implement and map the learned SNN models onto the Intel Loihi Neuromorphic Research Chip. For the loss function, we develop a novel method based on the linear composition of Weighted binary Cross Entropy (WCE) and Mean Squared Error (MSE) measures. Our experimental results show a maximum Intersection over Union (IoU) measure of about 0.62 and very low power consumption of about 1 W. The best IoU is achieved with an SNN implementation that occupies only 36 neurocores on the Loihi processor while providing a low latency of less than 8 ms to recognize an image, thereby enabling real-time performance. The IoU measures provided by our networks are comparable with the state-of-the-art, but at a much lower power consumption of about 1 W.
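The loss described in the abstract, a linear composition of weighted binary cross-entropy (WCE) and mean squared error (MSE), can be sketched as follows (the class weights and mixing coefficients are illustrative assumptions, not the paper's tuned values):

```python
import numpy as np

def wce(p, y, w_pos=2.0, w_neg=1.0, eps=1e-7):
    """Weighted binary cross-entropy: up-weight the (rare) lane pixels."""
    p = np.clip(p, eps, 1.0 - eps)
    return -np.mean(w_pos * y * np.log(p) + w_neg * (1 - y) * np.log(1 - p))

def lane_loss(p, y, alpha=0.5, beta=0.5):
    """Linear composition of WCE and MSE (alpha/beta are illustrative)."""
    return alpha * wce(p, y) + beta * np.mean((p - y) ** 2)

y = np.array([0.0, 0.0, 1.0, 1.0])        # ground-truth lane mask (flattened)
good = np.array([0.1, 0.2, 0.9, 0.8])
bad  = np.array([0.9, 0.8, 0.1, 0.2])
print(lane_loss(good, y), lane_loss(bad, y))  # better prediction -> lower loss
```

Combining a classification term with a regression term like this lets the loss penalize both wrong pixel labels and poorly calibrated confidence in one objective.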
Comments: To appear at the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022)
Mailing date: 2022-08-05

522. Making a Spiking Net Work: Robust brain-like unsupervised machine learning

  • Making a Spiking Net Work: Robust brain-like unsupervised machine learning. Date: 2022-08-02. First author: Peter G. Stratton. Link.

摘要:在过去十年中,人工智能(AI)的兴趣激增几乎完全是由人工神经网络(ANN)的进步推动的。虽然人工神经网络为许多以前难以解决的问题提供了最先进的性能,但它们需要大量的数据和计算资源用于训练,并且由于它们采用监督学习,它们通常需要知道每个训练示例的正确标记响应,从而限制了它们在现实领域的可扩展性。脉冲神经网络(SNN)是神经网络的一种替代方案,它使用更多类似大脑的人工神经元,并且可以使用无监督学习来发现输入数据中的可识别特征,而无需知道正确的响应。然而,SNN难以保持动态稳定性,无法与ANN的精度相匹配。在这里,我们展示了SNN如何克服文献中确定的许多缺点,包括为消失脉冲问题提供一个原则性解决方案

英文摘要 The surge in interest in Artificial Intelligence (AI) over the past decade has been driven almost exclusively by advances in Artificial Neural Networks (ANNs). While ANNs set state-of-the-art performance for many previously intractable problems, they require large amounts of data and computational resources for training, and since they employ supervised learning they typically need to know the correctly labelled response for every training example, limiting their scalability for real-world domains. Spiking Neural Networks (SNNs) are an alternative to ANNs that use more brain-like artificial neurons and can use unsupervised learning to discover recognizable features in the input data without knowing correct responses. SNNs, however, struggle with dynamical stability and cannot match the accuracy of ANNs. Here we show how an SNN can overcome many of the shortcomings that have been identified in the literature, including offering a principled solution to the vanishing spike problem, to outperform all existing shallow SNNs and equal the performance of an ANN. It accomplishes this while using unsupervised learning with unlabeled data and only 1/50th of the training epochs (labelled data is used only for a final simple linear readout layer). This result makes SNNs a viable new method for fast, accurate, efficient, explainable, and re-deployable machine learning with unlabeled datasets.
注释 12 pages (manuscript), 10 pages (appendix), 10 pages (extended data)
邮件日期 2022年08月03日
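
The training split the abstract describes, unsupervised feature discovery followed by a simple labelled linear readout, can be sketched as below. The "SNN features" here are only a stand-in (a random projection with a spike-like threshold); the function and parameter names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for features the unsupervised spiking layers would discover:
# a fixed random projection with a spike-like threshold (illustrative only).
def snn_features(x, w_feat, theta=0.0):
    return (x @ w_feat > theta).astype(float)

# Toy two-class problem
n, d, h = 200, 10, 64
x = rng.normal(size=(n, d))
y = (x[:, 0] + x[:, 1] > 0).astype(float)

w_feat = rng.normal(size=(d, h))
f = snn_features(x, w_feat)

# Labelled data is used only for the final simple linear readout,
# mirroring the unsupervised-features + linear-readout split in the paper.
w_out, *_ = np.linalg.lstsq(f, y, rcond=None)
acc = float(np.mean((f @ w_out > 0.5) == y))
```

The point of the sketch is the division of labour: the feature extractor never sees labels, and only the least-squares readout does.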

521. MT-SNN: Spiking Neural Network that Enables Single-Tasking of Multiple Tasks

  • MT-SNN: Spiking Neural Network that Enables Single-Tasking of Multiple Tasks. Date: 2022-08-02. First author: Paolo G. Cachi. Link.

Abstract: In this paper we explore capabilities of spiking neural networks in solving multi-task classification problems using the approach of single-tasking of multiple tasks. We designed and implemented a multi-task spiking neural network (MT-SNN) that can learn two or more classification tasks while performing one task at a time. The task to perform is selected by modulating the firing threshold of leaky integrate and fire neurons used in this work. The network is implemented using Intel's Lava platform for the Loihi2 neuromorphic chip. Tests are performed on dynamic multitask classification for NMNIST data. The results show that MT-SNN effectively learns multiple tasks by modifying its dynamics, namely, the spiking neurons' firing threshold.
Comments: 4 pages, 2 figures
Mailing date: 2022-08-03
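
The task-selection mechanism above, modulating the firing threshold of a leaky integrate-and-fire (LIF) neuron, can be illustrated with a minimal LIF model. The leak factor, threshold values, and input train are illustrative assumptions, not the paper's parameters.

```python
def lif_run(inputs, threshold, leak=0.9):
    """Minimal leaky integrate-and-fire neuron. The firing threshold is
    the knob MT-SNN modulates to select the active task (values here
    are illustrative, not the paper's)."""
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i          # leaky integration
        if v >= threshold:
            spikes.append(1)
            v = 0.0               # reset membrane potential after a spike
        else:
            spikes.append(0)
    return spikes

inputs = [0.6, 0.6, 0.6, 0.6, 0.6]
low = sum(lif_run(inputs, threshold=1.0))   # "task A": lower threshold
high = sum(lif_run(inputs, threshold=2.0))  # "task B": higher threshold
```

The same input train produces different firing patterns under the two thresholds, which is the dynamical handle the network uses to switch tasks.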

520. Spiking Graph Convolutional Networks

  • Spiking Graph Convolutional Networks. Date: 2022-08-02. First author: Zulun Zhu. Link.
Comments: Accepted by IJCAI 2022; Code available at https://github.com/ZulunZhu/SpikingGCN
Mailing date: 2022-08-03

519. enpheeph: A Fault Injection Framework for Spiking and Compressed Deep Neural Networks

  • enpheeph: A Fault Injection Framework for Spiking and Compressed Deep Neural Networks. Date: 2022-07-31. First author: Alessio Colucci. Link.

Abstract: Research on Deep Neural Networks (DNNs) has focused on improving performance and accuracy for real-world deployments, leading to new models, such as Spiking Neural Networks (SNNs), and optimization techniques, e.g., quantization and pruning for compressed networks. However, the deployment of these innovative models and optimization techniques introduces possible reliability issues, which is a pillar for DNNs to be widely used in safety-critical applications, e.g., autonomous driving. Moreover, scaling technology nodes have the associated risk of multiple faults happening at the same time, a possibility not addressed in state-of-the-art resiliency analyses. Towards better reliability analysis for DNNs, we present enpheeph, a Fault Injection Framework for Spiking and Compressed DNNs. The enpheeph framework enables optimized execution on specialized hardware devices, e.g., GPUs, while providing complete customizability to investigate different fault models, emulating various reliability constraints and use-cases. Hence, the faults can be executed on SNNs as well as compressed networks with minimal-to-none modifications to the underlying code, a feat that is not achievable by other state-of-the-art tools. To evaluate our enpheeph framework, we analyze the resiliency of different DNN and SNN models, with different compression techniques. By injecting a random and increasing number of faults, we show that DNNs can show a reduction in accuracy with a fault rate as low as 7 x 10 ^ (-7) faults per parameter, with an accuracy drop higher than 40%. Run-time overhead when executing enpheeph is less than 20% of the baseline execution time when executing 100 000 faults concurrently, at least 10x lower than state-of-the-art frameworks, making enpheeph future-proof for complex fault injection scenarios. We release enpheeph at https://github.com/Alexei95/enpheeph.
Comments: Source code: https://github.com/Alexei95/enpheeph. To appear at the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2022
Mailing date: 2022-08-02
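
A common fault model in this kind of resiliency analysis is a single bit flip in a stored parameter. The standalone sketch below shows that idea on float32 values; it is not the enpheeph API, and the helper names are invented.

```python
import random
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a float32 parameter, a typical fault model in
    injection frameworks such as enpheeph (this is not its actual API)."""
    packed = struct.unpack("<I", struct.pack("<f", value))[0]
    return struct.unpack("<f", struct.pack("<I", packed ^ (1 << bit)))[0]

random.seed(0)
weights = [0.5, -0.25, 1.0]

# Inject a single random fault into one parameter
idx = random.randrange(len(weights))
faulty = weights[:]
faulty[idx] = flip_bit(faulty[idx], random.randrange(32))
```

Which bit is hit matters: flipping a low mantissa bit barely perturbs the value, while flipping the sign or an exponent bit can change it drastically, which is why accuracy can collapse at very low fault rates.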

518. Ultra-low Latency Adaptive Local Binary Spiking Neural Network with Accuracy Loss Estimator

  • Ultra-low Latency Adaptive Local Binary Spiking Neural Network with Accuracy Loss Estimator. Date: 2022-07-31. First author: Changqing Xu. Link.

Abstract: Spiking neural network (SNN) is a brain-inspired model which has more spatio-temporal information processing capacity and computational energy efficiency. However, with the increasing depth of SNNs, the memory problem caused by the weights of SNNs has gradually attracted attention. Inspired by Artificial Neural Networks (ANNs) quantization technology, binarized SNN (BSNN) is introduced to solve the memory problem. Due to the lack of suitable learning algorithms, BSNN is usually obtained by ANN-to-SNN conversion, whose accuracy will be limited by the trained ANNs. In this paper, we propose an ultra-low latency adaptive local binary spiking neural network (ALBSNN) with accuracy loss estimators, which dynamically selects the network layers to be binarized to ensure the accuracy of the network by evaluating the error caused by the binarized weights during the network learning process. Experimental results show that this method can reduce storage space by more than 20% without losing network accuracy. At the same time, in order to accelerate the training speed of the network, the global average pooling (GAP) layer is introduced to replace the fully connected layers by the combination of convolution and pooling, so that SNNs can use a small number of time steps to obtain better recognition accuracy. In the extreme case of using only one time step, we still can achieve 92.92%, 91.63%, and 63.54% testing accuracy on three different datasets, FashionMNIST, CIFAR-10, and CIFAR-100, respectively.
Mailing date: 2022-08-02
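
The layer-selection idea above, estimate the error a layer would incur if binarized and only binarize layers where it is small, can be sketched with a simple relative-error heuristic. This is in the spirit of ALBSNN's accuracy-loss estimator, not its actual criterion; the tolerance and layer shapes are assumptions.

```python
import numpy as np

def binarize(w):
    """Sign binarization with a per-layer scaling factor (XNOR-style)."""
    alpha = np.mean(np.abs(w))
    return alpha * np.sign(w)

def binarization_error(w):
    """Relative weight-reconstruction error; a heuristic stand-in for the
    paper's accuracy-loss estimator."""
    return np.linalg.norm(w - binarize(w)) / np.linalg.norm(w)

rng = np.random.default_rng(0)
layers = {
    "conv1": rng.normal(size=(16, 3)),        # widely spread weights
    "fc": rng.uniform(0.9, 1.1, size=(8, 4)),  # near-constant magnitudes
}

# Binarize only the layers whose estimated error stays below a tolerance
tol = 0.5
selected = [name for name, w in layers.items() if binarization_error(w) < tol]
```

Layers whose weight magnitudes are nearly uniform lose little information under sign binarization, so they are safe to binarize; layers with widely spread magnitudes incur a large error and are kept at full precision.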

517. Text Classification in Memristor-based Spiking Neural Networks

  • Text Classification in Memristor-based Spiking Neural Networks. Date: 2022-07-31. First author: Jinqi Huang. Link.
Comments: 23 pages, 5 figures
Mailing date: 2022-08-02

516. Spiking neural networks trained via proxy

  • Spiking neural networks trained via proxy. Date: 2022-07-30. First author: Saeed Reza Kheradpisheh. Link.
Mailing date: 2022-08-02

515. Text Classification in Memristor-based Spiking Neural Networks

  • Text Classification in Memristor-based Spiking Neural Networks. Date: 2022-07-27. First author: Jinqi Huang. Link.

Abstract: Memristors, emerging non-volatile memory devices, have shown promising potential in neuromorphic hardware designs, especially in spiking neural network (SNN) hardware implementation. Memristor-based SNNs have been successfully applied in a wide range of various applications, including image classification and pattern recognition. However, implementing memristor-based SNNs in text classification is still under exploration. One of the main reasons is that training memristor-based SNNs for text classification is costly due to the lack of efficient learning rules and memristor non-idealities. To address these issues and accelerate the research of exploring memristor-based spiking neural networks in text classification applications, we develop a simulation framework with a virtual memristor array using an empirical memristor model. We use this framework to demonstrate a sentiment analysis task in the IMDB movie reviews dataset. We take two approaches to obtain trained spiking neural networks with memristor models: 1) by converting a pre-trained artificial neural network (ANN) to a memristor-based SNN, or 2) by training a memristor-based SNN directly. These two approaches can be applied in two scenarios: offline classification and online training. We achieve the classification accuracy of 85.88% by converting a pre-trained ANN to a memristor-based SNN and 84.86% by training the memristor-based SNN directly, given that the baseline training accuracy of the equivalent ANN is 86.02%. We conclude that it is possible to achieve similar classification accuracy in simulation from ANNs to SNNs and from non-memristive synapses to data-driven memristive synapses. We also investigate how global parameters such as spike train length, the read noise, and the weight updating stop conditions affect the neural networks in both approaches.
Comments: 23 pages, 5 figures
Mailing date: 2022-07-29

514. Towards the Neuroevolution of Low-level Artificial General Intelligence

  • Towards the Neuroevolution of Low-level Artificial General Intelligence. Date: 2022-07-27. First author: Sidney Pontes-Filho. Link.

Abstract: In this work, we argue that the search for Artificial General Intelligence (AGI) should start from a much lower level than human-level intelligence. The circumstances of intelligent behavior in nature resulted from an organism interacting with its surrounding environment, which could change over time and exert pressure on the organism to allow for learning of new behaviors or environment models. Our hypothesis is that learning occurs through interpreting sensory feedback when an agent acts in an environment. For that to happen, a body and a reactive environment are needed. We evaluate a method to evolve a biologically-inspired artificial neural network that learns from environment reactions named Neuroevolution of Artificial General Intelligence (NAGI), a framework for low-level AGI. This method allows the evolutionary complexification of a randomly-initialized spiking neural network with adaptive synapses, which controls agents instantiated in mutable environments. Such a configuration allows us to benchmark the adaptivity and generality of the controllers. The chosen tasks in the mutable environments are food foraging, emulation of logic gates, and cart-pole balancing. The three tasks are successfully solved with rather small network topologies and therefore it opens up the possibility of experimenting with more complex tasks and scenarios where curriculum learning is beneficial.
Comments: 18 pages, 14 figures. MSC-class: 68T05; ACM-class: I.2.6
Mailing date: 2022-07-28

513. SPAIC: A Spike-based Artificial Intelligence Computing Framework

  • SPAIC: A Spike-based Artificial Intelligence Computing Framework. Date: 2022-07-26. First author: Chaofei Hong. Link.

Abstract: Neuromorphic computing is an emerging research field that aims to develop new intelligent systems by integrating theories and technologies from multi-disciplines such as neuroscience and deep learning. Currently, there have been various software frameworks developed for the related fields, but there is a lack of an efficient framework dedicated for spike-based computing models and algorithms. In this work, we present a Python based spiking neural network (SNN) simulation and training framework, aka SPAIC that aims to support brain-inspired model and algorithm researches integrated with features from both deep learning and neuroscience. To integrate different methodologies from the two overwhelming disciplines, and balance between flexibility and efficiency, SPAIC is designed with neuroscience-style frontend and deep learning backend structure. We provide a wide range of examples including neural circuits Simulation, deep SNN learning and neuromorphic applications, demonstrating the concise coding style and wide usability of our framework. The SPAIC is a dedicated spike-based artificial intelligence computing platform, which will significantly facilitate the design, prototype and validation of new models, theories and applications. Being user-friendly, flexible and high-performance, it will help accelerate the rapid growth and wide applicability of neuromorphic computing research.
Comments: This paper has been submitted to IEEE Computational Intelligence Magazine
Mailing date: 2022-07-27

512. Static Hand Gesture Recognition for American Sign Language using Neuromorphic Hardware

  • Static Hand Gesture Recognition for American Sign Language using Neuromorphic Hardware. Date: 2022-07-25. First author: MohammedReza Mohammadi. Link.

Abstract: In this paper, we develop four spiking neural network (SNN) models for two static American Sign Language (ASL) hand gesture classification tasks, i.e., the ASL Alphabet and ASL Digits. The SNN models are deployed on Intel's neuromorphic platform, Loihi, and then compared against equivalent deep neural network (DNN) models deployed on an edge computing device, the Intel Neural Compute Stick 2 (NCS2). We perform a comprehensive comparison between the two systems in terms of accuracy, latency, power consumption, and energy. The best DNN model achieves an accuracy of 99.6% on the ASL Alphabet dataset, whereas the best performing SNN model has an accuracy of 99.44%. For the ASL-Digits dataset, the best SNN model outperforms all of its DNN counterparts with 99.52% accuracy. Moreover, our obtained experimental results show that the Loihi neuromorphic hardware implementations achieve up to 14.67x and 4.09x reduction in power consumption and energy, respectively, when compared to NCS2.
Comments: Authors MohammedReza Mohammadi and Peyton Chandarana contributed equally
Mailing date: 2022-07-27

511. Modeling Associative Plasticity between Synapses to Enhance Learning of Spiking Neural Networks

  • Modeling Associative Plasticity between Synapses to Enhance Learning of Spiking Neural Networks. Date: 2022-07-24. First author: Haibo Shen. Link.

Abstract: Spiking Neural Networks (SNNs) are the third generation of artificial neural networks that enable energy-efficient implementation on neuromorphic hardware. However, the discrete transmission of spikes brings significant challenges to the robust and high-performance learning mechanism. Most existing works focus solely on learning between neurons but ignore the influence between synapses, resulting in a loss of robustness and accuracy. To address this problem, we propose a robust and effective learning mechanism by modeling the associative plasticity between synapses (APBS) observed from the physiological phenomenon of associative long-term potentiation (ALTP). With the proposed APBS method, synapses of the same neuron interact through a shared factor when concurrently stimulated by other neurons. In addition, we propose a spatiotemporal cropping and flipping (STCF) method to improve the generalization ability of our network. Extensive experiments demonstrate that our approaches achieve superior performance on static CIFAR-10 datasets and state-of-the-art performance on neuromorphic MNIST-DVS, CIFAR10-DVS datasets by a lightweight convolution network. To our best knowledge, this is the first time to explore a learning method between synapses and an extended approach for neuromorphic data.
Comments: Submitted to IJCAI 2022, rejected
Mailing date: 2022-07-26

510. Event-Driven Tactile Learning with Location Spiking Neurons

  • Event-Driven Tactile Learning with Location Spiking Neurons. Date: 2022-07-23. First author: Peng Kang. Link.

Abstract: The sense of touch is essential for a variety of daily tasks. New advances in event-based tactile sensors and Spiking Neural Networks (SNNs) spur the research in event-driven tactile learning. However, SNN-enabled event-driven tactile learning is still in its infancy due to the limited representative abilities of existing spiking neurons and high spatio-temporal complexity in the data. In this paper, to improve the representative capabilities of existing spiking neurons, we propose a novel neuron model called "location spiking neuron", which enables us to extract features of event-based data in a novel way. Moreover, based on the classical Time Spike Response Model (TSRM), we develop a specific location spiking neuron model - Location Spike Response Model (LSRM) that serves as a new building block of SNNs. Furthermore, we propose a hybrid model which combines an SNN with TSRM neurons and an SNN with LSRM neurons to capture the complex spatio-temporal dependencies in the data. Extensive experiments demonstrate the significant improvements of our models over other works on event-driven tactile learning and show the superior energy efficiency of our models and location spiking neurons, which may unlock their potential on neuromorphic hardware.
Comments: Accepted by IJCNN 2022 (oral); the source code is available at https://github.com/pkang2017/TactileLocNeurons
Mailing date: 2022-09-05

509. NeuroHSMD: Neuromorphic Hybrid Spiking Motion Detector

  • NeuroHSMD: Neuromorphic Hybrid Spiking Motion Detector. Date: 2022-07-22. First author: Pedro Machado. Link.
Mailing date: 2022-07-25

508. Exploring Lottery Ticket Hypothesis in Spiking Neural Networks

  • Exploring Lottery Ticket Hypothesis in Spiking Neural Networks. Date: 2022-07-20. First author: Youngeun Kim. Link.
Comments: Accepted to European Conference on Computer Vision (ECCV) 2022
Mailing date: 2022-07-22

507. Neural Architecture Search for Spiking Neural Networks

  • Neural Architecture Search for Spiking Neural Networks. Date: 2022-07-20. First author: Youngeun Kim. Link.
Comments: Accepted to European Conference on Computer Vision (ECCV) 2022
Mailing date: 2022-07-22

506. A Temporally and Spatially Local Spike-based Backpropagation Algorithm to Enable Training in Hardware

  • A Temporally and Spatially Local Spike-based Backpropagation Algorithm to Enable Training in Hardware. Date: 2022-07-20. First author: Anmol Biswas. Link.

Abstract: Spiking Neural Networks (SNNs) have emerged as a hardware efficient architecture for classification tasks. The penalty of spikes-based encoding has been the lack of a universal training mechanism performed entirely using spikes. There have been several attempts to adopt the powerful backpropagation (BP) technique used in non-spiking artificial neural networks (ANN): (1) SNNs can be trained by externally computed numerical gradients. (2) A major advancement toward native spike-based learning has been the use of approximate Backpropagation using spike-time-dependent plasticity (STDP) with phased forward/backward passes. However, the transfer of information between such phases necessitates external memory and computational access. This is a challenge for neuromorphic hardware implementations. In this paper, we propose a stochastic SNN-based Back-Prop (SSNN-BP) algorithm that utilizes a composite neuron to simultaneously compute the forward pass activations and backward pass gradients explicitly with spikes. Although signed gradient values are a challenge for spike-based representation, we tackle this by splitting the gradient signal into positive and negative streams. The composite neuron encodes information in the form of stochastic spike-trains and converts Backpropagation weight updates into temporally and spatially local discrete STDP-like spike coincidence updates compatible with hardware-friendly Resistive Processing Units (RPUs). Furthermore, our method approaches BP ANN baseline with sufficiently long spike-trains. Finally, we show that softmax cross-entropy loss function can be implemented through inhibitory lateral connections enforcing a Winner Take All (WTA) rule. Our SNN shows excellent generalization through comparable performance to ANNs on the MNIST, Fashion-MNIST and Extended MNIST datasets. Thus, SSNN-BP enables BP compatible with purely spike-based neuromorphic hardware.
Mailing date: 2022-07-21
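
The gradient-splitting trick described above, representing a signed value as a positive and a negative stochastic spike stream, can be sketched as follows. The rate coding used here is an illustrative assumption, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_signed(value, n_steps=2000, scale=1.0):
    """Encode a signed gradient as two stochastic spike trains
    (a positive and a negative stream), as SSNN-BP splits the signal.
    The Bernoulli rate coding is illustrative, not the paper's scheme."""
    p = min(abs(value) / scale, 1.0)
    pos = rng.random(n_steps) < (p if value > 0 else 0.0)
    neg = rng.random(n_steps) < (p if value < 0 else 0.0)
    return pos, neg

def decode(pos, neg, scale=1.0):
    """Recover the signed value from the two firing rates."""
    return scale * (pos.mean() - neg.mean())

pos, neg = encode_signed(-0.3)
estimate = decode(pos, neg)
```

Longer spike trains reduce the variance of the rate estimate, which matches the abstract's observation that the method approaches the BP baseline with sufficiently long spike trains.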

505. Neuromorphic Data Augmentation for Training Spiking Neural Networks

  • Neuromorphic Data Augmentation for Training Spiking Neural Networks. Date: 2022-07-20. First author: Yuhang Li. Link.
Comments: Accepted to the 17th European Conference on Computer Vision (ECCV 2022)
Mailing date: 2022-07-21

504. The Brain-Inspired Decoder for Natural Visual Image Reconstruction

  • The Brain-Inspired Decoder for Natural Visual Image Reconstruction. Date: 2022-07-18. First author: Wenyi Li. Link.

Abstract: Decoding images from brain activity has been a challenge. Owing to the development of deep learning, there are available tools to solve this problem. The decoded image aims to map neural spike trains to low-level visual features and a high-level semantic information space. Recently, there are a few studies of decoding from spike trains; however, these studies pay less attention to the foundations of neuroscience, and there are few studies that merged receptive field into visual image reconstruction. In this paper, we propose a deep learning neural network architecture with biological properties to reconstruct visual image from spike trains. As far as we know, we implemented a method that integrated receptive field property matrix into loss function at the first time. Our model is an end-to-end decoder from neural spike trains to images. We not only merged Gabor filter into auto-encoder which used to generate images but also proposed a loss function with receptive field properties. We evaluated our decoder on two datasets which contain macaque primary visual cortex neural spikes and salamander retina ganglion cells (RGCs) spikes. Our results show that our method can effectively combine receptive field features to reconstruct images, providing a new approach to visual reconstruction based on neural information.
Mailing date: 2022-07-19

503. BrainCog: A Spiking Neural Network based Brain-inspired Cognitive Intelligence Engine for Brain-inspired AI and Brain Simulation

  • BrainCog: A Spiking Neural Network based Brain-inspired Cognitive Intelligence Engine for Brain-inspired AI and Brain Simulation. Date: 2022-07-18. First author: Yi Zeng. Link.

Abstract: Spiking neural networks (SNNs) have attracted extensive attentions in Brain-inspired Artificial Intelligence and computational neuroscience. They can be used to simulate biological information processing in the brain at multiple scales. More importantly, SNNs serve as an appropriate level of abstraction to bring inspirations from brain and cognition to Artificial Intelligence. In this paper, we present the Brain-inspired Cognitive Intelligence Engine (BrainCog) for creating brain-inspired AI and brain simulation models. BrainCog incorporates different types of spiking neuron models, learning rules, brain areas, etc., as essential modules provided by the platform. Based on these easy-to-use modules, BrainCog supports various brain-inspired cognitive functions, including Perception and Learning, Decision Making, Knowledge Representation and Reasoning, Motor Control, and Social Cognition. These brain-inspired AI models have been effectively validated on various supervised, unsupervised, and reinforcement learning tasks, and they can be used to enable AI models to be with multiple brain-inspired cognitive functions. For brain simulation, BrainCog realizes the function simulation of decision-making, working memory, the structure simulation of the Neural Circuit, and whole brain structure simulation of Mouse brain, Macaque brain, and Human brain. An AI engine named BORN is developed based on BrainCog, and it demonstrates how the components of BrainCog can be integrated and used to build AI models and applications. To enable the scientific quest to decode the nature of biological intelligence and create AI, BrainCog aims to provide essential and easy-to-use building blocks, and infrastructural support to develop brain-inspired spiking neural network based AI, and to simulate the cognitive brains at multiple scales. The online repository of BrainCog can be found at https://github.com/braincog-x.
Mailing date: 2022-07-19

502. Efficient spike encoding algorithms for neuromorphic speech recognition

  • Efficient spike encoding algorithms for neuromorphic speech recognition. Date: 2022-07-14. First author: Sidi Yaya Arnaud Yarga. Link.

Abstract: Spiking Neural Networks (SNN) are known to be very effective for neuromorphic processor implementations, achieving orders of magnitude improvements in energy efficiency and computational latency over traditional deep learning approaches. Comparable algorithmic performance was recently made possible as well with the adaptation of supervised training algorithms to the context of SNN. However, information including audio, video, and other sensor-derived data are typically encoded as real-valued signals that are not well-suited to SNN, preventing the network from leveraging spike timing information. Efficient encoding from real-valued signals to spikes is therefore critical and significantly impacts the performance of the overall system. To efficiently encode signals into spikes, both the preservation of information relevant to the task at hand as well as the density of the encoded spikes must be considered. In this paper, we study four spike encoding methods in the context of a speaker independent digit classification system: Send on Delta, Time to First Spike, Leaky Integrate and Fire Neuron and Bens Spiker Algorithm. We first show that all encoding methods yield higher classification accuracy using significantly fewer spikes when encoding a bio-inspired cochleagram as opposed to a traditional short-time Fourier transform. We then show that two Send On Delta variants result in classification results comparable with a state of the art deep convolutional neural network baseline, while simultaneously reducing the encoded bit rate. Finally, we show that several encoding methods result in improved performance over the conventional deep learning baseline in certain cases, further demonstrating the power of spike encoding algorithms in the encoding of real-valued signals and that neuromorphic implementation has the potential to outperform state of the art techniques.
Comments: Accepted to International Conference on Neuromorphic Systems (ICONS 2022). DOI: 10.1145/3546790.3546803
Mailing date: 2022-07-15
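
Two of the four encoders compared above, Send on Delta and Time to First Spike, are simple enough to sketch directly. These are generic textbook forms of the encoders, not the paper's implementations; the `delta` and `t_max` parameters are assumptions.

```python
import numpy as np

def send_on_delta(signal, delta=0.1):
    """Emit a +1/-1 event each time the signal moves more than `delta`
    away from the last event level (generic Send-on-Delta form)."""
    events, level = [], signal[0]
    for s in signal[1:]:
        while s - level >= delta:
            level += delta
            events.append(+1)
        while level - s >= delta:
            level -= delta
            events.append(-1)
    return events

def time_to_first_spike(values, t_max=100):
    """Latency coding: larger values spike earlier. Returns integer
    spike times in [0, t_max] after min-max normalization."""
    v = np.asarray(values, dtype=float)
    v = (v - v.min()) / (np.ptp(v) + 1e-12)
    return np.round((1.0 - v) * t_max).astype(int)

sod = send_on_delta([0.0, 0.05, 0.35, 0.2], delta=0.1)
ttfs = time_to_first_spike([0.1, 0.9, 0.5])
```

Send on Delta only fires when the signal changes, which is why it produces sparse trains (and a low encoded bit rate) on slowly varying cochleagram channels, while Time to First Spike concentrates all the information of a frame into spike latencies.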

501. A Macrocolumn Architecture Implemented with Temporal (Spiking) Neurons

  • A Macrocolumn Architecture Implemented with Temporal (Spiking) Neurons. Date: 2022-07-11. First author: James E. Smith. Link.

Abstract: With the long-term goal of reverse-architecting the computational brain from the bottom up, the focus of this document is the macrocolumn abstraction layer. A basic macrocolumn architecture is developed by first describing its operation with a state machine model. Then state machine functions are implemented with spiking neurons that support temporal computation. The neuron model is based on active spiking dendrites and mirrors the Hawkins/Numenta neuron model. The architecture is demonstrated with a research benchmark in which an agent uses a macrocolumn to first learn and then navigate 2-d environments containing randomly placed features. Environments are represented in the macrocolumn as labeled directed graphs where edges connect features and labels indicate the relative displacements between them.
Mailing date: 2022-07-13

500、用于常识知识表示和推理的脑激励图形脉冲神经网络

  • Brain-inspired Graph Spiking Neural Networks for Commonsense Knowledge Representation and Reasoning 时间:2022年07月11日 第一作者:Hongjian Fang 链接.

摘要:人脑中的神经网络如何代表常识知识,并完成相关推理任务,是神经科学、认知科学、心理学和人工智能领域的重要研究课题。尽管使用固定长度向量表示符号的传统人工神经网络在某些特定任务中取得了良好的性能,但它仍然是一个缺乏可解释性的黑盒子,与人类如何感知世界相去甚远。受神经科学中祖母细胞假说的启发,这项工作研究了群体编码和脉冲时间依赖性可塑性(STDP)机制如何整合到脉冲神经网络的学习中,以及神经元群体如何通过引导不同神经元群体之间的顺序放电完成来表示符号。不同群体的神经元群体共同构成了整个常识知识图,形成了一个巨大的图脉冲神经网络。此外,我们和我

英文摘要 How neural networks in the human brain represent commonsense knowledge, and complete related reasoning tasks is an important research topic in neuroscience, cognitive science, psychology, and artificial intelligence. Although the traditional artificial neural network using fixed-length vectors to represent symbols has gained good performance in some specific tasks, it is still a black box that lacks interpretability, far from how humans perceive the world. Inspired by the grandmother-cell hypothesis in neuroscience, this work investigates how population encoding and spiking timing-dependent plasticity (STDP) mechanisms can be integrated into the learning of spiking neural networks, and how a population of neurons can represent a symbol via guiding the completion of sequential firing between different neuron populations. The neuron populations of different communities together constitute the entire commonsense knowledge graph, forming a giant graph spiking neural network. Moreover, we introduced the Reward-modulated spiking timing-dependent plasticity (R-STDP) mechanism to simulate the biological reinforcement learning process and completed the related reasoning tasks accordingly, achieving comparable accuracy and faster convergence speed than the graph convolutional artificial neural networks. For the fields of neuroscience and cognitive science, the work in this paper provided the foundation of computational modeling for further exploration of the way the human brain represents commonsense knowledge. For the field of artificial intelligence, this paper indicated the exploration direction for realizing a more robust and interpretable neural network by constructing a commonsense knowledge representation and reasoning spiking neural networks with solid biological plausibility.
邮件日期 2022年07月13日
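The entry above relies on reward-modulated STDP (R-STDP) for its reasoning tasks. As a rough, generic illustration of the idea (not the paper's model; all names and constants below are made up), a three-factor rule accumulates pre/post coincidences in a decaying eligibility trace and only commits a weight change when a reward signal arrives:

```python
import math

def r_stdp_step(w, eligibility, pre, post, reward,
                a_plus=0.01, a_minus=0.012, tau_e=50.0, lr=0.1, dt=1.0):
    """One step of a toy three-factor (reward-modulated) STDP rule.

    pre/post are 0/1 spike indicators for this time step; the eligibility
    trace decays with time constant tau_e and is bumped by coincidences,
    but the weight only moves when `reward` is non-zero."""
    decay = math.exp(-dt / tau_e)
    eligibility = (eligibility * decay
                   + a_plus * pre * post          # pre and post fired together
                   - a_minus * (1 - pre) * post)  # post fired without pre
    w += lr * reward * eligibility
    return w, eligibility
```

Without reward the trace decays silently; a later reward converts the remembered coincidence into an actual weight change.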

499、BioLCNet:奖励调制局部连接脉冲神经网络

  • BioLCNet: Reward-modulated Locally Connected Spiking Neural Networks 时间:2022年07月07日 第一作者:Hafez Ghaemi 链接.
注释 15 pages, 6 figures ACM-class: I.2.6; I.5.1
邮件日期 2022年07月08日

498、脉冲校准:用于目标检测和分割的脉冲神经网络的快速准确转换

  • Spike Calibration: Fast and Accurate Conversion of Spiking Neural Network for Object Detection and Segmentation 时间:2022年07月06日 第一作者:Yang Li 链接.

摘要:脉冲神经网络(SNN)由于在神经形态硬件上具有高生物合理性和低能耗的特性而备受重视。作为获得深度SNN的有效方法,转换方法已在各种大规模数据集上表现出高性能。然而,它通常遭受严重的性能退化和高时间延迟。特别是,以前的大多数工作集中于简单的分类任务,而忽略了对ANN输出的精确近似。在本文中,我们首先从理论上分析了转换误差,并推导了时变极值对突触电流的有害影响。我们提出了脉冲校准(SpiCalib)以消除离散脉冲对输出分布的损害,并修改LIPooling以允许任意最大池化层的无损转换。此外,我们还提出了针对最佳归一化参数的贝叶斯优化,以避免经验性设置。实验结果表明,该方法在分类、目标检测和分割任务上均达到了最先进的性能。据我们所知,这是首次在这些任务上同时获得与ANN性能相当的SNN。此外,在检测任务上我们仅需先前工作1/50的推理时间,并且在分割任务上能以ANN 0.492倍的能耗达到相同性能。

英文摘要 Spiking neural network (SNN) has been attached to great importance due to the properties of high biological plausibility and low energy consumption on neuromorphic hardware. As an efficient method to obtain deep SNN, the conversion method has exhibited high performance on various large-scale datasets. However, it typically suffers from severe performance degradation and high time delays. In particular, most of the previous work focuses on simple classification tasks while ignoring the precise approximation to ANN output. In this paper, we first theoretically analyze the conversion errors and derive the harmful effects of time-varying extremes on synaptic currents. We propose the Spike Calibration (SpiCalib) to eliminate the damage of discrete spikes to the output distribution and modify the LIPooling to allow conversion of the arbitrary MaxPooling layer losslessly. Moreover, Bayesian optimization for optimal normalization parameters is proposed to avoid empirical settings. The experimental results demonstrate the state-of-the-art performance on classification, object detection, and segmentation tasks. To the best of our knowledge, this is the first time to obtain SNN comparable to ANN on these tasks simultaneously. Moreover, we only need 1/50 inference time of the previous work on the detection task and can achieve the same performance under 0.492$\times$ energy consumption of ANN on the segmentation task.
邮件日期 2022年07月07日

497、一种受生物上合理的学习规则和连接启发的无监督脉冲神经网络

  • An Unsupervised Spiking Neural Network Inspired By Biologically Plausible Learning Rules and Connections 时间:2022年07月06日 第一作者:Yiting Dong 链接.

摘要:反向传播算法促进了深度学习的快速发展,但它依赖于大量的标记数据,与人类的学习方式还有很大差距。人脑可以以自组织和无监督的方式快速学习各种概念知识,这是通过协调人脑中的多种学习规则和结构来实现的。脉冲时间依赖可塑性(STDP)是大脑中广泛存在的学习规则,但仅使用STDP训练的脉冲神经网络效率低且性能差。本文受短时突触可塑性的启发,设计了一种自适应突触滤波器,并引入自适应阈值平衡作为神经元可塑性,以丰富SNN的表达能力。我们还引入了自适应侧向抑制连接来动态调整脉冲平衡,以帮助网络学习更丰富的特征。为了加快并稳定无监督脉冲神经网络的训练,我们设计了一种样本时序批量STDP,基于多个样本和多个时刻来更新权重。我们在MNIST和FashionMNIST上进行了实验,取得了当前基于STDP的无监督脉冲神经网络的最先进性能,并且我们的模型在小样本学习中也表现出强大的优势。

英文摘要 The backpropagation algorithm has promoted the rapid development of deep learning, but it relies on a large amount of labeled data, and there is still a large gap with the way the human learns. The human brain can rapidly learn various concept knowledge in a self-organized and unsupervised way, which is accomplished through the coordination of multiple learning rules and structures in the human brain. Spike-timing-dependent plasticity (STDP) is a widespread learning rule in the brain, but spiking neural network trained using STDP alone are inefficient and performs poorly. In this paper, taking inspiration from the short-term synaptic plasticity, we design an adaptive synaptic filter, and we introduce the adaptive threshold balance as the neuron plasticity to enrich the representation ability of SNNs. We also introduce an adaptive lateral inhibitory connection to dynamically adjust the spikes balance to help the network learn richer features. To accelerate and stabilize the training of the unsupervised spiking neural network, we design a sample temporal batch STDP which update the weight based on multiple samples and multiple moments. We have conducted experiments on MNIST and FashionMNIST, and have achieved state-of-the-art performance of the current unsupervised spiking neural network based on STDP. And our model also shows strong superiority in small samples learning.
邮件日期 2022年07月07日
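Entry 497 builds on plain STDP before adding its adaptive mechanisms. For orientation, the standard pairwise exponential STDP window (textbook form; the constants and clipping scheme here are illustrative, not the paper's) can be sketched as:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pairwise STDP: potentiate when the presynaptic spike precedes the
    postsynaptic one, depress otherwise; the weight stays in [w_min, w_max]."""
    dt = t_post - t_pre
    if dt >= 0:                                  # causal pair -> LTP
        w += a_plus * math.exp(-dt / tau_plus)
    else:                                        # anti-causal pair -> LTD
        w -= a_minus * math.exp(dt / tau_minus)
    return min(max(w, w_min), w_max)
```

Repeated causal pairings push a weight toward `w_max`, anti-causal ones toward `w_min`, which is what makes STDP-only training unstable without the balancing mechanisms the paper introduces.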

496、脉冲神经网络的彩票假设

  • Lottery Ticket Hypothesis for Spiking Neural Networks 时间:2022年07月04日 第一作者:Youngeun Kim 链接.

摘要:脉冲神经网络(SNN)最近作为新一代低功耗深度神经网络出现,其中二进制脉冲在多个时间步长上传递信息。当SNN部署在资源受限的移动/边缘设备上时,对SNN的剪枝非常重要。以前的SNN剪枝工作集中于浅层SNN(2~6层),然而,最先进的SNN工作提出了更深的SNN(>16层),这很难与当前的剪枝工作兼容。为了将剪枝技术扩展到深度SNN,我们研究了彩票假设(LTH),该假设指出,密集网络包含较小的子网络(即中奖彩票),其性能与密集网络相当。我们对LTH的研究表明,中奖彩票始终存在于各种数据集和架构的深度SNN中,可提供高达97%的稀疏度而不会出现巨大的性能下降。然而,当与SNN的多个时间步长相结合时,LTH的迭代搜索过程会带来巨大的训练计算成本。为了减轻这种繁重的搜索成本,我们提出了早期时间(ET)彩票,即从较少的时间步长中寻找重要的权重连接。所提出的ET彩票可以与迭代幅度剪枝(IMP)和早鸟(EB)彩票等常见的中奖彩票搜索剪枝技术无缝结合。实验结果表明,与IMP或EB方法相比,所提出的ET彩票可将搜索时间减少高达38%。

英文摘要 Spiking Neural Networks (SNNs) have recently emerged as a new generation of low-power deep neural networks where binary spikes convey information across multiple timesteps. Pruning for SNNs is highly important as they become deployed on a resource-constraint mobile/edge device. The previous SNN pruning works focus on shallow SNNs (2~6 layers), however, deeper SNNs (>16 layers) are proposed by state-of-the-art SNN works, which is difficult to be compatible with the current pruning work. To scale up a pruning technique toward deep SNNs, we investigate Lottery Ticket Hypothesis (LTH) which states that dense networks contain smaller subnetworks (i.e., winning tickets) that achieve comparable performance to the dense networks. Our studies on LTH reveal that the winning tickets consistently exist in deep SNNs across various datasets and architectures, providing up to 97% sparsity without huge performance degradation. However, the iterative searching process of LTH brings a huge training computational cost when combined with the multiple timesteps of SNNs. To alleviate such heavy searching cost, we propose Early-Time (ET) ticket where we find the important weight connectivity from a smaller number of timesteps. The proposed ET ticket can be seamlessly combined with common pruning techniques for finding winning tickets, such as Iterative Magnitude Pruning (IMP) and Early-Bird (EB) tickets. Our experiment results show that the proposed ET ticket reduces search time by up to 38% compared to IMP or EB methods.
注释 Accepted to European Conference on Computer Vision (ECCV) 2022
邮件日期 2022年07月05日
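Entry 496 searches for winning tickets with Iterative Magnitude Pruning (IMP). One IMP round, removing the smallest-magnitude weights and keeping a binary mask for retraining, might look like this sketch (list-based for clarity; not the authors' code):

```python
def magnitude_prune(weights, sparsity):
    """One IMP round: zero out the smallest-magnitude fraction `sparsity`
    of weights and return (pruned_weights, mask)."""
    n = len(weights)
    k = int(n * sparsity)                          # number of weights to drop
    order = sorted(range(n), key=lambda i: abs(weights[i]))
    drop = set(order[:k])                          # indices of smallest |w|
    mask = [0.0 if i in drop else 1.0 for i in range(n)]
    pruned = [w * m for w, m in zip(weights, mask)]
    return pruned, mask
```

In full IMP the surviving weights are rewound to their initial values and retrained, and the round repeats; the ET ticket of this paper shortcuts the expensive part by scoring connectivity from fewer SNN time steps.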

495、简单和复杂的脉冲神经元:简单STDP场景中的视角与分析

  • Simple and complex spiking neurons: perspectives and analysis in a simple STDP scenario 时间:2022年06月28日 第一作者:Davide Liberato Manna 链接.

摘要:脉冲神经网络(SNN)在很大程度上受到生物学和神经科学的启发,并利用其思想和理论来创建快速高效的学习系统。脉冲神经元模型被用作神经形态系统的核心处理单元,因为它们支持基于事件的处理。通常采用积分-发放(I&F)模型,其中最常用的是简单的泄漏I&F(LIF)模型。采用这类模型的原因是其效率和/或生物学合理性。然而,在人工学习系统中采用LIF而非其他神经元模型的严格理由尚未得到研究。这项工作考察了文献中的各种神经元模型,然后选择了单变量、高效且表现出不同类型复杂性的计算神经元模型。在此选择基础上,我们对三种简单的I&F神经元模型,即LIF、二次I&F(QIF)和指数I&F(EIF),进行了比较研究,以了解使用更复杂的模型是否能提高系统性能,以及神经元模型的选择是否可以由待完成的任务来决定。神经元模型在使用脉冲时间依赖可塑性(STDP)训练的SNN中,在N-MNIST和DVS Gestures数据集的分类任务上进行了测试。实验结果表明,在简单数据集(N-MNIST)上,更复杂的神经元与更简单的神经元同样能达到很高的准确率,尽管需要相对更多的超参数调整;而当数据具有更丰富的时空特征时,QIF和EIF神经元模型能稳定地取得更好的结果。这表明,根据数据特征谱的丰富程度准确选择模型可以提高整个系统的性能。

英文摘要 Spiking neural networks (SNNs) are largely inspired by biology and neuroscience and leverage ideas and theories to create fast and efficient learning systems. Spiking neuron models are adopted as core processing units in neuromorphic systems because they enable event-based processing. The integrate-and-fire (I&F) models are often adopted, with the simple Leaky I&F (LIF) being the most used. The reason for adopting such models is their efficiency and/or biological plausibility. Nevertheless, rigorous justification for adopting LIF over other neuron models for use in artificial learning systems has not yet been studied. This work considers various neuron models in the literature and then selects computational neuron models that are single-variable, efficient, and display different types of complexities. From this selection, we make a comparative study of three simple I&F neuron models, namely the LIF, the Quadratic I&F (QIF) and the Exponential I&F (EIF), to understand whether the use of more complex models increases the performance of the system and whether the choice of a neuron model can be directed by the task to be completed. Neuron models are tested within an SNN trained with Spike-Timing Dependent Plasticity (STDP) on a classification task on the N-MNIST and DVS Gestures datasets. Experimental results reveal that more complex neurons manifest the same ability as simpler ones to achieve high levels of accuracy on a simple dataset (N-MNIST), albeit requiring comparably more hyper-parameter tuning. However, when the data possess richer Spatio-temporal features, the QIF and EIF neuron models steadily achieve better results. This suggests that accurately selecting the model based on the richness of the feature spectrum of the data could improve the whole system's performance. Finally, the code implementing the spiking neurons in the SpykeTorch framework is made publicly available.
邮件日期 2022年07月12日
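Entry 495 compares LIF, QIF and EIF neurons. The simplest of the three, a discretised LIF neuron, can be simulated in a few lines (Euler integration with a hard reset; all parameter values are illustrative):

```python
def simulate_lif(input_current, tau=10.0, v_rest=0.0, v_th=1.0,
                 v_reset=0.0, dt=1.0):
    """Euler-discretised leaky integrate-and-fire neuron.

    Returns the list of time steps at which the neuron spiked."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        v += (dt / tau) * (-(v - v_rest) + i_in)  # leaky integration
        if v >= v_th:                             # threshold crossing
            spikes.append(t)
            v = v_reset                           # hard reset
    return spikes
```

QIF and EIF replace the linear leak term with a quadratic or exponential one, which is exactly the extra complexity the paper evaluates against task richness.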

494、脉冲神经网络的结构稳定性

  • Structural Stability of Spiking Neural Networks 时间:2022年06月21日 第一作者:G. Zhang 链接.

摘要:在过去的几十年中,由于对时间相关数据建模的巨大潜力,人们对脉冲神经网络(SNN)越来越感兴趣。已经开发了许多算法和技术;然而,对脉冲神经网络的许多方面的理论理解仍然模糊。最近的一项研究[Zhang等人,2021]揭示,由于其分叉动力学,典型SNN很难承受内部和外部扰动,并建议必须添加自连接。在本文中,我们研究了具有自连接的SNN的理论性质,并通过指定最大分岔解数的下界和上界来深入分析结构稳定性。在模拟和实际任务上进行的数值实验证明了所提出结果的有效性。

英文摘要 The past decades have witnessed an increasing interest in spiking neural networks (SNNs) due to their great potential of modeling time-dependent data. Many algorithms and techniques have been developed; however, theoretical understandings of many aspects of spiking neural networks are still cloudy. A recent work [Zhang et al. 2021] disclosed that typical SNNs could hardly withstand both internal and external perturbations due to their bifurcation dynamics and suggested that self-connection has to be added. In this paper, we investigate the theoretical properties of SNNs with self-connection, and develop an in-depth analysis on structural stability by specifying the lower and upper bounds of the maximum number of bifurcation solutions. Numerical experiments conducted on simulation and practical tasks demonstrate the effectiveness of the proposed results.
邮件日期 2022年07月12日

493、基于线性泄漏积分-发放神经元模型的脉冲神经网络及其与深度神经网络的映射关系

  • Linear Leaky-Integrate-and-Fire Neuron Model Based Spiking Neural Networks and Its Mapping Relationship to Deep Neural Networks 时间:2022年05月31日 第一作者:Sijia Lu 链接.

摘要:脉冲神经网络(SNN)是一种受大脑启发的机器学习算法,具有生物学合理性和无监督学习能力等优点。先前的工作已经表明,将人工神经网络(ANN)转换为SNN是实现SNN的实用而有效的方法。然而,训练无精度损失的SNN仍缺乏基本原理和理论基础。本文建立了线性泄漏积分-发放(LIF)模型/SNN的生物参数与ReLU-AN/深度神经网络(DNN)参数之间的精确数学映射。这种映射关系在一定条件下得到了解析证明,并通过仿真和真实数据实验进行了验证。它可以作为结合两类神经网络各自优点的理论基础。

英文摘要 Spiking neural networks (SNNs) are brain-inspired machine learning algorithms with merits such as biological plausibility and unsupervised learning capability. Previous works have shown that converting Artificial Neural Networks (ANNs) into SNNs is a practical and efficient approach for implementing an SNN. However, the basic principle and theoretical groundwork are lacking for training a non-accuracy-loss SNN. This paper establishes a precise mathematical mapping between the biological parameters of the Linear Leaky-Integrate-and-Fire model (LIF)/SNNs and the parameters of ReLU-AN/Deep Neural Networks (DNNs). Such mapping relationship is analytically proven under certain conditions and demonstrated by simulation and real data experiments. It can serve as the theoretical basis for the potential combination of the respective merits of the two categories of neural networks.
邮件日期 2022年07月12日
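Entry 493 maps LIF/SNN parameters onto ReLU/DNN parameters. The intuition behind such mappings, hedged here as a generic rate-coding view rather than the paper's exact derivation, is that the firing rate of a (non-leaky) integrate-and-fire neuron under constant input approximates a ReLU of that input:

```python
def if_rate(x, v_th=1.0, steps=1000):
    """Firing rate of a non-leaky integrate-and-fire neuron driven by a
    constant input x, measured over `steps` simulation steps."""
    v, n_spikes = 0.0, 0
    for _ in range(steps):
        v += x
        if v >= v_th:
            n_spikes += 1
            v -= v_th        # reset-by-subtraction keeps the remainder
    return n_spikes / steps

def relu(x):
    return max(0.0, x)
```

For 0 <= x <= v_th the rate converges to x / v_th as `steps` grows, and negative inputs never fire, mirroring ReLU's zero branch; the paper's contribution is making the LIF-parameter version of this correspondence exact under stated conditions.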

492、为什么医疗保健需要可解释的人工智能

  • Why we do need Explainable AI for Healthcare 时间:2022年06月30日 第一作者:Giovanni Cinà 链接.

摘要:最近,用于医疗保健的人工智能(AI)认证工具激增,重新引发了关于采用这项技术的辩论。这种争论的一个线索涉及可解释的人工智能及其使人工智能设备更透明和更可信的承诺。一些活跃在医学人工智能领域的声音对可解释人工智能技术的可靠性表示担忧,质疑其使用和纳入指南和标准。回顾这些批评,本文就可解释人工智能的效用提供了一个平衡和全面的视角,重点关注人工智能临床应用的特殊性,并将其放在医疗干预的背景下。针对其批评者,尽管存在合理的担忧,我们认为可解释的人工智能研究项目仍然是人机交互的核心,最终是我们防止失控的主要工具,这种危险仅靠严格的临床验证是无法预防的。

英文摘要 The recent spike in certified Artificial Intelligence (AI) tools for healthcare has renewed the debate around adoption of this technology. One thread of such debate concerns Explainable AI and its promise to render AI devices more transparent and trustworthy. A few voices active in the medical AI space have expressed concerns on the reliability of Explainable AI techniques, questioning their use and inclusion in guidelines and standards. Revisiting such criticisms, this article offers a balanced and comprehensive perspective on the utility of Explainable AI, focusing on the specificity of clinical applications of AI and placing them in the context of healthcare interventions. Against its detractors and despite valid concerns, we argue that the Explainable AI research program is still central to human-machine interaction and ultimately our main tool against loss of control, a danger that cannot be prevented by rigorous clinical validation alone.
邮件日期 2022年07月01日

491、CIRDataset:用于临床可解释的肺结节放射组学和恶性肿瘤预测的大规模数据集

  • CIRDataset: A large-scale Dataset for Clinically-Interpretable lung nodule Radiomics and malignancy prediction 时间:2022年06月29日 第一作者:Wookjin Choi 链接.

摘要:毛刺/分叶,即肺结节表面尖锐/弯曲的突起,是肺癌恶性的良好预测因子,因此放射科医生会作为标准化Lung-RADS临床评分标准的一部分对其进行常规评估和报告。考虑到结节的三维几何形状和放射科医生逐层的二维评估,手动毛刺/分叶标注是一项繁琐的任务,因此迄今为止还没有公共数据集可用于探讨这些临床报告特征在SOTA恶性肿瘤预测算法中的重要性。作为本文的一部分,我们发布了一个大规模临床可解释放射组学数据集CIRDataset,其中包含来自两个公共数据集LIDC-IDRI(N=883)和LUNGx(N=73)的956个经放射科医生QA/QC的分割肺结节毛刺/分叶标注。我们还提出了一种基于多类Voxel2Mesh扩展的端到端深度学习模型,用于分割结节(同时保留表面突起)、对突起进行分类(尖锐/毛刺和弯曲/分叶)并进行恶性肿瘤预测。以前的方法已经对LIDC和LUNGx数据集进行了恶性肿瘤预测,但没有对任何临床报告/可操作的特征进行可靠归因(由于一般归因方案存在已知的超参数敏感性问题)。随着这一全面标注的CIRDataset和端到端深度学习基线的发布,我们希望恶性肿瘤预测方法能够验证其解释,对照我们的基线进行基准测试,并提供临床可操作的见解。数据集、代码、预训练模型和docker容器可在 https://github.com/nadeemlab/CIR 获取。

英文摘要 Spiculations/lobulations, sharp/curved spikes on the surface of lung nodules, are good predictors of lung cancer malignancy and hence, are routinely assessed and reported by radiologists as part of the standardized Lung-RADS clinical scoring criteria. Given the 3D geometry of the nodule and 2D slice-by-slice assessment by radiologists, manual spiculation/lobulation annotation is a tedious task and thus no public datasets exist to date for probing the importance of these clinically-reported features in the SOTA malignancy prediction algorithms. As part of this paper, we release a large-scale Clinically-Interpretable Radiomics Dataset, CIRDataset, containing 956 radiologist QA/QC'ed spiculation/lobulation annotations on segmented lung nodules from two public datasets, LIDC-IDRI (N=883) and LUNGx (N=73). We also present an end-to-end deep learning model based on multi-class Voxel2Mesh extension to segment nodules (while preserving spikes), classify spikes (sharp/spiculation and curved/lobulation), and perform malignancy prediction. Previous methods have performed malignancy prediction for LIDC and LUNGx datasets but without robust attribution to any clinically reported/actionable features (due to known hyperparameter sensitivity issues with general attribution schemes). With the release of this comprehensively-annotated CIRDataset and end-to-end deep learning baseline, we hope that malignancy prediction methods can validate their explanations, benchmark against our baseline, and provide clinically-actionable insights. Dataset, code, pretrained models, and docker containers are available at https://github.com/nadeemlab/CIR.
注释 MICCAI 2022
邮件日期 2022年07月01日

490、RISP的理由:一种精简指令脉冲处理器

  • The Case for RISP: A Reduced Instruction Spiking Processor 时间:2022年06月28日 第一作者:James S. Plank 链接.

摘要:本文介绍了精简指令脉冲处理器RISP。虽然大多数脉冲神经处理器基于大脑或源自大脑的概念,但我们为一种化繁为简的脉冲处理器提出了理由。因此,它具有离散积分周期、可配置泄漏等特性,除此之外几乎别无其他。我们提出了RISP的计算模型,并强调了其简单性的优点。我们演示了它如何帮助开发用于简单计算任务的手工构建的神经网络,详细介绍了如何使用它来简化使用更复杂的机器学习技术构建的神经网络,并演示了它的性能与其他脉冲神经处理器相当。

英文摘要 In this paper, we introduce RISP, a reduced instruction spiking processor. While most spiking neuroprocessors are based on the brain, or notions from the brain, we present the case for a spiking processor that simplifies rather than complicates. As such, it features discrete integration cycles, configurable leak, and little else. We present the computing model of RISP and highlight the benefits of its simplicity. We demonstrate how it aids in developing hand built neural networks for simple computational tasks, detail how it may be employed to simplify neural networks built with more complicated machine learning techniques, and demonstrate how it performs similarly to other spiking neuroprocessors.
注释 5 pages, 5 figures
邮件日期 2022年06月29日

489、短时可塑性神经元学习和遗忘

  • Short-Term Plasticity Neurons Learning to Learn and Forget 时间:2022年06月28日 第一作者:Hector Garcia Rodriguez 链接.

摘要:短时可塑性(STP)是一种在大脑皮层突触中储存衰减记忆的机制。在计算实践中,尽管理论预测STP是某些动态任务的最优解,它的应用大多局限于脉冲神经元这一细分领域。在这里,我们提出了一种新型的递归神经单元,即STP神经元(STPN),它被证明非常强大。其关键机制是突触具有一种状态,通过突触内部的自循环连接在时间上传播。这种表述使得可以通过时间反向传播来训练可塑性,从而形成一种在短期内学会学习和遗忘的形式。STPN优于所有测试的替代方案,即RNN、LSTM以及其他具有快速权重和可微可塑性的模型。我们在监督学习和强化学习(RL)中,以及联想检索、迷宫探索、雅达利视频游戏和MuJoCo机器人等任务上都证实了这一点。此外,我们计算出,在神经形态或生物电路中,STPN由于能动态抑制单个突触,因而在各模型中能耗最低。基于这些结果,生物STP可能曾是一个同时最大化效率和计算能力的强大进化吸引子。STPN现在也将这些神经形态优势带给了广泛的机器学习实践。代码位于 https://github.com/NeuromorphicComputing/stpn

英文摘要 Short-term plasticity (STP) is a mechanism that stores decaying memories in synapses of the cerebral cortex. In computing practice, STP has been used, but mostly in the niche of spiking neurons, even though theory predicts that it is the optimal solution to certain dynamic tasks. Here we present a new type of recurrent neural unit, the STP Neuron (STPN), which indeed turns out strikingly powerful. Its key mechanism is that synapses have a state, propagated through time by a self-recurrent connection-within-the-synapse. This formulation enables training the plasticity with backpropagation through time, resulting in a form of learning to learn and forget in the short term. The STPN outperforms all tested alternatives, i.e. RNNs, LSTMs, other models with fast weights, and differentiable plasticity. We confirm this in both supervised and reinforcement learning (RL), and in tasks such as Associative Retrieval, Maze Exploration, Atari video games, and MuJoCo robotics. Moreover, we calculate that, in neuromorphic or biological circuits, the STPN minimizes energy consumption across models, as it depresses individual synapses dynamically. Based on these, biological STP may have been a strong evolutionary attractor that maximizes both efficiency and computational power. The STPN now brings these neuromorphic advantages also to a broad spectrum of machine learning practice. Code is available at https://github.com/NeuromorphicComputing/stpn
注释 Accepted at ICML 2022 Journal-ref: Proceedings of the 39th International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022
邮件日期 2022年06月29日
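Entry 489's STPN gives each synapse a state carried through time by a self-recurrent connection inside the synapse. A toy facilitating synapse in that spirit (constants made up, and far simpler than the paper's trainable formulation) shows the mechanism:

```python
def stpn_synapse_step(state, pre, w_fixed, lam=0.9, gamma=0.5):
    """Toy short-term-plasticity synapse: the synaptic state decays by
    factor lam and is bumped by presynaptic activity; the effective
    weight is the fixed weight plus the transient state.

    Returns (postsynaptic drive, new synaptic state)."""
    state = lam * state + gamma * pre     # self-recurrent synaptic state
    effective_w = w_fixed + state
    return effective_w * pre, state
```

Repeated presynaptic activity transiently facilitates the synapse, and silence lets the state decay back, which is the decaying-memory behaviour the abstract describes; in the STPN both lam-like and gamma-like quantities are learned by backpropagation through time.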

488、利用脉冲神经网络中的神经调制突触可塑性学习在线学习

  • Learning to learn online with neuromodulated synaptic plasticity in spiking neural networks 时间:2022年06月28日 第一作者:Samuel Schmidgall 链接.
邮件日期 2022年06月29日

487、脉冲神经网络的能量有效知识提取

  • Energy-efficient Knowledge Distillation for Spiking Neural Networks 时间:2022年06月27日 第一作者:Dongjin Lee 链接.
注释 The manuscript was withdrawn because it contains inappropriate content for posting
邮件日期 2022年06月28日

486、动态RRAM阵列上基于梯度的神经形态学习

  • Gradient-based Neuromorphic Learning on Dynamical RRAM Arrays 时间:2022年06月26日 第一作者:Peng Zhou 链接.

摘要:我们提出了MEMprop,即采用基于梯度的学习来训练全忆阻脉冲神经网络(MSNN)。我们的方法利用器件固有的动力学来触发自然产生的电压脉冲。忆阻动力学发出的这些脉冲本质上是模拟的,因此完全可微,这消除了对脉冲神经网络(SNN)文献中流行的替代梯度方法的需要。忆阻神经网络通常要么将忆阻器集成为映射离线训练网络的突触,要么依赖联想学习机制来训练忆阻神经元网络。相反,我们将时间反向传播(BPTT)训练算法直接应用于忆阻神经元和突触的模拟SPICE模型。我们的实现是全忆阻的,即突触权重和脉冲神经元都集成在电阻式RAM(RRAM)阵列上,而不需要额外的电路(例如模数转换器(ADC)或阈值比较器)来实现脉冲动力学。因此,高阶电生理效应得到充分利用,以便在运行时使用忆阻神经元的状态驱动动力学。通过转向非近似的基于梯度的学习,我们在此前报道的轻量级密集全忆阻SNN中,在多个基准上获得了极具竞争力的准确率。

英文摘要 We present MEMprop, the adoption of gradient-based learning to train fully memristive spiking neural networks (MSNNs). Our approach harnesses intrinsic device dynamics to trigger naturally arising voltage spikes. These spikes emitted by memristive dynamics are analog in nature, and thus fully differentiable, which eliminates the need for surrogate gradient methods that are prevalent in the spiking neural network (SNN) literature. Memristive neural networks typically either integrate memristors as synapses that map offline-trained networks, or otherwise rely on associative learning mechanisms to train networks of memristive neurons. We instead apply the backpropagation through time (BPTT) training algorithm directly on analog SPICE models of memristive neurons and synapses. Our implementation is fully memristive, in that synaptic weights and spiking neurons are both integrated on resistive RAM (RRAM) arrays without the need for additional circuits to implement spiking dynamics, e.g., analog-to-digital converters (ADCs) or thresholded comparators. As a result, higher-order electrophysical effects are fully exploited to use the state-driven dynamics of memristive neurons at run time. By moving towards non-approximate gradient-based learning, we obtain highly competitive accuracy amongst previously reported lightweight dense fully MSNNs on several benchmarks.
邮件日期 2022年06月28日

485、利用脉冲神经网络中的神经调制突触可塑性学习在线学习

  • Learning to learn online with neuromodulated synaptic plasticity in spiking neural networks 时间:2022年06月25日 第一作者:Samuel Schmidgall 链接.

摘要:我们提出,为了利用我们对神经科学的理解来进行机器学习,我们必须首先拥有强大的工具来训练类似大脑的学习模型。虽然在理解大脑学习动态方面取得了实质性进展,但神经科学衍生的学习模型尚未证明与梯度下降等深度学习方法具有相同的性能。受使用梯度下降的机器学习成功的启发,我们证明了神经科学中的神经调制突触可塑性模型可以在脉冲神经网络(SNN)中训练,其框架是通过梯度下降学习,以解决具有挑战性的在线学习问题。该框架为开发神经科学启发的在线学习算法开辟了一条新途径。

英文摘要 We propose that in order to harness our understanding of neuroscience toward machine learning, we must first have powerful tools for training brain-like models of learning. Although substantial progress has been made toward understanding the dynamics of learning in the brain, neuroscience-derived models of learning have yet to demonstrate the same performance capabilities as methods in deep learning such as gradient descent. Inspired by the successes of machine learning using gradient descent, we demonstrate that models of neuromodulated synaptic plasticity from neuroscience can be trained in Spiking Neural Networks (SNNs) with a framework of learning to learn through gradient descent to address challenging online learning problems. This framework opens a new path toward developing neuroscience inspired online learning algorithms.
邮件日期 2022年06月28日

484、使用残差脉冲神经网络进行精确特征提取的关键

  • Keys to Accurate Feature Extraction Using Residual Spiking Neural Networks 时间:2022年06月23日 第一作者:Alex Vicente-Sola 链接.
注释 17 pages, 6 figures, 17 tables ACM-class: I.2.6; I.2.10; I.4.8; I.5.2; D.2.13
邮件日期 2022年06月24日

483、基于垂直腔面发射激光器耦合的共振隧道二极管的人工光电脉冲神经元

  • Artificial optoelectronic spiking neuron based on a resonant tunnelling diode coupled to a vertical cavity surface emitting laser 时间:2022年06月22日 第一作者:Mat\v{e}j Hejda 链接.

摘要:可激发光电子器件是在神经形态(类脑)光子系统中实现人工脉冲神经元的关键构件之一。本文介绍并实验研究了一种光-电-光(O/E/O)人工神经元,该神经元由耦合到光电探测器(作为接收器)的谐振隧穿二极管(RTD)和垂直腔面发射激光器(作为发射器)构建。我们证明了一个定义良好的可激发性阈值,在该阈值之上,该神经元产生100 ns的光学脉冲响应,并具有典型的类神经不应期。我们利用其扇入能力执行器件内重合检测(逻辑AND)和异或(XOR)任务。这些结果首次对具有光学输入和输出(I/O)端口的基于RTD的脉冲光电神经元中的确定性触发和任务执行进行了实验验证。此外,我们还从理论上研究了所提出的系统采用结合纳米尺度RTD元件和纳米激光器的单片设计实现纳米光子集成的前景,从而证明了基于RTD的集成可激发节点在未来神经形态光子硬件中用作小占位面积、高速光电脉冲神经元的潜力。

英文摘要 Excitable optoelectronic devices represent one of the key building blocks for implementation of artificial spiking neurons in neuromorphic (brain-inspired) photonic systems. This work introduces and experimentally investigates an opto-electro-optical (O/E/O) artificial neuron built with a resonant tunnelling diode (RTD) coupled to a photodetector as a receiver and a vertical cavity surface emitting laser as the transmitter. We demonstrate a well defined excitability threshold, above which this neuron produces 100 ns optical spiking responses with characteristic neural-like refractory period. We utilise its fan-in capability to perform in-device coincidence detection (logical AND) and exclusive logical OR (XOR) tasks. These results provide first experimental validation of deterministic triggering and tasks in an RTD-based spiking optoelectronic neuron with both input and output optical (I/O) terminals. Furthermore, we also investigate in theory the prospects of the proposed system for its nanophotonic implementation with a monolithic design combining a nanoscale RTD element and a nanolaser; therefore demonstrating the potential of integrated RTD-based excitable nodes for low footprint, high-speed optoelectronic spiking neurons in future neuromorphic photonic hardware.
注释 5 figures
邮件日期 2022年06月23日

482、使用生物学上合理的脉冲延迟编码和赢家通吃抑制的高效视觉对象表示

  • Efficient visual object representation using a biologically plausible spike-latency code and winner-take-all inhibition 时间:2022年06月22日 第一作者:Melani Sanchez-Garcia 链接.
邮件日期 2022年06月23日

481、TCJA-SNN:脉冲神经网络的时间通道联合注意

  • TCJA-SNN: Temporal-Channel Joint Attention for Spiking Neural Networks 时间:2022年06月21日 第一作者:Rui-Jie Zhu 链接.

摘要:脉冲神经网络(SNN)是一种通过模拟利用时间信息的神经元来实现更具数据效率的深度学习的实用方法。在本文中,我们提出了时间-通道联合注意(TCJA)架构单元,这是一种依赖注意机制的高效SNN技术,它沿空间和时间两个维度有效地增强脉冲序列的相关性。我们的主要技术贡献在于:1)通过压缩操作将脉冲流压缩为平均矩阵,然后使用两种带有高效一维卷积的局部注意机制,以灵活的方式建立用于特征提取的时间维和通道维关系;2)利用交叉卷积融合(CCF)层建模时间范围和通道范围之间的相互依赖关系,打破了两个维度的独立性,实现了特征之间的交互。通过联合探索和重新校准数据流,我们的方法在所有测试的主流静态和神经形态数据集(包括Fashion-MNIST、CIFAR10-DVS、N-Caltech 101和DVS128 Gesture)上,top-1分类准确率最高超出最先进方法(SOTA)15.7%。

英文摘要 Spiking Neural Networks (SNNs) is a practical approach toward more data-efficient deep learning by simulating neurons leverage on temporal information. In this paper, we propose the Temporal-Channel Joint Attention (TCJA) architectural unit, an efficient SNN technique that depends on attention mechanisms, by effectively enforcing the relevance of spike sequence along both spatial and temporal dimensions. Our essential technical contribution lies on: 1) compressing the spike stream into an average matrix by employing the squeeze operation, then using two local attention mechanisms with an efficient 1-D convolution to establish temporal-wise and channel-wise relations for feature extraction in a flexible fashion. 2) utilizing the Cross Convolutional Fusion (CCF) layer for modeling inter-dependencies between temporal and channel scope, which breaks the independence of the two dimensions and realizes the interaction between features. By virtue of jointly exploring and recalibrating data stream, our method outperforms the state-of-the-art (SOTA) by up to 15.7% in terms of top-1 classification accuracy on all tested mainstream static and neuromorphic datasets, including Fashion-MNIST, CIFAR10-DVS, N-Caltech 101, and DVS128 Gesture.
邮件日期 2022年06月22日

480、波动驱动的脉冲神经网络训练初始化

  • Fluctuation-driven initialization for spiking neural network training 时间:2022年06月21日 第一作者:Julian Rossbroich 链接.

摘要:脉冲神经网络(SNN)是大脑中低功耗、容错信息处理的基础,当在合适的神经形态硬件加速器上实现时,可以构成传统深度神经网络的高能效替代方案。然而,在硅中实例化能解决复杂计算任务的SNN仍然是一个重大挑战。替代梯度(SG)技术已成为端到端训练SNN的标准解决方案。然而,与传统人工神经网络(ANN)类似,它们的成功取决于突触权重初始化。与ANN的情况不同的是,什么构成SNN的良好初始状态仍然难以捉摸。在这里,受大脑中常见的波动驱动状态的启发,我们开发了SNN的通用初始化策略。具体来说,我们推导了与数据相关的权重初始化的实用解,以确保广泛使用的泄漏积分-发放(LIF)神经元中的波动驱动放电。我们的实验表明,按照我们的策略初始化的SNN在使用SG训练时表现出更优的学习性能。这些发现可推广到多个数据集和SNN架构,包括全连接、深度卷积、循环以及遵循Dale定律、生物学上更合理的SNN。因此,波动驱动初始化提供了一种实用、通用且易于实现的策略,用于改善神经形态工程和计算神经科学中各种任务上的SNN训练性能。

英文摘要 Spiking neural networks (SNNs) underlie low-power, fault-tolerant information processing in the brain and could constitute a power-efficient alternative to conventional deep neural networks when implemented on suitable neuromorphic hardware accelerators. However, instantiating SNNs that solve complex computational tasks in-silico remains a significant challenge. Surrogate gradient (SG) techniques have emerged as a standard solution for training SNNs end-to-end. Still, their success depends on synaptic weight initialization, similar to conventional artificial neural networks (ANNs). Yet, unlike in the case of ANNs, it remains elusive what constitutes a good initial state for an SNN. Here, we develop a general initialization strategy for SNNs inspired by the fluctuation-driven regime commonly observed in the brain. Specifically, we derive practical solutions for data-dependent weight initialization that ensure fluctuation-driven firing in the widely used leaky integrate-and-fire (LIF) neurons. We empirically show that SNNs initialized following our strategy exhibit superior learning performance when trained with SGs. These findings generalize across several datasets and SNN architectures, including fully connected, deep convolutional, recurrent, and more biologically plausible SNNs obeying Dale's law. Thus fluctuation-driven initialization provides a practical, versatile, and easy-to-implement strategy for improving SNN training performance on diverse tasks in neuromorphic engineering and computational neuroscience.
注释 30 pages, 7 figures, plus supplementary material
邮件日期 2022年06月22日
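Entry 480 initializes weights so that membrane inputs sit in a fluctuation-driven regime. A simplified reading: choose the weight distribution so that, for Bernoulli spike inputs with rate nu, the summed input current has a target mean and standard deviation. The paper's actual solutions are LIF-specific; this sketch and its names are illustrative only:

```python
import math
import random

def fluctuation_driven_init(n_in, nu=0.1, sigma_u=1.0, mu_u=0.0, rng=None):
    """Sample n_in weights so that sum_i w_i * s_i, with s_i ~ Bernoulli(nu),
    has approximately mean mu_u and standard deviation sigma_u."""
    rng = rng or random.Random(0)
    mean_w = mu_u / (n_in * nu)
    # Var(sum w_i s_i) ~= n_in * E[w^2] * nu * (1 - nu); solve for weight variance
    var_w = max(sigma_u ** 2 / (n_in * nu * (1.0 - nu)) - mean_w ** 2, 0.0)
    std_w = math.sqrt(var_w)
    return [rng.gauss(mean_w, std_w) for _ in range(n_in)]
```

With mu_u at zero and sigma_u near the firing threshold, spikes are driven by input fluctuations rather than a constant drive, which is the regime the paper targets.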

479、检验脉冲神经网络对非理想忆阻交叉阵列的鲁棒性

  • Examining the Robustness of Spiking Neural Networks on Non-ideal Memristive Crossbars 时间:2022年06月20日 第一作者:Abhiroop Bhattacharjee 链接.

摘要:由于其异步、稀疏和二进制的信息处理,脉冲神经网络(SNN)最近成为人工神经网络(ANN)的低功耗替代方案。为了提高能效和吞吐量,SNN可以在忆阻交叉阵列上实现,其中乘加(MAC)运算利用新兴的非易失性存储器(NVM)器件在模拟域中实现。尽管SNN与忆阻交叉阵列兼容,但很少有研究关注交叉阵列固有的非理想性和随机性对SNN性能的影响。在本文中,我们对SNN在非理想交叉阵列上的鲁棒性进行了全面分析。我们考察了通过替代梯度和ANN-SNN转换等学习算法训练的SNN。我们的结果表明,跨多个时间步长的重复交叉阵列计算会导致误差累积,造成SNN推理期间的性能大幅下降。我们进一步表明,使用较少时间步长训练的SNN在部署到忆阻交叉阵列上时能获得更好的准确率。

英文摘要 Spiking Neural Networks (SNNs) have recently emerged as the low-power alternative to Artificial Neural Networks (ANNs) owing to their asynchronous, sparse, and binary information processing. To improve the energy-efficiency and throughput, SNNs can be implemented on memristive crossbars where Multiply-and-Accumulate (MAC) operations are realized in the analog domain using emerging Non-Volatile-Memory (NVM) devices. Despite the compatibility of SNNs with memristive crossbars, there is little attention to study on the effect of intrinsic crossbar non-idealities and stochasticity on the performance of SNNs. In this paper, we conduct a comprehensive analysis of the robustness of SNNs on non-ideal crossbars. We examine SNNs trained via learning algorithms such as, surrogate gradient and ANN-SNN conversion. Our results show that repetitive crossbar computations across multiple time-steps induce error accumulation, resulting in a huge performance drop during SNN inference. We further show that SNNs trained with a smaller number of time-steps achieve better accuracy when deployed on memristive crossbars.
注释 Accepted in ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED), 2022 DOI: 10.1145/3531437.3539729
邮件日期 2022年06月22日
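Entry 479 studies how crossbar non-idealities corrupt the analog MAC operations that SNN inference repeats every time step. A toy model of one such non-ideality, multiplicative device-to-device conductance variation on the weights, is enough to see the effect (illustrative only; real crossbars also suffer IR drop, quantization, and stuck-at faults):

```python
import random

def crossbar_mac(weights, inputs, sigma=0.1, rng=None):
    """Dot product with multiplicative conductance variation: each weight
    is perturbed by Gaussian noise of relative std `sigma` before the MAC
    (a toy model of memristive crossbar non-ideality)."""
    rng = rng or random.Random(0)
    return sum(w * (1.0 + rng.gauss(0.0, sigma)) * x
               for w, x in zip(weights, inputs))
```

Because an SNN re-evaluates this MAC at every time step, per-step errors like these accumulate over the simulation window, which is the error-accumulation mechanism the abstract reports.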

478、SNN2ANN:一种快速且内存高效的脉冲神经网络训练框架

  • SNN2ANN: A Fast and Memory-Efficient Training Framework for Spiking Neural Networks 时间:2022年06月19日 第一作者:Jianxiong Tang 链接.

摘要:脉冲神经网络是低功耗环境下的高效计算模型。基于脉冲的BP算法和ANN到SNN(ANN2SNN)的转换是SNN训练的成功技术。然而,基于脉冲的BP训练速度较慢,并且需要大量内存。尽管ANN2SNN提供了一种低成本的SNN训练方法,但它需要许多推理步骤来模仿训练良好的ANN以获得好的性能。在本文中,我们提出了一个SNN到ANN(SNN2ANN)框架,以快速且内存高效的方式训练SNN。SNN2ANN由两部分组成:a)ANN和SNN之间的权重共享架构;b)脉冲映射单元。首先,该架构在ANN分支上训练权重共享参数,从而实现SNN的快速训练和低内存开销。其次,脉冲映射单元确保ANN的激活值是脉冲特征。因此,可以通过训练ANN分支来优化SNN的分类误差。此外,我们设计了一种自适应阈值调整(ATA)算法来解决噪声脉冲问题。实验结果表明,基于SNN2ANN的模型在基准数据集(CIFAR10、CIFAR100和Tiny-ImageNet)上表现良好。此外,与基于脉冲的BP模型相比,SNN2ANN可以在0.625x的时间步长、0.377x的训练时间、0.27x的GPU内存开销和0.33x的脉冲活动下达到相当的精度。

英文摘要 Spiking neural networks are efficient computation models for low-power environments. Spike-based BP algorithms and ANN-to-SNN (ANN2SNN) conversions are successful techniques for SNN training. Nevertheless, the spike-base BP training is slow and requires large memory costs. Though ANN2NN provides a low-cost way to train SNNs, it requires many inference steps to mimic the well-trained ANN for good performance. In this paper, we propose a SNN-to-ANN (SNN2ANN) framework to train the SNN in a fast and memory-efficient way. The SNN2ANN consists of 2 components: a) a weight sharing architecture between ANN and SNN and b) spiking mapping units. Firstly, the architecture trains the weight-sharing parameters on the ANN branch, resulting in fast training and low memory costs for SNN. Secondly, the spiking mapping units ensure that the activation values of the ANN are the spiking features. As a result, the classification error of the SNN can be optimized by training the ANN branch. Besides, we design an adaptive threshold adjustment (ATA) algorithm to address the noisy spike problem. Experiment results show that our SNN2ANN-based models perform well on the benchmark datasets (CIFAR10, CIFAR100, and Tiny-ImageNet). Moreover, the SNN2ANN can achieve comparable accuracy under 0.625x time steps, 0.377x training time, 0.27x GPU memory costs, and 0.33x spike activities of the Spike-based BP model.
邮件日期 2022年06月22日
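上述脉冲映射单元的核心思想,是让ANN分支的激活值落在速率编码SNN在有限时间步内可表示的离散取值上。下面是一个示意性的Python草图(阈值、时间步数等参数均为假设值,并非论文的实现):

```python
def spike_mapping(x, timesteps=4, v_th=1.0):
    """Map an ANN activation onto the nearest value a rate-coded SNN
    can represent with `timesteps` steps: a multiple of v_th/timesteps,
    clipped to [0, v_th]."""
    x = max(0.0, min(x, v_th))   # spiking activations are bounded
    step = v_th / timesteps      # smallest rate-representable increment
    return round(x / step) * step
```

例如,timesteps=4 时激活只能取 {0, 0.25, 0.5, 0.75, 1.0};在ANN分支上直接训练这种量化后的激活,即可缩小与SNN推理之间的差距。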

477、tinySNN:迈向内存和能量高效的脉冲神经网络

  • tinySNN: Towards Memory- and Energy-Efficient Spiking Neural Networks 时间:2022年06月17日 第一作者:Rachmad Vidya Wicaksana Putra 链接.

摘要:较大的脉冲神经网络(SNN)模型通常是有利的,因为它们可以提供更高的精度。然而,在资源和能源受限的嵌入式平台上使用此类模型效率低下。为此,我们提出了一个tinySNN框架,该框架在训练和推理阶段优化了SNN处理的内存和能量需求,同时保持了较高的准确性。它是通过减少SNN操作、提高学习质量、量化SNN参数和选择适当的SNN模型来实现的。此外,我们的tinySNN对不同的SNN参数(即权重和神经元参数)进行量化,以最大限度地压缩,同时探索量化方案、精度水平和舍入方案的不同组合,以找到提供可接受精度的模型。实验结果表明,与基线网络相比,我们的tinySNN在不损失准确性的情况下显著减少了SNN的内存占用和能耗。因此,我们的tinySNN有效地压缩了给定的SNN模型,以节省内存和能源的方式实现高精度,从而使SNN能够用于资源和能源受限的嵌入式应用。

英文摘要 Larger Spiking Neural Network (SNN) models are typically favorable as they can offer higher accuracy. However, employing such models on the resource- and energy-constrained embedded platforms is inefficient. Towards this, we present a tinySNN framework that optimizes the memory and energy requirements of SNN processing in both the training and inference phases, while keeping the accuracy high. It is achieved by reducing the SNN operations, improving the learning quality, quantizing the SNN parameters, and selecting the appropriate SNN model. Furthermore, our tinySNN quantizes different SNN parameters (i.e., weights and neuron parameters) to maximize the compression while exploring different combinations of quantization schemes, precision levels, and rounding schemes to find the model that provides acceptable accuracy. The experimental results demonstrate that our tinySNN significantly reduces the memory footprint and the energy consumption of SNNs without accuracy loss as compared to the baseline network. Therefore, our tinySNN effectively compresses the given SNN model to achieve high accuracy in a memory- and energy-efficient manner, hence enabling the employment of SNNs for the resource- and energy-constrained embedded applications.
注释 9 figures
邮件日期 2022年06月20日
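摘要中提到tinySNN会探索量化方案、精度位宽与舍入方案的不同组合。下面用纯Python给出一个均匀定点量化的最小草图(位宽、舍入方式等均为示意性假设,并非论文的具体方案):

```python
import math

def quantize(w, bits=8, w_max=1.0, rounding="nearest"):
    """Uniformly quantize weight w onto a signed fixed-point grid of
    2**(bits-1)-1 levels per sign, with a selectable rounding scheme."""
    levels = 2 ** (bits - 1) - 1
    scaled = w / w_max * levels
    if rounding == "nearest":        # round half away from zero
        q = math.floor(scaled + 0.5) if scaled >= 0 else math.ceil(scaled - 0.5)
    elif rounding == "floor":        # truncation toward -inf
        q = math.floor(scaled)
    else:                            # "ceil"
        q = math.ceil(scaled)
    q = max(-levels, min(q, levels)) # saturate out-of-range weights
    return q * w_max / levels
```

不同舍入方案会带来不同方向的量化偏差,这正是论文中需要对组合进行搜索的原因之一。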

476、基于帧和基于事件的单目标定位的脉冲神经网络

  • Spiking Neural Networks for Frame-based and Event-based Single Object Localization 时间:2022年06月13日 第一作者:Sami Barchid 链接.

摘要:作为人工神经网络的节能替代品,脉冲神经网络显示出了很大的前景。然而,仅依靠常见的神经形态视觉基线(如分类),仍然难以理解传感器噪声和输入编码对网络活动和性能的影响。因此,针对基于帧和基于事件的传感器,我们提出了一种使用代理梯度下降训练的单目标定位脉冲神经网络方法。我们将我们的方法与类似的人工神经网络进行了比较,结果表明,我们的模型在准确性、对各种图像损坏的鲁棒性和较低的能耗方面具有相当或更好的性能。此外,我们还研究了神经编码方案对静态图像准确性、鲁棒性和能量效率的影响。我们的观察结果与之前关于生物合理学习规则的研究有很大不同,这有助于设计代理梯度训练架构,并为未来神经形态技术在噪声特性和数据编码方法方面的设计优先级提供了见解。

英文摘要 Spiking neural networks have shown much promise as an energy-efficient alternative to artificial neural networks. However, understanding the impacts of sensor noises and input encodings on the network activity and performance remains difficult with common neuromorphic vision baselines like classification. Therefore, we propose a spiking neural network approach for single object localization trained using surrogate gradient descent, for frame- and event-based sensors. We compare our method with similar artificial neural networks and show that our model has competitive/better performance in accuracy, robustness against various corruptions, and has lower energy consumption. Moreover, we study the impact of neural coding schemes for static images in accuracy, robustness, and energy efficiency. Our observations differ importantly from previous studies on bio-plausible learning rules, which helps in the design of surrogate gradient trained architectures, and offers insight to design priorities in future neuromorphic technologies in terms of noise characteristics and data encoding methods.
注释 21 pages, 12 figures
邮件日期 2022年06月15日
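上文的"代理梯度下降"指的是:前向传播仍使用不可导的阶跃脉冲函数,反向传播时用一个平滑函数替代其导数。下面是常见的fast-sigmoid形式的示意草图(alpha等参数为假设值):

```python
def heaviside(v, v_th=1.0):
    """Forward pass: emit a spike when the membrane potential crosses threshold."""
    return 1.0 if v >= v_th else 0.0

def surrogate_grad(v, v_th=1.0, alpha=2.0):
    """Backward pass: a smooth fast-sigmoid surrogate for the Heaviside
    derivative, peaked at the threshold and decaying away from it."""
    x = alpha * (v - v_th)
    return alpha / (2.0 * (1.0 + abs(x)) ** 2)
```

训练时前向用 heaviside 产生脉冲,反向用 surrogate_grad 的值代替阶跃函数几乎处处为零的真实导数,从而让梯度得以回传。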

475、神经形态无线认知:用于远程推理的事件驱动语义通信

  • Neuromorphic Wireless Cognition: Event-Driven Semantic Communications for Remote Inference 时间:2022年06月13日 第一作者:Jiechen Chen 链接.

摘要:神经形态计算是一种新兴的计算范式,它从批量处理转向在线、事件驱动的流数据处理。当与基于脉冲的传感器结合时,神经形态芯片仅在脉冲定时中记录到相关事件时才消耗能量,并对环境中的变化条件提供低延迟响应,从而内在地适应数据分布的“语义”。本文提出了一种端到端的神经形态无线物联网系统设计,该系统集成了基于脉冲的传感、处理和通信。在所提出的NeuroComm系统中,每个传感设备都配备了一个神经形态传感器、一个脉冲神经网络(SNN)和一个具有多个天线的冲击无线电发射器。传输在共享衰落信道上进行,接收器配备多天线冲击无线电接收器和SNN。为了使接收器能够适应衰落信道条件,我们引入了一个超网络,使用导频来控制解码SNN的权重。导频、编码SNN、解码SNN和超网络在多个信道实现上联合训练。结果表明,与传统的基于帧的数字解决方案以及替代的非自适应训练方法相比,该系统在达到目标精度的时间和能耗指标方面有显著改进。

英文摘要 Neuromorphic computing is an emerging computing paradigm that moves away from batched processing towards the online, event-driven, processing of streaming data. Neuromorphic chips, when coupled with spike-based sensors, can inherently adapt to the "semantics" of the data distribution by consuming energy only when relevant events are recorded in the timing of spikes and by providing a low-latency response to changing conditions in the environment. This paper proposes an end-to-end design for a neuromorphic wireless Internet-of-Things system that integrates spike-based sensing, processing, and communication. In the proposed NeuroComm system, each sensing device is equipped with a neuromorphic sensor, a spiking neural network (SNN), and an impulse radio transmitter with multiple antennas. Transmission takes place over a shared fading channel to a receiver equipped with a multi-antenna impulse radio receiver and with an SNN. In order to enable adaptation of the receiver to the fading channel conditions, we introduce a hypernetwork to control the weights of the decoding SNN using pilots. Pilots, encoding SNNs, decoding SNN, and hypernetwork are jointly trained across multiple channel realizations. The proposed system is shown to significantly improve over conventional frame-based digital solutions, as well as over alternative non-adaptive training methods, in terms of time-to-accuracy and energy consumption metrics.
注释 submitted
邮件日期 2022年06月14日

474、AutoSNN:走向节能脉冲神经网络

  • AutoSNN: Towards Energy-Efficient Spiking Neural Networks 时间:2022年06月13日 第一作者:Byunggook Na 链接.
注释 Accepted in ICML22
邮件日期 2022年06月14日

473、一种用于脉冲神经网络的突触阈值协同学习方法

  • A Synapse-Threshold Synergistic Learning Approach for Spiking Neural Networks 时间:2022年06月10日 第一作者:Hongze Sun 链接.

摘要:脉冲神经网络(SNN)在各种智能场景中表现出了出色的性能。大多数现有的SNN训练方法都是基于突触可塑性的概念;然而,现实大脑中的学习也利用了神经元固有的非突触机制。生物神经元的脉冲阈值是一个关键的内在神经元特征,在毫秒时间尺度上表现出丰富的动态,被认为是促进神经信息处理的潜在机制。在本研究中,我们开发了一种新的协同学习方法,可以同时训练SNN中的突触权重和脉冲阈值。使用突触阈值协同学习(STL-SNN)训练的SNN在各种静态和神经形态数据集上的准确率显著高于使用突触学习(SL)和阈值学习(TL)两种单一学习模型训练的SNN。在训练期间,协同学习方法优化神经阈值,通过适当的放电频率为网络提供稳定的信号传输。进一步分析表明,STL-SNN对噪声数据具有鲁棒性,并且在深度网络结构下具有低能耗。此外,通过引入广义联合决策框架(JDF),可以进一步提高STL-SNN的性能。总的来说,我们的研究结果表明,突触和内在非突触机制之间的生物学上合理的协同作用可能为开发高效的SNN学习方法提供一种有希望的途径。

英文摘要 Spiking neural networks (SNNs) have demonstrated excellent capabilities in various intelligent scenarios. Most existing methods for training SNNs are based on the concept of synaptic plasticity; however, learning in the realistic brain also utilizes intrinsic non-synaptic mechanisms of neurons. The spike threshold of biological neurons is a critical intrinsic neuronal feature that exhibits rich dynamics on a millisecond timescale and has been proposed as an underlying mechanism that facilitates neural information processing. In this study, we develop a novel synergistic learning approach that simultaneously trains synaptic weights and spike thresholds in SNNs. SNNs trained with synapse-threshold synergistic learning (STL-SNNs) achieve significantly higher accuracies on various static and neuromorphic datasets than SNNs trained with two single-learning models of the synaptic learning (SL) and the threshold learning (TL). During training, the synergistic learning approach optimizes neural thresholds, providing the network with stable signal transmission via appropriate firing rates. Further analysis indicates that STL-SNNs are robust to noisy data and exhibit low energy consumption for deep network structures. Additionally, the performance of STL-SNN can be further improved by introducing a generalized joint decision framework (JDF). Overall, our findings indicate that biologically plausible synergies between synaptic and intrinsic non-synaptic mechanisms may provide a promising approach for developing highly efficient SNN learning methods.
注释 13 pages, 9 figures, submitted to the IEEE Transactions on Neural Networks and Learning Systems (TNNLS)
邮件日期 2022年06月14日

472、基于稀疏学习脉冲的海马记忆模型的仿生实现

  • A bio-inspired implementation of a sparse-learning spike-based hippocampus memory model 时间:2022年06月10日 第一作者:Daniel Casanueva-Morato 链接.

摘要:神经系统,更具体地说,大脑,能够简单有效地解决复杂问题,远远超过现代计算机。在这方面,神经形态工程是一个研究领域,其重点是模仿控制大脑的基本原理,以开发实现此类计算能力的系统。在这个领域中,仿生学习和记忆系统仍然是一个有待解决的挑战,而这正是海马体所参与的领域。它是大脑中起短期记忆作用的区域,允许学习并以非结构化方式快速存储来自大脑皮层所有感觉核的信息及其随后的回忆。在这项工作中,我们提出了一种新的基于海马体的仿生记忆模型,该模型能够学习记忆,从线索(与其余内容相关的记忆的一部分)中回忆记忆,甚至在尝试用相同线索学习其他记忆时忘记原有记忆。该模型已使用脉冲神经网络在SpiNNaker硬件平台上实现,并进行了一系列实验和测试,以证明其正确且符合预期的运行。所提出的基于脉冲的记忆模型仅在接收到输入时才产生脉冲,因而是节能的;学习步骤需要7个时间步,回忆先前存储的记忆需要6个时间步。这项工作首次提出了一个全功能的基于脉冲的仿生海马记忆模型的硬件实现,为未来更复杂的神经形态系统的开发铺平了道路。

英文摘要 The nervous system, more specifically, the brain, is capable of solving complex problems simply and efficiently, far surpassing modern computers. In this regard, neuromorphic engineering is a research field that focuses on mimicking the basic principles that govern the brain in order to develop systems that achieve such computational capabilities. Within this field, bio-inspired learning and memory systems are still a challenge to be solved, and this is where the hippocampus is involved. It is the region of the brain that acts as a short-term memory, allowing the learning and unstructured and rapid storage of information from all the sensory nuclei of the cerebral cortex and its subsequent recall. In this work, we propose a novel bio-inspired memory model based on the hippocampus with the ability to learn memories, recall them from a cue (a part of the memory associated with the rest of the content) and even forget memories when trying to learn others with the same cue. This model has been implemented on the SpiNNaker hardware platform using Spiking Neural Networks, and a set of experiments and tests were performed to demonstrate its correct and expected operation. The proposed spike-based memory model generates spikes only when it receives an input, being energy efficient, and it needs 7 timesteps for the learning step and 6 timesteps for recalling a previously-stored memory. This work presents the first hardware implementation of a fully functional bio-inspired spike-based hippocampus memory model, paving the road for the development of future more complex neuromorphic systems.
注释 15 pages, 7 figures, 3 tables, journal, Neural Networks
邮件日期 2022年06月13日

471、用基于电势的归一化方法解决脉冲深度Q网络中脉冲特征信息消失问题

  • Solving the Spike Feature Information Vanishing Problem in Spiking Deep Q Network with Potential Based Normalization 时间:2022年06月08日 第一作者:Yinqian Sun 链接.

摘要:脑启发的脉冲神经网络(SNN)已成功应用于许多模式识别领域。基于SNN的深度结构在图像分类、目标检测等感知任务中取得了显著的成果。然而,深度SNN在强化学习(RL)任务中的应用仍然是一个有待探索的问题。虽然之前已有关于SNN和RL结合的研究,但大多数研究集中在具有浅层网络的机器人控制问题,或使用ANN-SNN转换方法实现脉冲深度Q网络(SDQN)。在这项工作中,我们从数学上分析了SDQN中脉冲信号特征消失的问题,并提出了一种基于电势的层归一化(pbLN)方法来直接训练脉冲深度Q网络。实验表明,与最先进的ANN-SNN转换方法和其他SDQN工作相比,所提出的pbLN脉冲深度Q网络(PL-SDQN)在Atari游戏任务中取得了更好的性能。

英文摘要 Brain inspired spiking neural networks (SNNs) have been successfully applied to many pattern recognition domains. The SNNs based deep structure have achieved considerable results in perceptual tasks, such as image classification, target detection. However, the application of deep SNNs in reinforcement learning (RL) tasks is still a problem to be explored. Although there have been previous studies on the combination of SNNs and RL, most of them focus on robotic control problems with shallow networks or using ANN-SNN conversion method to implement spiking deep Q Network (SDQN). In this work, we mathematically analyzed the problem of the disappearance of spiking signal features in SDQN and proposed a potential-based layer normalization (pbLN) method to directly train spiking deep Q networks. Experiments show that compared with state-of-the-art ANN-SNN conversion methods and other SDQN works, the proposed pbLN spiking deep Q networks (PL-SDQN) achieved better performance on Atari game tasks.
邮件日期 2022年06月09日
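摘要并未给出pbLN的具体公式;其思想是对每层神经元的膜电位做归一化,以防止脉冲特征在深层网络中消失。下面是一个按层对膜电位做零均值/单位方差归一化的示意草图(并非论文的精确形式):

```python
import math

def pb_layer_norm(potentials, eps=1e-5):
    """Normalize a layer's membrane potentials to zero mean and unit
    variance, so deeper layers keep enough supra-threshold drive to spike."""
    n = len(potentials)
    mean = sum(potentials) / n
    var = sum((p - mean) ** 2 for p in potentials) / n
    return [(p - mean) / math.sqrt(var + eps) for p in potentials]
```

归一化后每层总有一部分神经元的电位位于阈值附近,避免脉冲活动在逐层传播中衰减到零。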

470、在SpiNNaker上基于脉冲神经网络、使用类神经逻辑门构建脉冲存储器

  • Construction of a spike-based memory using neural-like logic gates based on Spiking Neural Networks on SpiNNaker 时间:2022年06月08日 第一作者:Alvaro Ayuso-Martinez 链接.

摘要:由于神经形态工程作为一个研究领域的巨大潜力,它集中了大量研究人员的努力,以寻求利用生物神经系统和大脑作为一个整体的优势,设计更高效、具备实时能力的应用。为了开发尽可能接近生物学的应用,人们使用了被认为在生物学上合理、构成第三代人工神经网络(ANN)的脉冲神经网络(SNN)。由于一些基于SNN的应用可能需要存储数据以供日后使用(这种能力既存在于数字电路中,也以某种形式存在于生物学中),因此需要一种脉冲存储器。这项工作提出了存储器的脉冲实现;存储器是计算机体系结构中最重要的组件之一,在设计全脉冲计算机时可能是必不可少的。在设计这种脉冲存储器的过程中,还实现并测试了不同的中间组件。测试在SpiNNaker神经形态平台上进行,并验证了用于构建所述模块的方法。此外,本文还深入研究了如何使用这种方法构建脉冲模块,并将其与其他专注于脉冲组件设计的类似工作(包括脉冲逻辑门和脉冲存储器)所用的方法进行了比较。所有实现的模块和开发的测试都可以在公共存储库中获得。

英文摘要 Neuromorphic engineering concentrates the efforts of a large number of researchers due to its great potential as a field of research, in a search for the exploitation of the advantages of the biological nervous system and the brain as a whole for the design of more efficient and real-time capable applications. For the development of applications as close to biology as possible, Spiking Neural Networks (SNNs) are used, considered biologically-plausible and that form the third generation of Artificial Neural Networks (ANNs). Since some SNN-based applications may need to store data in order to use it later, something that is present both in digital circuits and, in some form, in biology, a spiking memory is needed. This work presents a spiking implementation of a memory, which is one of the most important components in the computer architecture, and which could be essential in the design of a fully spiking computer. In the process of designing this spiking memory, different intermediate components were also implemented and tested. The tests were carried out on the SpiNNaker neuromorphic platform and allow to validate the approach used for the construction of the presented blocks. In addition, this work studies in depth how to build spiking blocks using this approach and includes a comparison between it and those used in other similar works focused on the design of spiking components, which include both spiking logic gates and spiking memory. All implemented blocks and developed tests are available in a public repository.
注释 15 pages, 9 figures, 8 tables, journal paper, Neural Networks
邮件日期 2022年06月09日

469、脉冲选通流:一种用于在线手势识别的基于层次结构的脉冲神经网络

  • The Spike Gating Flow: A Hierarchical Structure Based Spiking Neural Network for Online Gesture Recognition 时间:2022年06月07日 第一作者:Zihao Zhao 链接.
邮件日期 2022年06月08日

468、SpikiLi:基于激光雷达的自动驾驶实时目标检测的脉冲模拟

  • SpikiLi: A Spiking Simulation of LiDAR based Real-time Object Detection for Autonomous Driving 时间:2022年06月06日 第一作者:Sambit Mohapatra 链接.

摘要:脉冲神经网络是一种较新的神经网络设计方法,有望极大地提高功率效率、计算效率和处理延迟。它们通过使用异步的基于脉冲的数据流、基于事件的信号生成和处理,并修改神经元模型使其与生物神经元非常相似来实现这一点。虽然一些早期研究已经显示出其适用于常见深度学习任务的重要初步证据,但它们在复杂现实任务中的应用仍相对较少。在这项工作中,我们首先说明了脉冲神经网络对复杂深度学习任务的适用性,即基于激光雷达的自动驾驶三维目标检测。其次,我们逐步演示了如何使用预训练的卷积神经网络模拟脉冲行为。我们在仿真中对脉冲神经网络的关键方面进行了精确建模,并在GPU上实现了等效的运行时间和精度。当该模型在神经形态硬件上实现时,我们预期功率效率将显著提高。

英文摘要 Spiking Neural Networks are a recent neural network design approach that promises tremendous improvements in power efficiency, computation efficiency, and processing latency. They do so by using asynchronous spike-based data flow, event-based signal generation, processing, and modifying the neuron model to resemble biological neurons closely. While some initial works have shown significant initial evidence of applicability to common deep learning tasks, their application in complex real-world tasks has been relatively limited. In this work, we first illustrate the applicability of spiking neural networks to a complex deep learning task, namely LiDAR-based 3D object detection for automated driving. Secondly, we make a step-by-step demonstration of simulating spiking behavior using a pre-trained convolutional neural network. We closely model essential aspects of spiking neural networks in simulation and achieve equivalent run-time and accuracy on a GPU. When the model is realized on a neuromorphic hardware, we expect to have significantly improved power efficiency.
注释 Accepted at Workshop on Event Sensing and Neuromorphic Engineering - 8th International Conference on Event-based Control, Communication, and Signal Processing
邮件日期 2022年06月08日

467、支持新兴神经编码的资源高效脉冲神经网络加速器

  • A Resource-efficient Spiking Neural Network Accelerator Supporting Emerging Neural Encoding 时间:2022年06月06日 第一作者:Daniel Gerlinghoff 链接.

摘要:脉冲神经网络(SNN)由于其低功耗无乘法计算和与人类神经系统中的生物过程更为相似,最近获得了发展势头。然而,SNN需要很长的脉冲序列(高达1000)才能达到与大型模型的人工神经网络(ANN)类似的精度,这抵消了效率,并阻碍了其在现实世界用例的低功率系统中的应用。为了缓解这个问题,提出了新的神经编码方案来缩短脉冲序列,同时保持高精度。然而,目前的SNN加速器不能很好地支持新兴的编码方案。在这项工作中,我们提出了一种新的硬件架构,可以通过新兴的神经编码有效地支持SNN。我们的实现具有节能和面积效率高的处理单元,提高了并行性,减少了内存访问。我们在FPGA上验证了加速器,在功耗和延迟方面分别比以前的工作提高了25%和90%。同时,高面积效率允许我们扩展大型神经网络模型。据我们所知,这是首次将大型神经网络模型VGG部署在基于FPGA的物理神经形态硬件上。

英文摘要 Spiking neural networks (SNNs) recently gained momentum due to their low-power multiplication-free computing and the closer resemblance of biological processes in the nervous system of humans. However, SNNs require very long spike trains (up to 1000) to reach an accuracy similar to their artificial neural network (ANN) counterparts for large models, which offsets efficiency and inhibits its application to low-power systems for real-world use cases. To alleviate this problem, emerging neural encoding schemes are proposed to shorten the spike train while maintaining the high accuracy. However, current accelerators for SNN cannot well support the emerging encoding schemes. In this work, we present a novel hardware architecture that can efficiently support SNN with emerging neural encoding. Our implementation features energy and area efficient processing units with increased parallelism and reduced memory accesses. We verified the accelerator on FPGA and achieve 25% and 90% improvement over previous work in power consumption and latency, respectively. At the same time, high area efficiency allows us to scale for large neural network models. To the best of our knowledge, this is the first work to deploy the large neural network model VGG on physical FPGA-based neuromorphic hardware.
邮件日期 2022年06月07日

466、低功率神经形态肌电手势分类

  • Low Power Neuromorphic EMG Gesture Classification 时间:2022年06月04日 第一作者:Sai Sukruth Bezugam 链接.

摘要:基于肌电图(EMG)信号的手势识别对于智能穿戴设备和生物医学神经假体控制等应用至关重要。由于其固有的脉冲/事件驱动的时空动力学,脉冲神经网络(SNN)在低功耗、实时肌电手势识别方面前景广阔。在文献中,用于肌电手势分类的神经形态硬件实现(全芯片/板/系统规模)的演示有限。此外,大多数文献尝试利用基于LIF(泄漏积分发放)神经元的原始SNN。在这项工作中,我们通过以下关键贡献来弥补上述差距:(1)使用神经形态递归脉冲神经网络(RSNN)低功耗、高精度地演示基于肌电信号的手势识别。特别是,我们提出了一种基于特殊双指数自适应阈值(DEXAT)神经元的多时间尺度递归神经形态系统。我们的网络实现了最先进的分类精度(90%),同时在Roshambo肌电数据集上使用的神经元比已报道的最佳现有技术少约53%。(2)一种新的多通道脉冲编码器方案,用于在神经形态系统上高效处理实值肌电数据。(3)展示了在英特尔专用神经形态Loihi芯片上实现复杂自适应神经元的独特多隔室方法。(4)与批处理大小为50的GPU相比,在Loihi(Nahuku 32)上实现的RSNN获得了约983X/19X的显著能量/延迟优势。

英文摘要 EMG (Electromyograph) signal based gesture recognition can prove vital for applications such as smart wearables and bio-medical neuro-prosthetic control. Spiking Neural Networks (SNNs) are promising for low-power, real-time EMG gesture recognition, owing to their inherent spike/event driven spatio-temporal dynamics. In literature, there are limited demonstrations of neuromorphic hardware implementation (at full chip/board/system scale) for EMG gesture classification. Moreover, most literature attempts exploit primitive SNNs based on LIF (Leaky Integrate and Fire) neurons. In this work, we address the aforementioned gaps with following key contributions: (1) Low-power, high accuracy demonstration of EMG-signal based gesture recognition using neuromorphic Recurrent Spiking Neural Networks (RSNN). In particular, we propose a multi-time scale recurrent neuromorphic system based on special double-exponential adaptive threshold (DEXAT) neurons. Our network achieves state-of-the-art classification accuracy (90%) while using ~53% lesser neurons than best reported prior art on Roshambo EMG dataset. (2) A new multi-channel spike encoder scheme for efficient processing of real-valued EMG data on neuromorphic systems. (3) Unique multi-compartment methodology to implement complex adaptive neurons on Intel's dedicated neuromorphic Loihi chip is shown. (4) RSNN implementation on Loihi (Nahuku 32) achieves significant energy/latency benefits of ~983X/19X compared to GPU for batch size as 50.
注释 3 Pages, 5 figures, 1 table
邮件日期 2022年06月08日
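摘要中的DEXAT神经元在LIF的基础上让阈值沿两个指数时间尺度自适应。下面是一个纯Python的示意实现(所有时间常数与系数均为假设值,仅用于说明双指数自适应阈值的机制,并非论文参数):

```python
class DexatNeuron:
    """LIF spike generator whose threshold is raised by two adaptation
    variables decaying on a fast and a slow exponential timescale."""
    def __init__(self, v_th0=1.0, tau_m=0.9, tau_a1=0.95, tau_a2=0.999,
                 beta1=0.5, beta2=0.5):
        self.v_th0, self.tau_m = v_th0, tau_m
        self.tau_a1, self.tau_a2 = tau_a1, tau_a2
        self.beta1, self.beta2 = beta1, beta2
        self.v, self.a1, self.a2 = 0.0, 0.0, 0.0

    def step(self, i_in):
        self.v = self.tau_m * self.v + i_in   # leaky integration
        self.a1 *= self.tau_a1                # fast adaptation decays
        self.a2 *= self.tau_a2                # slow adaptation decays
        v_th = self.v_th0 + self.beta1 * self.a1 + self.beta2 * self.a2
        if self.v >= v_th:
            self.v = 0.0                      # reset membrane on spike
            self.a1 += 1.0                    # each spike raises both
            self.a2 += 1.0                    # adaptation variables
            return 1
        return 0
```

恒定输入下,脉冲间隔会随 a1/a2 的累积而逐渐拉长,这正是多时间尺度自适应对发放率的调节作用。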

465、脉冲选通流:一种用于在线手势识别的基于层次结构的脉冲神经网络

  • The Spike Gating Flow: A Hierarchical Structure Based Spiking Neural Network for Online Gesture Recognition 时间:2022年06月04日 第一作者:Zihao Zhao 链接.

摘要:动作识别是人工智能的一个令人兴奋的研究途径,因为它可能在机器人视觉和汽车等新兴工业领域改变游戏规则。然而,由于巨大的计算成本和低效的学习,当前的深度学习在此类应用中面临重大挑战。因此,我们开发了一种新的脑启发脉冲神经网络(SNN)系统,名为脉冲门控流(SGF),用于在线动作学习。所开发的系统由多个以分层方式组装的SGF单元组成。单个SGF单元包括三层:特征提取层、事件驱动层和基于直方图的训练层。为了演示所开发系统的能力,我们采用标准的动态视觉传感器(DVS)手势分类作为基准。结果表明,我们可以达到87.5%的准确率,与深度学习(DL)相当,但训练/推理数据量之比仅为1.5:1,并且学习过程只需要一个训练轮次(epoch)。同时,据我们所知,这是基于非反向传播算法的SNN中的最高精度。最后,我们总结了所开发网络的少样本学习范式:1)基于层次结构的网络设计融入了人类先验知识;2)SNN用于基于内容的全局动态特征检测。

英文摘要 Action recognition is an exciting research avenue for artificial intelligence since it may be a game changer in the emerging industrial fields such as robotic visions and automobiles. However, current deep learning faces major challenges for such applications because of the huge computational cost and the inefficient learning. Hence, we develop a novel brain-inspired Spiking Neural Network (SNN) based system titled Spiking Gating Flow (SGF) for online action learning. The developed system consists of multiple SGF units which are assembled in a hierarchical manner. A single SGF unit involves three layers: a feature extraction layer, an event-driven layer and a histogram-based training layer. To demonstrate the developed system capabilities, we employ a standard Dynamic Vision Sensor (DVS) gesture classification as a benchmark. The results indicate that we can achieve 87.5% accuracy which is comparable with Deep Learning (DL), but at a smaller training/inference data ratio of 1.5:1, and only a single training epoch is required during the learning process. Meanwhile, to the best of our knowledge, this is the highest accuracy among the non-backpropagation algorithm based SNNs. Finally, we conclude the few-shot learning paradigm of the developed network: 1) a hierarchical structure-based network design involves human prior knowledge; 2) SNNs for content based global dynamic feature detection.
邮件日期 2022年06月07日

464、aSTDP:一种更具生物学合理性的学习

  • aSTDP: A More Biologically Plausible Learning 时间:2022年05月22日 第一作者:Shiyuan Li 链接.

摘要:生物神经网络中的脉冲时间依赖可塑性(STDP)已被证明在生物学习过程中十分重要。另一方面,人工神经网络使用不同的学习方式,如反向传播或对比赫布学习。在这项工作中,我们介绍了近似STDP(aSTDP),一种更类似于生物学习过程的新神经网络学习框架。它只使用STDP规则进行监督和无监督学习,每个神经元分布式地学习模式,不需要全局损失或其他监督信息。为了更好地利用STDP学习,我们还使用数值方法来近似每个神经元的导数,并利用导数为神经元设定目标,以加速训练和测试过程。该框架无需额外配置即可在一个模型中进行预测或生成模式。最后,我们在MNIST数据集上针对分类和生成任务验证了我们的框架。

英文摘要 Spike-timing dependent plasticity in biological neural networks has been proven to be important during biological learning process. On the other hand, artificial neural networks use a different way to learn, such as Back-Propagation or Contrastive Hebbian Learning. In this work we introduce approximate STDP, a new neural networks learning framework more similar to the biological learning process. It uses only STDP rules for supervised and unsupervised learning; every neuron learns patterns in a distributed fashion and doesn't need a global loss or other supervised information. We also use a numerical way to approximate the derivatives of each neuron in order to better use STDP learning and use the derivatives to set a target for neurons to accelerate training and testing process. The framework can make predictions or generate patterns in one model without additional configuration. Finally, we verified our framework on MNIST dataset for classification and generation tasks.
注释 17 pages, 6 figures. arXiv admin note: text overlap with arXiv:1912.00009
邮件日期 2022年06月29日
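aSTDP建立在标准的成对STDP规则之上:突触前脉冲先于突触后脉冲则增强突触,反之则减弱,幅度随脉冲时间差指数衰减。下面是该经典规则的最小草图(幅值与时间常数为常见的假设取值,并非论文参数):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pairwise STDP weight change for dt = t_post - t_pre (ms):
    pre-before-post (dt > 0) potentiates, post-before-pre depresses,
    both decaying exponentially with |dt|."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    if dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0
```

时间差越小,权重改变越大;aSTDP 在此基础上用数值近似的导数为各神经元设定局部目标。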

463、编程分子系统以模拟学习脉冲神经元

  • Programming molecular systems to emulate a learning spiking neuron 时间:2022年05月09日 第一作者:Jakub Fil 链接.

摘要:赫布理论试图解释大脑中的神经元如何适应刺激,从而实现学习。Hebbian学习的一个有趣特征是,它是一种无监督的方法,因此不需要反馈,适用于系统必须自主学习的环境。本文探讨了如何设计分子系统来显示这种原始智能行为,并提出了第一个化学反应网络(CRN),它可以跨任意多个输入通道显示自主赫布学习。该系统模拟了一个脉冲神经元,我们证明了它可以学习输入的统计偏差。基本CRN是一组最小的、热力学上合理的微观可逆化学方程,可以根据其能量需求进行分析。然而,为了探索这种化学系统如何从头构建,我们还提出了一种基于酶驱动的分区反应的扩展版本。最后,我们还展示了基于DNA链置换范式的纯DNA系统如何实现神经元动力学。我们的分析为探索生物环境中的自主学习提供了一个引人注目的蓝图,使我们更接近于实现真正的合成生物智能。

英文摘要 Hebbian theory seeks to explain how the neurons in the brain adapt to stimuli, to enable learning. An interesting feature of Hebbian learning is that it is an unsupervised method and as such, does not require feedback, making it suitable in contexts where systems have to learn autonomously. This paper explores how molecular systems can be designed to show such proto-intelligent behaviours, and proposes the first chemical reaction network (CRN) that can exhibit autonomous Hebbian learning across arbitrarily many input channels. The system emulates a spiking neuron, and we demonstrate that it can learn statistical biases of incoming inputs. The basic CRN is a minimal, thermodynamically plausible set of micro-reversible chemical equations that can be analysed with respect to their energy requirements. However, to explore how such chemical systems might be engineered de novo, we also propose an extended version based on enzyme-driven compartmentalised reactions. Finally, we also show how a purely DNA system, built upon the paradigm of DNA strand displacement, can realise neuronal dynamics. Our analysis provides a compelling blueprint for exploring autonomous learning in biological settings, bringing us closer to realising real synthetic biological intelligence.
注释 Submitted to ACS Synthetic Biology. arXiv admin note: substantial text overlap with arXiv:2009.13207
邮件日期 2022年06月07日

462、IM/DD光通信中的脉冲神经网络均衡

  • Spiking Neural Network Equalization for IM/DD Optical Communication 时间:2022年06月01日 第一作者:Elias Arnold 链接.
邮件日期 2022年06月02日

461、基于累加器神经元的字典学习

  • Dictionary Learning with Accumulator Neurons 时间:2022年05月30日 第一作者:Gavin Parpart 链接.

摘要:局部竞争算法(LCA)使用非脉冲泄漏积分器神经元之间的局部竞争来推断稀疏表示,允许在大规模并行神经形态架构(如Intel的Loihi处理器)上潜在地实时执行。在这里,我们关注的是使用时空特征字典从流视频中推断稀疏表示的问题,该字典以无监督的方式针对稀疏重建进行优化。非脉冲LCA以前被用于实现由卷积核组成的时空字典在原始未标记视频上的无监督学习。我们演示了如何使用累加器神经元高效实现脉冲LCA(S-LCA)的无监督字典学习;累加器神经元将传统的泄漏积分发放(LIF)脉冲发生器与一个附加状态变量相结合,该状态变量用于最小化积分输入和脉冲输出之间的差异。我们在从分级到间歇脉冲的一系列动力学状态下展示了字典学习,用于推断从CIFAR数据库提取的静态图像以及从DVS相机捕获的视频帧的稀疏表示。在一项需要从DVS相机拍摄的快速翻动的一副扑克牌中识别花色的分类任务中,我们发现,当用于推断稀疏时空表示的LCA模型从分级迁移到脉冲时,性能基本上没有下降。我们的结论是,累加器神经元可能为未来的神经形态硬件提供一个强大的使能组件,用于实现针对基于事件的DVS相机流视频稀疏重建而优化的时空字典的在线无监督学习。

英文摘要 The Locally Competitive Algorithm (LCA) uses local competition between non-spiking leaky integrator neurons to infer sparse representations, allowing for potentially real-time execution on massively parallel neuromorphic architectures such as Intel's Loihi processor. Here, we focus on the problem of inferring sparse representations from streaming video using dictionaries of spatiotemporal features optimized in an unsupervised manner for sparse reconstruction. Non-spiking LCA has previously been used to achieve unsupervised learning of spatiotemporal dictionaries composed of convolutional kernels from raw, unlabeled video. We demonstrate how unsupervised dictionary learning with spiking LCA (S-LCA) can be efficiently implemented using accumulator neurons, which combine a conventional leaky-integrate-and-fire (LIF) spike generator with an additional state variable that is used to minimize the difference between the integrated input and the spiking output. We demonstrate dictionary learning across a wide range of dynamical regimes, from graded to intermittent spiking, for inferring sparse representations of both static images drawn from the CIFAR database as well as video frames captured from a DVS camera. On a classification task that requires identification of the suit from a deck of cards being rapidly flipped through as viewed by a DVS camera, we find essentially no degradation in performance as the LCA model used to infer sparse spatiotemporal representations migrates from graded to spiking. We conclude that accumulator neurons are likely to provide a powerful enabling component of future neuromorphic hardware for implementing online unsupervised learning of spatiotemporal dictionaries optimized for sparse reconstruction of streaming video from event based DVS cameras.
邮件日期 2022年06月01日
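摘要所述的累加器神经元将LIF脉冲发生器与一个跟踪"积分输入减去已发脉冲"差值的状态变量结合。一个示意性的理解是:发放脉冲时从状态中减去阈值而非清零,使脉冲总数逼近积分输入(参数为假设值,并非论文实现):

```python
class AccumulatorNeuron:
    """Integrate input into an accumulator; each spike subtracts the
    threshold (soft reset), so the residual tracks the difference between
    integrated input and emitted spike output."""
    def __init__(self, v_th=1.0, leak=1.0):
        self.v_th, self.leak = v_th, leak
        self.acc = 0.0

    def step(self, i_in):
        self.acc = self.leak * self.acc + i_in
        if self.acc >= self.v_th:
            self.acc -= self.v_th   # subtract instead of reset-to-zero
            return 1
        return 0
```

leak=1.0(纯累加)时,T步内的脉冲数约等于总输入除以阈值,因此输出脉冲率近似分级激活值,这正是"从分级迁移到脉冲时性能几乎不变"的直观原因。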

460、盲文字母阅读:神经形态硬件时空模式识别的基准

  • Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern Recognition on Neuromorphic Hardware 时间:2022年05月30日 第一作者:Simon F Muller-Cleve 链接.

摘要:时空模式识别是大脑的一种基本能力,许多实际应用都需要这种能力。最近的深度学习方法在这类任务中已经达到了出色的精度,但它们在传统嵌入式解决方案上的实现在计算和能源方面仍然非常昂贵。机器人应用中的触觉传感是一个需要实时处理和能效的典型例子。遵循脑启发计算方法,我们提出了一个通过盲文字母阅读在边缘进行时空触觉模式识别的新基准。我们基于iCub机器人的电容触觉传感器/指尖记录了一个新的盲文字母数据集,然后研究了时间信息的重要性以及基于事件的编码对基于脉冲/事件计算的影响。之后,我们使用带代理梯度的时间反向传播离线训练并比较了前馈和递归脉冲神经网络(SNN),然后将其部署在Intel Loihi神经形态芯片上,以实现快速高效的推理。我们将我们的方法与标准分类器,特别是部署在嵌入式Nvidia Jetson GPU上的长短期记忆网络(LSTM),在分类精度、功耗/能耗和计算延迟方面进行了比较。我们的结果表明,LSTM在准确性方面比递归SNN高14%。然而,Loihi上的递归SNN比Jetson上的LSTM能效高237倍,平均功率仅为31mW。这项工作为触觉感知提出了一个新的基准,并强调了基于事件的编码、神经形态硬件和基于脉冲的计算在边缘时空模式识别方面的挑战和机遇。

英文摘要 Spatio-temporal pattern recognition is a fundamental ability of the brain which is required for numerous real-world applications. Recent deep learning approaches have reached outstanding accuracy in such tasks, but their implementation on conventional embedded solutions is still very computationally and energy expensive. Tactile sensing in robotic applications is a representative example where real-time processing and energy-efficiency are required. Following a brain-inspired computing approach, we propose a new benchmark for spatio-temporal tactile pattern recognition at the edge through braille letters reading. We recorded a new braille letters dataset based on the capacitive tactile sensors/fingertip of the iCub robot, then we investigated the importance of temporal information and the impact of event-based encoding for spike-based/event-based computation. Afterwards, we trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients, then we deployed them on the Intel Loihi neuromorphic chip for fast and efficient inference. We confronted our approach to standard classifiers, in particular to a Long Short-Term Memory (LSTM) deployed on the embedded Nvidia Jetson GPU in terms of classification accuracy, power/energy consumption and computational delay. Our results show that the LSTM outperforms the recurrent SNN in terms of accuracy by 14%. However, the recurrent SNN on Loihi is 237 times more energy-efficient than the LSTM on Jetson, requiring an average power of only 31mW. This work proposes a new benchmark for tactile sensing and highlights the challenges and opportunities of event-based encoding, neuromorphic hardware and spike-based computing for spatio-temporal pattern recognition at the edge.
注释 20 pages, submitted to Frontiers in Neuroscience - Neuromorphic Engineering
邮件日期 2022年06月01日

459、加速脉冲神经网络训练

  • Accelerating spiking neural network training 时间:2022年05月30日 第一作者:Luke Taylor 链接.

摘要:脉冲神经网络(SNN)是一种人工网络,其灵感来源于大脑中动作电位的使用。由于在神经形态计算机上模拟这些网络可以改善能耗和速度(这正是其对应的人工神经网络(ANN)的主要扩展问题),人们对此的兴趣越来越大。在直接训练SNN使其准确性与ANN相当方面已取得重大进展。然而,这些方法由于其顺序性而速度较慢,导致训练时间较长。我们提出了一种直接训练每神经元单脉冲SNN的新技术,它消除了所有顺序计算,并完全依赖于向量化操作。我们在中低时空复杂度的真实数据集(Fashion-MNIST和Neuromorphic-MNIST)上展示了超过10倍的训练加速,同时保持稳健的分类性能。我们提出的解决方案在解决某些任务时,相比常规训练的SNN脉冲计数减少超过95.68%,这可以显著降低部署在神经形态计算机上时的能量需求。

英文摘要 Spiking neural networks (SNN) are a type of artificial network inspired by the use of action potentials in the brain. There is a growing interest in emulating these networks on neuromorphic computers due to their improved energy consumption and speed, which are the main scaling issues of their counterpart the artificial neural network (ANN). Significant progress has been made in directly training SNNs to perform on par with ANNs in terms of accuracy. These methods are however slow due to their sequential nature, leading to long training times. We propose a new technique for directly training single-spike-per-neuron SNNs which eliminates all sequential computation and relies exclusively on vectorised operations. We demonstrate over a $\times 10$ speedup in training with robust classification performance on real datasets of low to medium spatio-temporal complexity (Fashion-MNIST and Neuromorphic-MNIST). Our proposed solution manages to solve certain tasks with over a $95.68 \%$ reduction in spike counts relative to a conventionally trained SNN, which could significantly reduce energy requirements when deployed on neuromorphic computers.
注释 18 pages, 5 figures, under review at NeurIPS 2022
邮件日期 2022年05月31日

458、基于TNN的神经形态感觉处理单元的设计框架

  • Towards a Design Framework for TNN-Based Neuromorphic Sensory Processing Units 时间:2022年05月27日 第一作者:Prabhu Vellaisamy 链接.

摘要:时间神经网络(TNN)是一类脉冲神经网络,能以高能效实现类脑感觉处理。这项工作介绍了正在进行的研究,旨在开发一个定制设计框架,用于设计高效的、面向特定应用的基于TNN的神经形态感觉处理单元(NSPU)。本文回顾了先前针对UCR时间序列聚类和MNIST图像分类应用的NSPU设计工作,并描述了定制设计框架和工具的当前构想:该框架支持高效的软硬件设计流程,可快速探索特定应用NSPU的设计空间,同时利用EDA工具获取布局后网表和功耗-性能-面积(PPA)指标。最后展望了未来的研究方向。

英文摘要 Temporal Neural Networks (TNNs) are spiking neural networks that exhibit brain-like sensory processing with high energy efficiency. This work presents the ongoing research towards developing a custom design framework for designing efficient application-specific TNN-based Neuromorphic Sensory Processing Units (NSPUs). This paper examines previous works on NSPU designs for UCR time-series clustering and MNIST image classification applications. Current ideas for a custom design framework and tools that enable efficient software-to-hardware design flow for rapid design space exploration of application-specific NSPUs while leveraging EDA tools to obtain post-layout netlist and power-performance-area (PPA) metrics are described. Future research directions are also outlined.
邮件日期 2022年05月31日

457、基于推土机距离(EMD)的孪生脉冲神经网络监督训练

  • Supervised Training of Siamese Spiking Neural Networks with Earth Mover's Distance 时间:2022年05月27日 第一作者:Mateusz Pabian 链接.
注释 Revised paper accepted for presentation at 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) DOI: 10.1109/ICASSP43922.2022.9746630
邮件日期 2022年05月30日

456、基于full-FORCE训练的反馈驱动递归脉冲神经网络学习

  • Learning in Feedback-driven Recurrent Spiking Neural Networks using full-FORCE Training 时间:2022年05月26日 第一作者:Ankita Paul 链接.

摘要:反馈驱动的递归脉冲神经网络(RSNN)是能够模拟动态系统的强大计算模型。然而,从读出层到递归层的反馈回路会破坏学习机制的稳定性并阻碍其收敛。在这里,我们提出了一种RSNN的监督训练过程:仅在训练期间引入第二个网络,为目标动态提供提示。所提出的训练过程为递归层和读出层(即整个RSNN系统)生成目标,并使用基于递归最小二乘的FORCE(First-Order and Reduced Control Error)算法使每一层的活动拟合其目标。所提出的full-FORCE训练过程减少了将输出与目标之间的误差保持在接近零所需的修改量;这些修改控制了反馈回路,从而使训练收敛。我们使用带有泄漏积分-发放(LIF)神经元和速率编码的RSNN对8个动态系统进行建模,证明了所提出的full-FORCE训练方法具有更好的性能和噪声鲁棒性。为了实现高能效的硬件实现,我们还为full-FORCE训练过程实现了另一种首脉冲时间(TTFS)编码。与速率编码相比,采用TTFS编码的full-FORCE产生更少的脉冲,并有助于更快地收敛到目标动态。

英文摘要 Feedback-driven recurrent spiking neural networks (RSNNs) are powerful computational models that can mimic dynamical systems. However, the presence of a feedback loop from the readout to the recurrent layer de-stabilizes the learning mechanism and prevents it from converging. Here, we propose a supervised training procedure for RSNNs, where a second network is introduced only during the training, to provide hint for the target dynamics. The proposed training procedure consists of generating targets for both recurrent and readout layers (i.e., for a full RSNN system). It uses the recursive least square-based First-Order and Reduced Control Error (FORCE) algorithm to fit the activity of each layer to its target. The proposed full-FORCE training procedure reduces the amount of modifications needed to keep the error between the output and target close to zero. These modifications control the feedback loop, which causes the training to converge. We demonstrate the improved performance and noise robustness of the proposed full-FORCE training procedure to model 8 dynamical systems using RSNNs with leaky integrate and fire (LIF) neurons and rate coding. For energy-efficient hardware implementation, an alternative time-to-first-spike (TTFS) coding is implemented for the full- FORCE training procedure. Compared to rate coding, full-FORCE with TTFS coding generates fewer spikes and facilitates faster convergence to the target dynamics.
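摘要中FORCE的核心是用递归最小二乘(RLS)在线拟合读出权重。下面给出一个通用FORCE读出更新的单步示意(这是标准FORCE,而非论文的full-FORCE变体——后者还会为递归层生成目标;变量与数值均为示例):

```python
import numpy as np

def force_step(w, P, r, target):
    """One recursive-least-squares (FORCE) update of a linear readout.

    w: readout weights (N,); P: running inverse-correlation estimate (N, N);
    r: firing-rate vector (N,); target: desired scalar output.
    """
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)      # RLS gain vector
    e = w @ r - target           # readout error before the update
    P -= np.outer(k, Pr)         # rank-1 update of P
    w -= e * k                   # move the readout toward the target
    return w, P, e
```

单步之后,读出误差的幅值应当缩小,这正是FORCE"把误差始终压在零附近"的机制。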
注释 Accepted at IJCNN 2022
邮件日期 2022年05月30日

455、无监督STDP训练的二维与三维卷积脉冲神经网络用于人体动作识别

  • 2D versus 3D Convolutional Spiking Neural Networks Trained with Unsupervised STDP for Human Action Recognition 时间:2022年05月26日 第一作者:Mireille El-Assal 链接.

摘要:当前技术的进步突出了视频分析在计算机视觉领域的重要性。然而,使用传统人工神经网络(ANN)进行视频分析的计算成本相当高。脉冲神经网络(SNN)是第三代具有生物合理性的模型,以脉冲的形式处理信息。使用脉冲时间依赖可塑性(STDP)规则的SNN无监督学习有潜力克服常规人工神经网络的一些瓶颈,但基于STDP的SNN仍不成熟,其性能远远落后于ANN。在这项工作中,我们研究了SNN在人体动作识别任务上的性能,因为该任务在计算机视觉中有许多实时应用,例如视频监控。本文介绍了一种用无监督STDP训练的多层3D卷积SNN模型。我们在KTH和Weizmann数据集上将该模型与基于2D STDP的SNN进行了性能比较,并比较了这些模型的单层和多层版本,以便准确评估其性能。我们表明,基于STDP的卷积SNN可以使用3D卷积核学习运动模式,从而实现基于运动的视频识别。最后,我们证明对于基于STDP的SNN,3D卷积优于2D卷积,尤其是在处理长视频序列时。

英文摘要 Current advances in technology have highlighted the importance of video analysis in the domain of computer vision. However, video analysis has considerably high computational costs with traditional artificial neural networks (ANNs). Spiking neural networks (SNNs) are third generation biologically plausible models that process the information in the form of spikes. Unsupervised learning with SNNs using the spike timing dependent plasticity (STDP) rule has the potential to overcome some bottlenecks of regular artificial neural networks, but STDP-based SNNs are still immature and their performance is far behind that of ANNs. In this work, we study the performance of SNNs when challenged with the task of human action recognition, because this task has many real-time applications in computer vision, such as video surveillance. In this paper we introduce a multi-layered 3D convolutional SNN model trained with unsupervised STDP. We compare the performance of this model to those of a 2D STDP-based SNN when challenged with the KTH and Weizmann datasets. We also compare single-layer and multi-layer versions of these models in order to get an accurate assessment of their performance. We show that STDP-based convolutional SNNs can learn motion patterns using 3D kernels, thus enabling motion-based recognition from videos. Finally, we give evidence that 3D convolution is superior to 2D convolution with STDP-based SNNs, especially when dealing with long video sequences.
注释 arXiv admin note: text overlap with arXiv:2105.14740 by other authors
邮件日期 2022年05月27日

454、一点能量可以走很长的路:从卷积神经网络构建一个高效、准确的脉冲神经网络

  • A Little Energy Goes a Long Way: Build an Energy-Efficient, Accurate Spiking Neural Network from Convolutional Neural Network 时间:2022年05月26日 第一作者:Dengyu Wu 链接.
邮件日期 2022年05月27日

453、Spiker:一种用于脉冲神经网络的FPGA优化硬件加速

  • Spiker: an FPGA-optimized Hardware acceleration for Spiking Neural Networks 时间:2022年05月26日 第一作者:Alessio Carpegna 链接.
注释 6 pages, 3 figures, 4 tables
邮件日期 2022年05月27日

452、神经符号大脑

  • The Neuro-Symbolic Brain 时间:2022年05月13日 第一作者:Robert Lizée 链接.

摘要:神经网络促进了一种没有符号明确位置的分布式表示。尽管如此,我们提出,只需在反馈脉冲神经网络中将稀疏随机噪声训练为自持吸引子,就可以"制造"符号。通过这种方式,我们可以生成许多我们称之为"素吸引子"的东西,而支持它们的网络就像保存符号值的寄存器,我们称之为寄存器。与符号一样,素吸引子是原子的,没有任何内部结构。此外,脉冲神经元自然实现的赢家通吃机制使寄存器能够从噪声信号中恢复素吸引子。利用这种能力,当考虑两个相连的寄存器(输入寄存器和输出寄存器)时,可以使用Hebbian规则一次性将输出端活跃的吸引子绑定到输入端活跃的吸引子。因此,每当某个吸引子在输入端活跃时,它就会在输出端诱导出其绑定的吸引子;即使绑定越多信号越模糊,赢家通吃的过滤能力也可以恢复被绑定的素吸引子。然而,容量仍然有限。也可以一次性解除绑定,恢复该绑定所占用的容量。这种机制是工作记忆的基础,将素吸引子转化为变量。此外,我们还使用一个随机二阶网络合并两个寄存器所持有的素吸引子,从而一次性将第三个寄存器所持有的素吸引子绑定到它们,实际上实现了一个哈希表。我们还介绍了由寄存器组成的寄存器开关盒,用于将一个寄存器的内容移动到另一个寄存器。在此基础上,我们利用脉冲神经元构建了一个玩具符号计算机。所使用的技术提示了如何以结构先验为代价,设计可外推、可重用、样本高效的深度学习网络。

英文摘要 Neural networks promote a distributed representation with no clear place for symbols. Despite this, we propose that symbols are manufactured simply by training a sparse random noise as a self-sustaining attractor in a feedback spiking neural network. This way, we can generate many of what we shall call prime attractors, and the networks that support them are like registers holding a symbolic value, and we call them registers. Like symbols, prime attractors are atomic and devoid of any internal structure. Moreover, the winner-take-all mechanism naturally implemented by spiking neurons enables registers to recover a prime attractor within a noisy signal. Using this faculty, when considering two connected registers, an input one and an output one, it is possible to bind in one shot using a Hebbian rule the attractor active on the output to the attractor active on the input. Thus, whenever an attractor is active on the input, it induces its bound attractor on the output; even though the signal gets blurrier with more bindings, the winner-take-all filtering faculty can recover the bound prime attractor. However, the capacity is still limited. It is also possible to unbind in one shot, restoring the capacity taken by that binding. This mechanism serves as a basis for working memory, turning prime attractors into variables. Also, we use a random second-order network to amalgamate the prime attractors held by two registers to bind the prime attractor held by a third register to them in one shot, de facto implementing a hash table. Furthermore, we introduce the register switch box composed of registers to move the content of one register to another. Then, we use spiking neurons to build a toy symbolic computer based on the above. The technics used suggest ways to design extrapolating, reusable, sample-efficient deep learning networks at the cost of structural priors.
注释 32 pages, 11 figures ACM-class: I.2.0; I.2.6
邮件日期 2022年05月27日

451、lpSpikeCon:支持低精度脉冲神经网络处理,实现对自治代理的高效无监督连续学习

  • lpSpikeCon: Enabling Low-Precision Spiking Neural Network Processing for Efficient Unsupervised Continual Learning on Autonomous Agents 时间:2022年05月24日 第一作者:Rachmad Vidya Wicaksana Putra 链接.

摘要:最近的研究表明,得益于生物学上合理的学习规则(例如脉冲时间依赖可塑性,STDP),基于SNN的系统可以高效地执行无监督连续学习。这种学习能力对于需要不断适应动态变化场景/环境的自治代理(如机器人和无人机)等用例尤其有益:直接从环境中收集的新数据可能具有应在线学习的新特征。当前最先进的工作在训练和推理阶段都采用高精度权重(即32位),这带来了较高的内存和能源成本,从而阻碍了此类系统在电池驱动的移动自治系统中的高效嵌入式实现。另一方面,降低精度可能会因信息丢失而损害无监督连续学习的质量。为此,我们提出了lpSpikeCon,一种实现低精度SNN处理、从而在资源受限的自治代理/系统上实现高效无监督连续学习的新方法。我们的lpSpikeCon方法采用以下关键步骤:(1)分析在无监督连续学习设置下以降低的权重精度训练SNN模型对推理精度的影响;(2)利用该研究确定对推理精度有重大影响的SNN参数;(3)开发一种算法,搜索相应的SNN参数值,以提高无监督连续学习的质量。实验结果表明,lpSpikeCon可以将SNN模型的权重内存减少8倍(即,通过明智地采用4位权重)来进行无监督连续学习的在线训练,并且在不同网络规模下,与采用32位权重的基线模型相比,推理阶段没有精度损失。

英文摘要 Recent advances have shown that SNN-based systems can efficiently perform unsupervised continual learning due to their bio-plausible learning rule, e.g., Spike-Timing-Dependent Plasticity (STDP). Such learning capabilities are especially beneficial for use cases like autonomous agents (e.g., robots and UAVs) that need to continuously adapt to dynamically changing scenarios/environments, where new data gathered directly from the environment may have novel features that should be learned online. Current state-of-the-art works employ high-precision weights (i.e., 32 bit) for both training and inference phases, which pose high memory and energy costs thereby hindering efficient embedded implementations of such systems for battery-driven mobile autonomous systems. On the other hand, precision reduction may jeopardize the quality of unsupervised continual learning due to information loss. Towards this, we propose lpSpikeCon, a novel methodology to enable low-precision SNN processing for efficient unsupervised continual learning on resource-constrained autonomous agents/systems. Our lpSpikeCon methodology employs the following key steps: (1) analyzing the impacts of training the SNN model under unsupervised continual learning settings with reduced weight precision on the inference accuracy; (2) leveraging this study to identify SNN parameters that have a significant impact on the inference accuracy; and (3) developing an algorithm for searching the respective SNN parameter values that improve the quality of unsupervised continual learning. The experimental results show that our lpSpikeCon can reduce weight memory of the SNN model by 8x (i.e., by judiciously employing 4-bit weights) for performing online training with unsupervised continual learning and achieve no accuracy loss in the inference phase, as compared to the baseline model with 32-bit weights across different network sizes.
注释 To appear at the 2022 International Joint Conference on Neural Networks (IJCNN), the 2022 IEEE World Congress on Computational Intelligence (WCCI), July 2022, Padova, Italy
邮件日期 2022年05月26日

450、DPSNN:一种差分隐私脉冲神经网络

  • DPSNN: A Differentially Private Spiking Neural Network 时间:2022年05月24日 第一作者:Jihang Wang 链接.

摘要:隐私保护是机器学习算法的一个关键问题。脉冲神经网络(SNN)在图像分类、目标检测、语音识别等许多领域发挥着重要作用,但针对SNN隐私保护的研究仍亟待开展。本研究将差分隐私(DP)算法与SNN相结合,提出了差分隐私脉冲神经网络(DPSNN)。DP向梯度中注入噪声,而SNN以离散的脉冲序列传输信息,因此我们的差分隐私SNN可以在保持强隐私保护的同时确保高精度。我们在MNIST、Fashion-MNIST和人脸识别数据集Extended YaleB上进行了实验。当隐私保护增强时,人工神经网络(ANN)的精度显著下降,而我们的算法性能几乎没有变化。同时,我们分析了影响SNN隐私保护的不同因素:首先,代理梯度越不精确,SNN的隐私保护越好;其次,积分-发放(IF)神经元的表现优于泄漏积分-发放(LIF)神经元;第三,较大的时间窗口更有利于隐私保护和性能。

英文摘要 Privacy-preserving is a key problem for the machine learning algorithm. Spiking neural network (SNN) plays an important role in many domains, such as image classification, object detection, and speech recognition, but the study on the privacy protection of SNN is urgently needed. This study combines the differential privacy (DP) algorithm and SNN and proposes differentially private spiking neural network (DPSNN). DP injects noise into the gradient, and SNN transmits information in discrete spike trains so that our differentially private SNN can maintain strong privacy protection while still ensuring high accuracy. We conducted experiments on MNIST, Fashion-MNIST, and the face recognition dataset Extended YaleB. When the privacy protection is improved, the accuracy of the artificial neural network(ANN) drops significantly, but our algorithm shows little change in performance. Meanwhile, we analyzed different factors that affect the privacy protection of SNN. Firstly, the less precise the surrogate gradient is, the better the privacy protection of the SNN. Secondly, the Integrate-And-Fire (IF) neurons perform better than leaky Integrate-And-Fire (LIF) neurons. Thirdly, a large time window contributes more to privacy protection and performance.
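"DP向梯度中注入噪声"通常指DP-SGD式的机制:逐样本梯度裁剪后取均值,再加高斯噪声。下面是该机制的通用示意(与论文结合SNN代理梯度的具体做法无关,所有超参数均为占位值):

```python
import numpy as np

def dp_sgd_step(w, grads, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD-style step: clip per-example gradients to norm `clip`,
    average them, then add Gaussian noise scaled by `noise_mult`.
    Generic sketch of the DP mechanism; hyperparameters are placeholders.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in grads:                                   # per-example gradients
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip / max(norm, 1e-12)))
    noisy = np.mean(clipped, axis=0) + rng.normal(
        0.0, noise_mult * clip / len(grads), size=w.shape)
    return w - lr * noisy
```

裁剪限制了单个样本对更新的影响,噪声则掩盖其余信息;二者共同给出差分隐私保证。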
注释 12 pages, 6 figures
邮件日期 2022年05月26日

449、一种用于脉冲分类(Spike Sorting)的自适应对比学习模型

  • An Adaptive Contrastive Learning Model for Spike Sorting 时间:2022年05月24日 第一作者:Lang Qian 链接.

摘要:脑机接口(BCI)是电子设备与大脑直接通信的途径。对于大多数医疗类脑机接口任务,多个神经元单元的活动或局部场电位足以用于解码;但对于用于神经科学研究的BCI来说,分离出单个神经元的活动非常重要。随着大规模硅技术的发展和探针通道数量的增加,人工解释和标记脉冲变得越来越不切实际。本文提出了一种新的建模框架——自适应对比学习模型,该模型以最大化互信息损失函数为理论基础,通过对比学习从脉冲中学习表示。其依据是:无论是多分类还是二分类,具有相似特征的数据共享相同的标签。在这一理论支持下,我们将多分类问题简化为多个二分类问题,同时提高了准确率和运行效率。此外,我们还对脉冲引入了一系列增强,并解决了脉冲重叠影响分类效果的问题。

英文摘要 Brain-computer interfaces (BCIs), is ways for electronic devices to communicate directly with the brain. For most medical-type brain-computer interface tasks, the activity of multiple units of neurons or local field potentials is sufficient for decoding. But for BCIs used in neuroscience research, it is important to separate out the activity of individual neurons. With the development of large-scale silicon technology and the increasing number of probe channels, artificially interpreting and labeling spikes is becoming increasingly impractical. In this paper, we propose a novel modeling framework: Adaptive Contrastive Learning Model that learns representations from spikes through contrastive learning based on the maximizing mutual information loss function as a theoretical basis. Based on the fact that data with similar features share the same labels whether they are multi-classified or binary-classified. With this theoretical support, we simplify the multi-classification problem into multiple binary-classification, improving both the accuracy and the runtime efficiency. Moreover, we also introduce a series of enhancements for the spikes, while solving the problem that the classification effect is affected because of the overlapping spikes.
邮件日期 2022年05月25日

448、基于Hebbian可塑性的脉冲神经网络的记忆丰富计算与学习

  • Memory-enriched computation and learning in spiking neural networks through Hebbian plasticity 时间:2022年05月23日 第一作者:Thomas Limbacher 链接.

摘要:记忆是生物神经系统的关键组成部分,它能够在从数百毫秒到数年的巨大时间尺度上保留信息。虽然Hebbian可塑性被认为在生物记忆中起着关键作用,但迄今为止,它主要是在模式补全和无监督学习的背景下被分析。在这里,我们提出Hebbian可塑性是生物神经系统计算的基础。我们介绍了一种由Hebbian突触可塑性增强的新型脉冲神经网络结构,并表明这种Hebbian增强使脉冲神经网络在计算和学习能力方面具有惊人的通用性:它提高了其分布外泛化、单样本学习、跨模态生成联想、语言处理和基于奖励的学习能力。由于脉冲神经网络是高能效神经形态硬件的基础,这也表明可以基于该原理构建强大的认知神经形态系统。

英文摘要 Memory is a key component of biological neural systems that enables the retention of information over a huge range of temporal scales, ranging from hundreds of milliseconds up to years. While Hebbian plasticity is believed to play a pivotal role in biological memory, it has so far been analyzed mostly in the context of pattern completion and unsupervised learning. Here, we propose that Hebbian plasticity is fundamental for computations in biological neural systems. We introduce a novel spiking neural network architecture that is enriched by Hebbian synaptic plasticity. We show that Hebbian enrichment renders spiking neural networks surprisingly versatile in terms of their computational as well as learning capabilities. It improves their abilities for out-of-distribution generalization, one-shot learning, cross-modal generative association, language processing, and reward-based learning. As spiking neural networks are the basis for energy-efficient neuromorphic hardware, this also suggests that powerful cognitive neuromorphic systems can be build based on this principle.
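摘要所述"以Hebbian可塑性承载记忆"的最小版本,是用键、值模式的外积写入权重矩阵,之后呈现键即可读出值。下面是这一思想的极简示意(远比论文的脉冲网络简单;所有模式均为假设示例):

```python
import numpy as np

def hebbian_store(W, key, value, lr=1.0):
    """Hebbian association: strengthen weights by the outer product of a
    value pattern and a key pattern, so presenting the key later recalls
    the value. A minimal plasticity-as-memory sketch.
    """
    return W + lr * np.outer(value, key)

def hebbian_recall(W, key):
    """Read out the stored pattern by thresholding W @ key."""
    return (W @ key > 0).astype(int)
```

一次外积写入即可完成绑定,这也是摘要中"单样本学习"与联想能力的直观来源。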
邮件日期 2022年05月24日

447、PrivateSNN:保护隐私的脉冲神经网络

  • PrivateSNN: Privacy-Preserving Spiking Neural Networks 时间:2022年05月21日 第一作者:Youngeun Kim 链接.
注释 Accepted to AAAI2022
邮件日期 2022年05月24日

446、脉冲神经网络的最新进展和新前沿

  • Recent Advances and New Frontiers in Spiking Neural Networks 时间:2022年05月21日 第一作者:Duzhen Zhang 链接.
注释 Accepted at IJCAI2022
邮件日期 2022年05月24日

445、使用生物学上合理的脉冲延迟编码和赢家通吃抑制高效表示视觉对象

  • Efficient visual object representation using a biologically plausible spike-latency code and winner-take-all inhibition 时间:2022年05月20日 第一作者:Melani Sanchez-Garcia 链接.

摘要:深度神经网络在物体识别等关键视觉挑战上已经超越了人类的表现,但需要大量的能量、计算和内存。相比之下,脉冲神经网络(SNN)有潜力提高目标识别系统的效率和生物合理性。在这里,我们提出了一个SNN模型,该模型使用脉冲延迟编码和赢家通吃抑制(WTA-I)来高效表示来自Fashion-MNIST数据集的视觉刺激。刺激先用中心-周围感受野进行预处理,然后馈入一层脉冲神经元,其突触权重通过脉冲时间依赖可塑性(STDP)进行更新。我们研究了在不同WTA-I方案下被表示物体的质量如何变化,并证明由150个脉冲神经元组成的网络仅用40个脉冲就能高效表示物体。研究如何在SNN中使用生物学上合理的学习规则实现核心物体识别,不仅可以加深我们对大脑的理解,还可能带来新颖高效的人工视觉系统。

英文摘要 Deep neural networks have surpassed human performance in key visual challenges such as object recognition, but require a large amount of energy, computation, and memory. In contrast, spiking neural networks (SNNs) have the potential to improve both the efficiency and biological plausibility of object recognition systems. Here we present a SNN model that uses spike-latency coding and winner-take-all inhibition (WTA-I) to efficiently represent visual stimuli from the Fashion MNIST dataset. Stimuli were preprocessed with center-surround receptive fields and then fed to a layer of spiking neurons whose synaptic weights were updated using spike-timing-dependent-plasticity (STDP). We investigate how the quality of the represented objects changes under different WTA-I schemes and demonstrate that a network of 150 spiking neurons can efficiently represent objects with as little as 40 spikes. Studying how core object recognition may be implemented using biologically plausible learning rules in SNNs may not only further our understanding of the brain, but also lead to novel and efficient artificial vision systems.
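脉冲延迟编码的要点是"越强的输入越早发放";配合WTA-I只保留最早的少量脉冲,即可用极少脉冲表示物体。下面用一个线性延迟映射示意(具体映射与抑制方式为假设,论文另有细节):

```python
import numpy as np

def latency_encode(pixels, t_max=20.0):
    """Spike-latency code: stronger pixels fire earlier; zero pixels stay silent.
    pixels: array in [0, 1]; returns spike latencies (np.inf = no spike)."""
    pixels = np.asarray(pixels, dtype=float)
    lat = np.full(pixels.shape, np.inf)
    active = pixels > 0
    lat[active] = t_max * (1.0 - pixels[active])   # intensity 1 -> latency 0
    return lat

def wta_inhibit(latencies, k):
    """Winner-take-all inhibition: keep only the k earliest spikes."""
    lat = latencies.copy()
    order = np.argsort(lat)
    lat[order[k:]] = np.inf                        # suppress later spikes
    return lat
```

抑制之后网络只保留信息量最大的早期脉冲,这正是摘要中"40个脉冲表示一个物体"的机制所在。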
邮件日期 2022年05月23日

444、EXODUS:稳定高效的脉冲神经网络训练

  • EXODUS: Stable and Efficient Training of Spiking Neural Networks 时间:2022年05月20日 第一作者:Felix Christian Bauer (1) 链接.

摘要:在能量效率至关重要的机器学习任务中,脉冲神经网络(SNN)正获得巨大关注。然而,使用最先进的时间反向传播(BPTT)训练此类网络非常耗时。Shrestha和Orchard(2018)之前的工作采用了一种称为SLAYER的高效GPU加速反向传播算法,大大加快了训练速度。然而,SLAYER在计算梯度时没有考虑神经元重置机制,我们认为这是数值不稳定性的来源。为了解决这个问题,SLAYER引入了跨层的梯度缩放超参数,需要手动调整。在本文中,(i)我们修改了SLAYER并设计了一种称为EXODUS的算法,该算法考虑了神经元重置机制,并应用隐函数定理(IFT)计算正确的梯度(等价于BPTT计算的梯度);(ii)我们消除了对梯度进行特别缩放的需要,从而大大降低了训练复杂性;(iii)我们通过计算机仿真证明,EXODUS在数值上是稳定的,并且取得了与SLAYER相当或更好的性能,尤其是在依赖时间特征的各种SNN任务中。我们的代码位于https://github.com/synsense/sinabs-exodus。

英文摘要 Spiking Neural Networks (SNNs) are gaining significant traction in machine learning tasks where energy-efficiency is of utmost importance. Training such networks using the state-of-the-art back-propagation through time (BPTT) is, however, very time-consuming. Previous work by Shrestha and Orchard [2018] employs an efficient GPU-accelerated back-propagation algorithm called SLAYER, which speeds up training considerably. SLAYER, however, does not take into account the neuron reset mechanism while computing the gradients, which we argue to be the source of numerical instability. To counteract this, SLAYER introduces a gradient scale hyperparameter across layers, which needs manual tuning. In this paper, (i) we modify SLAYER and design an algorithm called EXODUS, that accounts for the neuron reset mechanism and applies the Implicit Function Theorem (IFT) to calculate the correct gradients (equivalent to those computed by BPTT), (ii) we eliminate the need for ad-hoc scaling of gradients, thus, reducing the training complexity tremendously, (iii) we demonstrate, via computer simulations, that EXODUS is numerically stable and achieves a comparable or better performance than SLAYER especially in various tasks with SNNs that rely on temporal features. Our code is available at https://github.com/synsense/sinabs-exodus.
邮件日期 2022年05月23日

443、Spikemax:基于脉冲的分类损失方法

  • Spikemax: Spike-based Loss Methods for Classification 时间:2022年05月19日 第一作者:Sumit Bam Shrestha 链接.

摘要:脉冲神经网络(SNN)是低功耗边缘计算的一种很有前景的研究范式。最近在SNN反向传播方面的工作已使SNN能够针对实际任务进行训练。然而,由于脉冲是时间上的二元事件,标准损失公式与脉冲输出并不直接兼容,因此当前的工作仅限于使用脉冲计数的均方损失。在本文中,我们从脉冲计数度量推导出输出概率解释,并引入基于脉冲的负对数似然度量,它更适合于分类任务,尤其是在能量效率和推理延迟方面。我们将我们的损失度量与其他现有替代方案进行比较,并在三个神经形态基准数据集(NMNIST、DVS Gesture和N-TIDIGITS18)上用分类性能进行评估。此外,我们在这些数据集上展示了最先进的性能,实现了更快的推理速度和更低的能耗。

英文摘要 Spiking Neural Networks~(SNNs) are a promising research paradigm for low power edge-based computing. Recent works in SNN backpropagation has enabled training of SNNs for practical tasks. However, since spikes are binary events in time, standard loss formulations are not directly compatible with spike output. As a result, current works are limited to using mean-squared loss of spike count. In this paper, we formulate the output probability interpretation from the spike count measure and introduce spike-based negative log-likelihood measure which are more suited for classification tasks especially in terms of the energy efficiency and inference latency. We compare our loss measures with other existing alternatives and evaluate using classification performances on three neuromorphic benchmark datasets: NMNIST, DVS Gesture and N-TIDIGITS18. In addition, we demonstrate state of the art performances on these datasets, achieving faster inference speed and less energy consumption.
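"从脉冲计数度量推导输出概率解释"的一种可行读法,是对各输出神经元的脉冲计数做softmax后取真实类别的负对数似然(仅为示意,论文的具体公式可能不同):

```python
import numpy as np

def spikemax_nll(counts, label):
    """Sketch of a spike-based loss: turn per-class output spike counts
    into class probabilities via a softmax, then take the negative
    log-likelihood of the true class."""
    counts = np.asarray(counts, dtype=float)
    z = counts - counts.max()          # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()    # spike counts -> probabilities
    return -np.log(p[label])
```

当正确类别的输出神经元发放最多时,损失最小;这使得梯度直接鼓励"正确类多发放、错误类少发放"。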
注释 Accepted by IJCNN 2022
邮件日期 2022年05月23日

442、用于图像识别的时间神经形态编码器脉冲间间隔的设计与数学建模

  • Design and Mathematical Modelling of Inter Spike Interval of Temporal Neuromorphic Encoder for Image Recognition 时间:2022年05月19日 第一作者:Aadhitiya VS 链接.

摘要:神经形态计算系统使用混合模式模拟或数字VLSI电路模拟生物神经系统的电生理行为。这些系统在执行认知任务时显示出卓越的准确性和能效。神经形态计算系统采用的神经网络架构是类似于生物神经系统的脉冲神经网络(SNN)。SNN在作为时间函数的脉冲序列上运行,而神经形态编码器将感觉数据转换为脉冲序列。本文实现了一种用于图像处理的低功耗神经形态编码器,并建立了图像像素与脉冲间间隔(inter-spike interval)之间的数学模型,得到了像素和脉冲间间隔之间的指数关系。最后,通过电路仿真对数学方程进行了验证。

英文摘要 Neuromorphic computing systems emulate the electrophysiological behavior of the biological nervous system using mixed-mode analog or digital VLSI circuits. These systems show superior accuracy and power efficiency in carrying out cognitive tasks. The neural network architecture used in neuromorphic computing systems is spiking neural networks (SNNs) analogous to the biological nervous system. SNN operates on spike trains as a function of time. A neuromorphic encoder converts sensory data into spike trains. In this paper, a low-power neuromorphic encoder for image processing is implemented. A mathematical model between pixels of an image and the inter-spike intervals is also formulated. Wherein an exponential relationship between pixels and inter-spike intervals is obtained. Finally, the mathematical equation is validated with circuit simulation.
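"像素与脉冲间间隔之间的指数关系"可以如下示意:像素越亮,间隔越短(常数与具体函数形式为假设,论文由电路行为推导其方程):

```python
import numpy as np

def pixel_to_isi(pixel, isi_min=1.0, isi_max=50.0):
    """Map pixel intensity (0..255) to an inter-spike interval with an
    exponential relationship: brighter pixels -> shorter intervals.
    Chosen so pixel 0 gives isi_max and pixel 255 gives isi_min."""
    pixel = np.asarray(pixel, dtype=float)
    tau = 255.0 / np.log(isi_max / isi_min)    # decay constant from endpoints
    return isi_max * np.exp(-pixel / tau)
```

这样最暗像素对应最长间隔(发放最稀疏),最亮像素对应最短间隔(发放最密集)。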
注释 4 pages, 6 figures, one table, IEEE ICEE 2020 conference proceeding
邮件日期 2022年05月20日

441、基于脉冲序列的关系表示学习

  • Relational representation learning with spike trains 时间:2022年05月18日 第一作者:Dominik Dold 链接.

摘要:关系表示学习最近受到越来越多的关注,因为它可以灵活地建模各种系统,如相互作用的粒子、材料和工业项目(例如航天器的设计)。处理关系数据的一种突出方法是知识图谱嵌入算法,其中知识图谱的实体和关系被映射到低维向量空间,同时保留其语义结构。最近,有人提出了一种将图元素映射到脉冲神经网络时域的图嵌入方法,但它依赖于用只发放一次脉冲的神经元群体来编码图元素。在这里,我们提出了一个模型,通过充分利用脉冲模式的时域来学习基于脉冲序列的知识图谱嵌入,每个图元素只需要一个神经元。只要可以计算关于脉冲时间的梯度,这种编码方案就可以在任意脉冲神经元模型上实现,我们在积分-发放(integrate-and-fire)神经元模型上对此进行了演示。总的来说,本文的结果展示了如何将关系知识集成到基于脉冲的系统中,为融合基于事件的计算和关系数据以构建强大且高能效的人工智能应用和推理系统开辟了可能性。

英文摘要 Relational representation learning has lately received an increase in interest due to its flexibility in modeling a variety of systems like interacting particles, materials and industrial projects for, e.g., the design of spacecraft. A prominent method for dealing with relational data are knowledge graph embedding algorithms, where entities and relations of a knowledge graph are mapped to a low-dimensional vector space while preserving its semantic structure. Recently, a graph embedding method has been proposed that maps graph elements to the temporal domain of spiking neural networks. However, it relies on encoding graph elements through populations of neurons that only spike once. Here, we present a model that allows us to learn spike train-based embeddings of knowledge graphs, requiring only one neuron per graph element by fully utilizing the temporal domain of spike patterns. This coding scheme can be implemented with arbitrary spiking neuron models as long as gradients with respect to spike times can be calculated, which we demonstrate for the integrate-and-fire neuron model. In general, the presented results show how relational knowledge can be integrated into spike-based systems, opening up the possibility of merging event-based computing and relational data to build powerful and energy efficient artificial intelligence applications and reasoning systems.
注释 Accepted for publication at the WCCI 2022 (IJCNN)
邮件日期 2022年05月20日

440、使用脉冲深度网的函数回归

  • Function Regression using Spiking DeepONet 时间:2022年05月17日 第一作者:Adar Kahana 链接.

摘要:函数回归是深度学习的主要广泛应用之一。然而,尽管现代神经网络架构具有已证明的准确性和鲁棒性,但其训练需要大量计算资源。缓解甚至解决这种低效的一种方法是进一步从大脑中汲取灵感,以更具生物合理性的方式重新构建学习过程,即近年来日益受到关注的脉冲神经网络(SNN)。在本文中,我们提出了一种基于SNN的回归方法;这是一个挑战,因为将函数的输入域和连续输出值表示为脉冲存在固有困难。我们使用DeepONet(一种为学习算子而设计的神经网络)来学习脉冲的行为,然后用这种方法进行函数回归。我们提出了几种在脉冲框架中使用DeepONet的方法,并给出了不同基准上的精度和训练时间。

英文摘要 One of the main broad applications of deep learning is function regression. However, despite their demonstrated accuracy and robustness, modern neural network architectures require heavy computational resources to train. One method to mitigate or even resolve this inefficiency has been to draw further inspiration from the brain and reformulate the learning process in a more biologically-plausible way, developing what are known as Spiking Neural Networks (SNNs), which have been gaining traction in recent years. In this paper we present an SNN-based method to perform regression, which has been a challenge due to the inherent difficulty in representing a function's input domain and continuous output values as spikes. We use a DeepONet - neural network designed to learn operators - to learn the behavior of spikes. Then, we use this approach to do function regression. We propose several methods to use a DeepONet in the spiking framework, and present accuracy and training time for different benchmarks.
注释 15 pages, 5 figures and 4 tables
邮件日期 2022年05月23日

439、基于双相位优化的超低延迟无损ANN-SNN转换

  • Towards Lossless ANN-SNN Conversion under Ultra-Low Latency with Dual-Phase Optimization 时间:2022年05月16日 第一作者:Ziming Wang 链接.

摘要:以异步离散事件运行的脉冲神经网络(SNN)具有更高的能量效率。实现深层SNN的一种流行方法是ANN-SNN转换,它结合了ANN中的高效训练和SNN中的高效推理。然而,以前的工作大多需要数千个时间步才能实现无损转换。在本文中,我们首先确定了其根本原因,即SNN中对负的或溢出的剩余膜电位的错误表示。此外,我们系统地分析了SNN和ANN之间的转换误差,并将其分解为三个部分:量化误差、裁剪误差和剩余膜电位表示误差。基于这些见解,我们提出了一种双相位转换算法来最小化这些误差。结果,我们的模型在深层架构(ResNet和VGG网络)上在精度以及精度-延迟权衡方面均达到SOTA。具体而言,与最新结果相比,我们在16倍加速下报告了SOTA精度;同时,无损转换的推理速度至少快2倍。

英文摘要 Spiking neural network (SNN) operating with asynchronous discrete events shows higher energy efficiency. A popular approach to implement deep SNNs is ANN-SNN conversion combining both efficient training in ANNs and efficient inference in SNNs. However, the previous works mostly required thousands of time steps to achieve lossless conversion. In this paper, we first identify the underlying cause, i.e., misrepresentation of the negative or overflow residual membrane potential in SNNs. Furthermore, we systematically analyze the conversion error between SNNs and ANNs, and then decompose it into three folds: quantization error, clipping error, and residual membrane potential representation error. With such insights, we propose a dual-phase conversion algorithm to minimize those errors. As a result, our model achieves SOTA in both accuracy and accuracy-delay tradeoff with deep architectures (ResNet and VGG net). Specifically, we report SOTA accuracy within 16$\times$ speedup compared with the latest results. Meanwhile, lossless conversion is performed with at least 2$\times$ faster reasoning performance.
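摘要中的量化误差与裁剪误差可以用一个常数输入的IF神经元直观复现:其平均发放率只能取v_th/T的整数倍(量化),且上限为v_th(裁剪),因此只能近似ReLU。示意如下(软重置IF,仅为说明,并非论文的算法本身):

```python
def if_rate(x, T, v_th=1.0):
    """Simulate an integrate-and-fire neuron with constant input x for
    T steps (soft reset) and return its average output v_th * spikes / T.
    The output is quantised to multiples of v_th/T and clipped at v_th,
    which is exactly the quantisation/clipping error of ANN-SNN conversion.
    """
    v, spikes = 0.0, 0
    for _ in range(T):
        v += x                  # integrate the constant input
        if v >= v_th:
            v -= v_th           # soft reset keeps the residual potential
            spikes += 1
    return v_th * spikes / T
```

例如T=4时,输入0.3只能被表示为0.25(量化误差),而输入2.0被裁剪到1.0;增大T可缩小量化误差,这就是早期转换方法需要数千时间步的原因。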
邮件日期 2022年05月17日

438、近似符号一致随机反向传播的皮质微电路计算框架

  • A Computational Framework of Cortical Microcircuits Approximates Sign-concordant Random Backpropagation 时间:2022年05月15日 第一作者:Yukun Yang 链接.

摘要:最近的几项研究试图解决著名的反向传播(BP)方法在生物学上的不合理性。虽然反馈对齐、直接反馈对齐及其变体(如符号一致反馈对齐)等有前景的方法解决了BP的权重传输问题,但由于一系列其他未解决的问题,其有效性仍然存在争议。在这项工作中,我们回答了是否有可能仅基于神经科学中观察到的机制实现随机反向传播这一问题。我们提出了一个由新的微电路体系结构及其配套Hebbian学习规则组成的假设框架。该微电路结构由三种类型的细胞和两种类型的突触连接组成,通过局部反馈连接计算和传播误差信号,并支持用全局定义的脉冲误差函数训练多层脉冲神经网络。我们利用在局部隔室中运行的Hebbian规则更新突触权重,以生物学上合理的方式实现监督学习。最后,我们从优化的角度解释了所提出的框架,并证明了它与符号一致反馈对齐的等价性。该框架在包括MNIST和CIFAR10在内的多个数据集上进行了基准测试,展示了可与BP媲美的精度。

英文摘要 Several recent studies attempt to address the biological implausibility of the well-known backpropagation (BP) method. While promising methods such as feedback alignment, direct feedback alignment, and their variants like sign-concordant feedback alignment tackle BP's weight transport problem, their validity remains controversial owing to a set of other unsolved issues. In this work, we answer the question of whether it is possible to realize random backpropagation solely based on mechanisms observed in neuroscience. We propose a hypothetical framework consisting of a new microcircuit architecture and its supporting Hebbian learning rules. Comprising three types of cells and two types of synaptic connectivity, the proposed microcircuit architecture computes and propagates error signals through local feedback connections and supports the training of multi-layered spiking neural networks with a globally defined spiking error function. We employ the Hebbian rule operating in local compartments to update synaptic weights and achieve supervised learning in a biologically plausible manner. Finally, we interpret the proposed framework from an optimization point of view and show its equivalence to sign-concordant feedback alignment. The proposed framework is benchmarked on several datasets including MNIST and CIFAR10, demonstrating promising BP-comparable accuracy.
邮件日期 2022年05月17日

437、基于梯度重加权的脉冲神经网络时间有效训练

  • Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting 时间:2022年05月15日 第一作者:Shikuang Deng 链接.
注释 Published as a conference paper at ICLR 2022
邮件日期 2022年05月17日

436、深SNN中MaxPooling操作的脉冲近似

  • Spiking Approximations of the MaxPooling Operation in Deep SNNs 时间:2022年05月14日 第一作者:Ramashish Gaurav 链接.

摘要:脉冲神经网络(SNN)是一个新兴的受生物启发的神经网络领域,已显示出低功耗AI的前景。构建深层SNN的方法有很多,其中人工神经网络(ANN)到SNN的转换非常成功。卷积神经网络(CNN)中的MaxPooling层是对中间特征图进行下采样并引入平移不变性的组成部分,但由于缺乏硬件友好的脉冲等价物,限制了此类CNN向深层SNN的转换。在本文中,我们提出了两种硬件友好的方法在深层SNN中实现MaxPooling,从而便于将具有MaxPooling层的CNN轻松转换为SNN。我们还首次在Intel的Loihi神经形态硬件上(使用MNIST、FMNIST和CIFAR10数据集)运行了具有脉冲MaxPooling层的SNN,从而展示了我们方法的可行性。

英文摘要 Spiking Neural Networks (SNNs) are an emerging domain of biologically inspired neural networks that have shown promise for low-power AI. A number of methods exist for building deep SNNs, with Artificial Neural Network (ANN)-to-SNN conversion being highly successful. MaxPooling layers in Convolutional Neural Networks (CNNs) are an integral component to downsample the intermediate feature maps and introduce translational invariance, but the absence of their hardware-friendly spiking equivalents limits such CNNs' conversion to deep SNNs. In this paper, we present two hardware-friendly methods to implement Max-Pooling in deep SNNs, thus facilitating easy conversion of CNNs with MaxPooling layers to SNNs. In a first, we also execute SNNs with spiking-MaxPooling layers on Intel's Loihi neuromorphic hardware (with MNIST, FMNIST, & CIFAR10 dataset); thus, showing the feasibility of our approach.
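对二值脉冲图而言,窗口内的最大值退化为脉冲的逻辑或(OR),这是MaxPooling可以做到硬件友好的一种直观解释(仅为示意;论文提出的两种方法另有细节):

```python
import numpy as np

def spiking_maxpool(spikes, k=2):
    """Max-pool a binary spike map with non-overlapping k x k windows.
    For binary spikes, max over a window equals a logical OR of the
    spikes in that window, which is cheap to realise in spiking hardware.
    spikes: (H, W) array of 0/1 values."""
    h, w = spikes.shape
    s = spikes[:h - h % k, :w - w % k]          # drop ragged edges
    s = s.reshape(h // k, k, w // k, k)
    return s.max(axis=(1, 3))                   # OR within each window
```

每个时间步对脉冲图做一次这样的OR池化,即可得到下采样后的脉冲流。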
注释 Accepted in IJCNN-2022
邮件日期 2022年05月17日
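The abstract above does not detail the two proposed methods, but a common way to approximate MaxPooling in a rate-coded SNN is to gate through only the spikes of the pooling window's most active neuron. Below is a minimal, hypothetical numpy sketch of that rate-based idea; it is an illustration of the general technique, not necessarily the paper's mechanism.

```python
import numpy as np

def rate_based_max_pool(spike_trains):
    """Approximate MaxPooling over a window of spiking neurons by forwarding
    only the spikes of the neuron with the highest running spike count.
    spike_trains: binary array of shape (T, n), one column per neuron."""
    T, n = spike_trains.shape
    counts = np.zeros(n)
    out = np.zeros(T, dtype=int)
    for t in range(T):
        counts += spike_trains[t]
        winner = int(np.argmax(counts))   # neuron with max cumulative rate
        out[t] = spike_trains[t, winner]  # gate only the winner's spike through
    return out

rng = np.random.default_rng(0)
# four neurons with different firing probabilities per step
trains = (rng.random((100, 4)) < np.array([0.1, 0.5, 0.2, 0.3])).astype(int)
pooled = rate_based_max_pool(trains)
print(pooled.sum())  # output rate tracks the most active input neuron
```

The gating keeps the output a binary spike train, which is what makes such schemes hardware-friendly compared to computing a true max over analog rates.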

435、SpiNNaker上海马CA3区生物启发记忆的基于脉冲的计算模型

  • Spike-based computational models of bio-inspired memories in the hippocampal CA3 region on SpiNNaker 时间:2022年05月10日 第一作者:Daniel Casanueva-Morato 链接.

摘要:人脑是当今存在的最强大、最高效的机器，在许多方面超过了现代计算机的能力。目前，神经形态工程学的研究领域正在尝试开发模拟大脑功能的硬件，以获得这些卓越的能力。其中一个仍在发展中的领域是仿生记忆的设计，海马体在其中发挥着重要作用。大脑的这一区域起着短期记忆的作用，能够存储大脑中不同感觉流的信息关联，并在以后回忆它们。这要归功于构成海马主要亚区CA3的循环侧支网络结构。在这项工作中，我们开发了两个基于脉冲的全功能海马仿生记忆计算模型，使用脉冲神经网络在SpiNNaker硬件平台上实现复杂模式的存储和回忆。这些模型呈现了不同层次的生物抽象:第一个模型具有更接近生物模型的恒定振荡活动，第二个模型具有节能的调节活动，虽然仍受生物启发，但选择了更具功能性的方法。为了测试它们的学习/回忆能力，对每个模型进行了不同的实验。对所提出模型的功能性和生物学合理性进行了全面比较，显示了它们的优缺点。这两种模型可供研究人员公开使用，可为未来基于脉冲的实现和应用铺平道路。

英文摘要 The human brain is the most powerful and efficient machine in existence today, surpassing in many ways the capabilities of modern computers. Currently, lines of research in neuromorphic engineering are trying to develop hardware that mimics the functioning of the brain to acquire these superior capabilities. One of the areas still under development is the design of bio-inspired memories, where the hippocampus plays an important role. This region of the brain acts as a short-term memory with the ability to store associations of information from different sensory streams in the brain and recall them later. This is possible thanks to the recurrent collateral network architecture that constitutes CA3, the main sub-region of the hippocampus. In this work, we developed two spike-based computational models of fully functional hippocampal bio-inspired memories for the storage and recall of complex patterns implemented with spiking neural networks on the SpiNNaker hardware platform. These models present different levels of biological abstraction, with the first model having a constant oscillatory activity closer to the biological model, and the second one having an energy-efficient regulated activity, which, although it is still bio-inspired, opts for a more functional approach. Different experiments were performed for each of the models, in order to test their learning/recalling capabilities. A comprehensive comparison between the functionality and the biological plausibility of the presented models was carried out, showing their strengths and weaknesses. The two models, which are publicly available for researchers, could pave the way for future spike-based implementations and applications.
注释 9 pages, 6 figures, 1 table, conference, IJCNN 2022, accepted for publication
邮件日期 2022年05月11日
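The CA3 storage-and-recall principle described above is, at its core, an auto-associative memory over a recurrent collateral network. As a rough illustration of that principle only, here is a classical Hopfield-style rate model with Hebbian outer-product storage; it is not the paper's spiking SpiNNaker implementation.

```python
import numpy as np

# Two orthogonal +-1 patterns stored via the Hebbian outer product.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1, -1, -1]])
W = patterns.T @ patterns
np.fill_diagonal(W, 0)  # no self-connections

def recall(cue, steps=5):
    """Iterate the recurrent threshold dynamics until the state settles."""
    state = cue.copy()
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

noisy = patterns[0].copy()
noisy[0] *= -1                      # corrupt one bit of the stored pattern
print(np.array_equal(recall(noisy), patterns[0]))  # True: pattern restored
```

Pattern completion from a degraded cue is exactly the functional role the abstract attributes to the recurrent collaterals of CA3.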

434、基于梯度重加权的脉冲神经网络时间有效训练

  • Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting 时间:2022年05月10日 第一作者:Shikuang Deng 链接.
注释 Published as a conference paper at ICLR 2022
邮件日期 2022年05月11日

433、基于脉冲神经网络的汽车事件数据目标检测

  • Object Detection with Spiking Neural Networks on Automotive Event Data 时间:2022年05月09日 第一作者:Loïc Cordone 链接.

摘要:汽车嵌入式算法在延迟、准确性和功耗方面有很高的限制。在这项工作中，我们建议直接对来自事件相机的数据训练脉冲神经网络(SNN)，以设计快速高效的汽车嵌入式应用。事实上，SNN是一种更具生物真实感的神经网络，其中神经元使用离散和异步脉冲进行通信，这是一种天然节能且对硬件友好的操作模式。因此，在空间和时间上都是二值且稀疏的事件数据是脉冲神经网络的理想输入。但到目前为止，它们的性能还不足以解决汽车实际问题，例如在不受控制的环境中检测复杂物体。为了解决这个问题，我们利用脉冲反向传播方面的最新进展(替代梯度学习、参数化LIF、SpikingJelly框架)和我们新的体素立方体(voxel cube)事件编码，基于流行的深度学习网络训练了4种不同的SNN:SqueezeNet、VGG、MobileNet和DenseNet。因此，我们设法增加了文献中通常考虑的SNN的规模和复杂性。在本文中，我们在两个汽车事件数据集上进行了实验，为脉冲神经网络建立了最新的分类结果。基于这些结果，我们将SNN与SSD相结合，提出了第一个能够在复杂的GEN1汽车检测事件数据集上执行目标检测的脉冲神经网络。

英文摘要 Automotive embedded algorithms have very high constraints in terms of latency, accuracy and power consumption. In this work, we propose to train spiking neural networks (SNNs) directly on data coming from event cameras to design fast and efficient automotive embedded applications. Indeed, SNNs are more biologically realistic neural networks where neurons communicate using discrete and asynchronous spikes, a naturally energy-efficient and hardware friendly operating mode. Event data, which are binary and sparse in space and time, are therefore the ideal input for spiking neural networks. But to date, their performance was insufficient for automotive real-world problems, such as detecting complex objects in an uncontrolled environment. To address this issue, we took advantage of the latest advancements in matter of spike backpropagation - surrogate gradient learning, parametric LIF, SpikingJelly framework - and of our new \textit{voxel cube} event encoding to train 4 different SNNs based on popular deep learning networks: SqueezeNet, VGG, MobileNet, and DenseNet. As a result, we managed to increase the size and the complexity of SNNs usually considered in the literature. In this paper, we conducted experiments on two automotive event datasets, establishing new state-of-the-art classification results for spiking neural networks. Based on these results, we combined our SNNs with SSD to propose the first spiking neural networks capable of performing object detection on the complex GEN1 Automotive Detection event dataset.
注释 Accepted to the International Joint Conference on Neural Networks (IJCNN) 2022
邮件日期 2022年05月10日
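Event-camera input like that used above is typically binned into a dense tensor before being fed to an SNN. The sketch below shows plain time-binned accumulation of (t, x, y, polarity) events; the paper's voxel cube encoding is assumed to differ in its exact bin layout.

```python
import numpy as np

def events_to_bins(events, T=5, H=4, W=4):
    """Bin event-camera events into a (T, 2, H, W) tensor.
    events: array of rows (t, x, y, polarity); polarity is 0 or 1.
    A simplified time-binned encoding for illustration only."""
    t = events[:, 0]
    # map each timestamp into one of T bins over the recording duration
    bins = np.minimum((t / (t.max() + 1e-9) * T).astype(int), T - 1)
    vox = np.zeros((T, 2, H, W))
    for b, (_, x, y, p) in zip(bins, events):
        vox[b, int(p), int(y), int(x)] += 1  # accumulate event counts
    return vox

ev = np.array([[0.0, 1, 1, 1], [0.5, 2, 3, 0], [0.9, 1, 1, 1]])
vox = events_to_bins(ev)
print(vox.shape, int(vox.sum()))
```

The resulting tensor is binary-ish and sparse, matching the abstract's point that event data is a natural input format for SNNs.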

432、SpiNNaker上使用脉冲神经网络执行逻辑运算的基于脉冲的构建块

  • Spike-based building blocks for performing logic operations using Spiking Neural Networks on SpiNNaker 时间:2022年05月09日 第一作者:Alvaro Ayuso-Martinez 链接.

摘要:其中一个最有趣且仍在不断发展的科学领域是神经形态工程学,它专注于研究和设计硬件和软件,目的是模仿生物神经系统的基本原理。目前,有许多研究小组基于神经科学知识开发实际应用。这项工作为研究人员提供了一种基于脉冲神经网络的新型构建块工具包,该网络模拟不同逻辑门的行为。由于逻辑门是数字电路的基础,因此这些在许多基于脉冲的应用中可能非常有用。提出的设计和模型在SpiNNaker硬件平台上进行了介绍和实现。为了验证预期行为,进行了不同的实验,并对所得结果进行了讨论。研究了传统逻辑门和所提出的模块的功能,并讨论了所提出方法的可行性。

英文摘要 One of the most interesting and still growing scientific fields is neuromorphic engineering, which is focused on studying and designing hardware and software with the purpose of mimicking the basic principles of biological nervous systems. Currently, there are many research groups developing practical applications based on neuroscientific knowledge. This work provides researchers with a novel toolkit of building blocks based on Spiking Neural Networks that emulate the behavior of different logic gates. These could be very useful in many spike-based applications, since logic gates are the basis of digital circuits. The designs and models proposed are presented and implemented on a SpiNNaker hardware platform. Different experiments were performed in order to validate the expected behavior, and the obtained results are discussed. The functionality of traditional logic gates and the proposed blocks is studied, and the feasibility of the presented approach is discussed.
注释 9 pages, 9 figures, 1 table, conference, IJCNN 2022, accepted for publication
邮件日期 2022年05月10日
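The behavior of spike-based logic gates can be illustrated with a single integrate-and-fire step: the gate neuron fires only if the weighted sum of its input spikes crosses its threshold. The thresholds and weights below are illustrative toy values, not the paper's SpiNNaker designs.

```python
def if_gate(inputs, weights, threshold):
    """One integrate-and-fire step: spike iff weighted input crosses threshold."""
    v = sum(w * s for w, s in zip(weights, inputs))
    return 1 if v >= threshold else 0

def AND(a, b):
    return if_gate((a, b), (1.0, 1.0), 2.0)   # needs both input spikes

def OR(a, b):
    return if_gate((a, b), (1.0, 1.0), 1.0)   # any input spike suffices

def NOT(a):
    return if_gate((a, 1), (-1.0, 1.0), 1.0)  # inhibitory input + tonic bias

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b))
```

Composite gates then follow by wiring these together, e.g. XOR(a, b) = AND(OR(a, b), NOT(AND(a, b))), which mirrors how the toolkit builds digital-circuit blocks from spiking primitives.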

431、IM/DD光通信中的脉冲神经网络均衡

  • Spiking Neural Network Equalization for IM/DD Optical Communication 时间:2022年05月09日 第一作者:Elias Arnold 链接.

摘要:针对IM/DD链路,设计了一种适用于电子神经形态硬件的脉冲神经网络(SNN)均衡器模型。SNN实现了与人工神经网络相同的误码率,优于线性均衡。

英文摘要 A spiking neural network (SNN) equalizer model suitable for electronic neuromorphic hardware is designed for an IM/DD link. The SNN achieves the same bit-error-rate as an artificial neural network, outperforming linear equalization.
邮件日期 2022年05月10日

430、加速神经形态基底上脉冲神经网络的通用仿真

  • Versatile emulation of spiking neural networks on an accelerated neuromorphic substrate 时间:2022年05月09日 第一作者:Sebastian Billaudelle 链接.
邮件日期 2022年05月10日

429、突发脉冲神经网络的高效精确转换

  • Efficient and Accurate Conversion of Spiking Neural Network with Burst Spikes 时间:2022年05月08日 第一作者:Yang Li 链接.
注释 This paper was accepted by IJCAI2022
邮件日期 2022年05月10日

428、通过参数标定将人工神经网络转换为脉冲神经网络

  • Converting Artificial Neural Networks to Spiking Neural Networks via Parameter Calibration 时间:2022年05月06日 第一作者:Yuhang Li 链接.

摘要:脉冲神经网络(SNN)起源于生物学中的神经行为，被公认为下一代神经网络之一。传统上，SNN可以通过将预训练的人工神经网络(ANN)中的非线性激活替换为脉冲神经元来转换获得，而无需改变参数。在这项工作中，我们认为简单地将ANN的权重复制粘贴到SNN中不可避免地会导致激活失配，尤其是对于使用批归一化(BN)层训练的ANN。为了解决激活失配问题，我们首先通过将局部转换误差分解为剪裁误差和取整误差进行理论分析，然后使用二阶分析定量测量该误差如何在各层中传播。受理论结果的启发，我们提出了一组逐层参数校准算法，通过调整参数来最小化激活失配。在现代体系结构和大规模任务(包括ImageNet分类和MS COCO检测)上对所提出的算法进行了广泛的实验。我们证明，我们的方法可以处理带有批归一化层的SNN转换，并且即使在32个时间步内也能有效地保持较高的精度。例如，在转换带BN层的VGG-16时，我们的校准算法可以将精度提高多达65%。

英文摘要 Spiking Neural Network (SNN), originating from the neural behavior in biology, has been recognized as one of the next-generation neural networks. Conventionally, SNNs can be obtained by converting from pre-trained Artificial Neural Networks (ANNs) by replacing the non-linear activation with spiking neurons without changing the parameters. In this work, we argue that simply copying and pasting the weights of ANN to SNN inevitably results in activation mismatch, especially for ANNs that are trained with batch normalization (BN) layers. To tackle the activation mismatch issue, we first provide a theoretical analysis by decomposing local conversion error to clipping error and flooring error, and then quantitatively measure how this error propagates throughout the layers using the second-order analysis. Motivated by the theoretical results, we propose a set of layer-wise parameter calibration algorithms, which adjusts the parameters to minimize the activation mismatch. Extensive experiments for the proposed algorithms are performed on modern architectures and large-scale tasks including ImageNet classification and MS COCO detection. We demonstrate that our method can handle the SNN conversion with batch normalization layers and effectively preserve the high accuracy even in 32 time steps. For example, our calibration algorithms can increase up to 65% accuracy when converting VGG-16 with BN layers.
注释 arXiv admin note: text overlap with arXiv:2106.06984
邮件日期 2022年05月23日
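The clipping and flooring errors named in the abstract can be reproduced in a few lines: over T steps, an integrate-and-fire (IF) neuron's firing rate only approximates ReLU, saturating at one spike per step (clipping) and quantizing to multiples of 1/T (flooring). A minimal sketch of this standard conversion view:

```python
def if_rate(x, T=32, vth=1.0):
    """Firing rate of an IF neuron driven by constant input x for T steps.
    The rate approximates ReLU(x), but with flooring (multiples of 1/T)
    and clipping (at most 1 spike per step) errors."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += x
        if v >= vth:
            v -= vth          # reset by subtraction, common in conversion work
            spikes += 1
    return spikes / T

for x in (-0.3, 0.4, 0.73, 1.5):
    print(x, max(x, 0.0), round(if_rate(x), 4))
```

For x = 1.5 the rate clips at 1.0, and for x = 0.4 it floors to 0.375 = 12/32; these are exactly the two local error terms the paper's calibration targets.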

427、脉冲图卷积网络

  • Spiking Graph Convolutional Networks 时间:2022年05月05日 第一作者:Zulun Zhu 链接.

摘要:图卷积网络(GCN)由于在学习图信息时具有显著的表示能力而获得了令人印象深刻的性能。然而，GCN在深度网络上实现时需要昂贵的计算能力，使其难以部署在电池供电的设备上。相比之下，执行生物保真推理过程的脉冲神经网络(SNN)提供了一种节能的神经架构。在这项工作中，我们提出了SpikingGCN，这是一个端到端的框架，旨在将GCN的嵌入与SNN的生物保真特性相结合。原始图数据基于图卷积的融合被编码成脉冲序列。我们进一步利用与神经元节点相结合的全连接层来模拟生物信息处理。在广泛的场景中(例如引文网络、图像图分类和推荐系统)，我们的实验结果表明，所提出的方法可以获得与最先进方法相当的竞争性能。此外，我们还表明，在神经形态芯片上运行SpikingGCN可以在图数据分析中带来明显的能效优势，这表明它在构建环境友好的机器学习模型方面具有巨大潜力。

英文摘要 Graph Convolutional Networks (GCNs) achieve an impressive performance due to the remarkable representation ability in learning the graph information. However, GCNs, when implemented on a deep network, require expensive computation power, making them difficult to be deployed on battery-powered devices. In contrast, Spiking Neural Networks (SNNs), which perform a bio-fidelity inference process, offer an energy-efficient neural architecture. In this work, we propose SpikingGCN, an end-to-end framework that aims to integrate the embedding of GCNs with the biofidelity characteristics of SNNs. The original graph data are encoded into spike trains based on the incorporation of graph convolution. We further model biological information processing by utilizing a fully connected layer combined with neuron nodes. In a wide range of scenarios (e.g. citation networks, image graph classification, and recommender systems), our experimental results show that the proposed method could gain competitive performance against state-of-the-art approaches. Furthermore, we show that SpikingGCN on a neuromorphic chip can bring a clear advantage of energy efficiency into graph data analysis, which demonstrates its great potential to construct environment-friendly machine learning models.
注释 Accepted by IJCAI 2022; Code available at https://github.com/ZulunZhu/SpikingGCN
邮件日期 2022年05月06日

426、基于模拟脉冲神经网络的声场景分析

  • Acoustic Scene Analysis using Analog Spiking Neural Network 时间:2022年05月03日 第一作者:An 链接.
注释 21 pages, Journal
邮件日期 2022年05月04日

425、利用灵长类视觉皮层脉冲神经网络特征的显著性图

  • Saliency map using features derived from spiking neural networks of primate visual cortex 时间:2022年05月02日 第一作者:Reza Hojjaty Saeedy 链接.

摘要:我们提出了一个受生物视觉系统启发的框架来生成数字图像的显著性图。我们使用了视觉皮层中专门负责颜色和方向感知区域感受野的著名计算模型。为了模拟这些区域之间的连接，我们使用了CARLsim库，这是一个脉冲神经网络(SNN)模拟器。CARLsim生成的脉冲随后作为提取的特征输入到我们的显著性检测算法中。本文描述了这种新的显著性检测方法，并将其应用于基准图像。

英文摘要 We propose a framework inspired by biological vision systems to produce saliency maps of digital images. Well-known computational models for receptive fields of areas in the visual cortex that are specialized for color and orientation perception are used. To model the connectivity between these areas we use the CARLsim library which is a spiking neural network(SNN) simulator. The spikes generated by CARLsim, then serve as extracted features and input to our saliency detection algorithm. This new method of saliency detection is described and applied to benchmark images.
注释 19 pages, 8 figures, 1 table
邮件日期 2022年05月04日

424、一种优化的无梯度深脉冲神经网络结构

  • An optimised deep spiking neural network architecture without gradients 时间:2022年05月02日 第一作者:Yeshwanth Bethi 链接.
注释 18 pages, 6 figures ACM-class: I.2.6; I.5.1
邮件日期 2022年05月04日

423、基于片上可塑性的Loihi序列学习与整合

  • Sequence Learning and Consolidation on Loihi using On-chip Plasticity 时间:2022年05月02日 第一作者:Jack Lindsey 链接.

摘要:在这项工作中，我们开发了一个神经形态硬件上的预测学习模型。我们的模型使用Loihi芯片的片上可塑性能力来记忆观察到的事件序列，并使用该记忆实时生成对未来事件的预测。考虑到片上可塑性规则的局部性约束，在不干扰正在进行的学习过程的情况下生成预测并非易事。我们以海马回放为灵感，采用记忆巩固方法来应对这一挑战。序列记忆使用脉冲时间依赖可塑性存储在初始记忆模块中。之后，在离线期间，记忆被巩固到一个独立的预测模块中。第二个模块能够表示预测的未来事件，而不干扰第一个模块中的活动和可塑性，从而实现预测和真实观测之间的在线比较。我们的模型作为概念验证，表明在线预测学习模型可以部署在具有片上可塑性的神经形态硬件上。

英文摘要 In this work we develop a model of predictive learning on neuromorphic hardware. Our model uses the on-chip plasticity capabilities of the Loihi chip to remember observed sequences of events and use this memory to generate predictions of future events in real time. Given the locality constraints of on-chip plasticity rules, generating predictions without interfering with the ongoing learning process is nontrivial. We address this challenge with a memory consolidation approach inspired by hippocampal replay. Sequence memory is stored in an initial memory module using spike-timing dependent plasticity. Later, during an offline period, memories are consolidated into a distinct prediction module. This second module is then able to represent predicted future events without interfering with the activity, and plasticity, in the first module, enabling online comparison between predictions and ground-truth observations. Our model serves as a proof-of-concept that online predictive learning models can be deployed on neuromorphic hardware with on-chip plasticity.
注释 NICE 2022
邮件日期 2022年05月03日
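The spike-timing-dependent plasticity used above for sequence storage follows the standard pair-based form: a synapse potentiates when the presynaptic spike precedes the postsynaptic one, and depresses otherwise, with exponentially decaying magnitude. A textbook sketch (parameter values illustrative, not Loihi's on-chip rule):

```python
import numpy as np

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight change. dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

print(round(stdp_dw(10.0), 4))   # potentiation for causal pairing
print(round(stdp_dw(-10.0), 4))  # depression for anti-causal pairing
```

Because potentiation rewards the causal ordering pre-then-post, repeatedly replaying an event sequence strengthens exactly the forward synapses that encode it, which is the mechanism behind the sequence memory described above.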

422、用于目标检测的稀疏压缩脉冲神经网络加速器

  • Sparse Compressed Spiking Neural Network Accelerator for Object Detection 时间:2022年05月02日 第一作者:Hong-Han Lien 链接.

摘要:受人脑启发的脉冲神经网络(SNN)由于其传输二进制脉冲的相对简单、低功耗的硬件和高度稀疏的激活图，最近得到了广泛关注。然而，由于SNN包含额外的时间维度信息，SNN加速器将需要更多的缓冲区，并需要更长的推理时间，尤其是对于更困难的高分辨率目标检测任务。因此，本文提出了一种稀疏压缩脉冲神经网络加速器，该加速器利用激活图和权重的高度稀疏性，通过所提出的门控一对全(one-to-all)乘积实现低功耗和高度并行的模型执行。神经网络的实验结果显示，在IVS 3cls数据集上以混合(1,3)时间步长达到71.5% mAP。采用台积电28nm CMOS工艺的加速器在500MHz下运行时可达到1024×576@29帧每秒的处理能力，能效为35.88TOPS/W，每帧能耗为1.05mJ。

英文摘要 Spiking neural networks (SNNs), which are inspired by the human brain, have recently gained popularity due to their relatively simple and low-power hardware for transmitting binary spikes and highly sparse activation maps. However, because SNNs contain extra time dimension information, the SNN accelerator will require more buffers and take longer to infer, especially for the more difficult high-resolution object detection task. As a result, this paper proposes a sparse compressed spiking neural network accelerator that takes advantage of the high sparsity of activation maps and weights by utilizing the proposed gated one-to-all product for low power and highly parallel model execution. The experimental result of the neural network shows 71.5$\%$ mAP with mixed (1,3) time steps on the IVS 3cls dataset. The accelerator with the TSMC 28nm CMOS process can achieve 1024$\times$576@29 frames per second processing when running at 500MHz with 35.88TOPS/W energy efficiency and 1.05mJ energy consumption per frame.
注释 11 pages, 18 figures, to be published in IEEE Transactions on Circuits and Systems--I: Regular Papers ACM-class: B.5.m DOI: 10.1109/TCSI.2022.3149006
邮件日期 2022年05月03日

421、脉冲神经网络的最新进展和新前沿

  • Recent Advances and New Frontiers in Spiking Neural Networks 时间:2022年05月02日 第一作者:Duzhen Zhang 链接.
注释 Accepted at IJCAI2022
邮件日期 2022年05月03日

420、利用脉冲表征微分训练高性能低潜伏期脉冲神经网络

  • Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation 时间:2022年05月01日 第一作者:Qingyan Meng 链接.

摘要:当在神经形态硬件上实现时，脉冲神经网络(SNN)是一种很有前景的节能人工智能模型。然而，由于SNN的不可微性，有效地训练SNN是一个挑战。大多数现有方法要么存在高延迟(即长模拟时间步长)，要么无法实现人工神经网络(ANN)那样的高性能。在本文中，我们提出了脉冲表示微分(DSR)方法，该方法可以在低延迟的情况下获得与ANN相竞争的高性能。首先，我们使用(加权)发放率编码将脉冲序列编码为脉冲表示。在脉冲表示的基础上，我们系统地推导出具有常见神经元模型的脉冲动态可以表示为某种次可微映射。基于这种观点，我们提出的DSR方法通过该映射的梯度来训练SNN，避免了SNN训练中常见的不可微性问题。然后，我们分析了用SNN的前向计算表示特定映射时的误差。为了减少这种误差，我们建议训练每一层中的脉冲阈值，并为神经元模型引入一个新的超参数。有了这些组件，DSR方法可以在静态和神经形态数据集(包括CIFAR-10、CIFAR-100、ImageNet和DVS-CIFAR10)上以低延迟实现最先进的SNN性能。

英文摘要 Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware. However, it is a challenge to efficiently train SNNs due to their non-differentiability. Most existing methods either suffer from high latency (i.e., long simulation time steps), or cannot achieve as high performance as Artificial Neural Networks (ANNs). In this paper, we propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance that is competitive to ANNs yet with low latency. First, we encode the spike trains into spike representation using (weighted) firing rate coding. Based on the spike representation, we systematically derive that the spiking dynamics with common neural models can be represented as some sub-differentiable mapping. With this viewpoint, our proposed DSR method trains SNNs through gradients of the mapping and avoids the common non-differentiability problem in SNN training. Then we analyze the error when representing the specific mapping with the forward computation of the SNN. To reduce such error, we propose to train the spike threshold in each layer, and to introduce a new hyperparameter for the neural models. With these components, the DSR method can achieve state-of-the-art SNN performance with low latency on both static and neuromorphic datasets, including CIFAR-10, CIFAR-100, ImageNet, and DVS-CIFAR10.
注释 Accepted by CVPR 2022
邮件日期 2022年05月03日
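The (weighted) firing rate coding mentioned above maps a spike train to a scalar representation on which gradients can be taken. The sketch below uses an assumed geometric weighting purely for illustration; the paper's exact weighting scheme may differ.

```python
import numpy as np

def weighted_rate(spikes, lam=0.9):
    """Weighted firing-rate code for a binary spike train of shape (T,).
    lam = 1 recovers the plain firing rate; lam < 1 (an illustrative
    geometric choice) weights earlier spikes more heavily."""
    T = len(spikes)
    w = lam ** np.arange(T)
    return float((w * spikes).sum() / w.sum())

train = np.array([1, 0, 1, 1, 0, 0, 1, 0])
print(round(weighted_rate(train, 1.0), 4))  # plain rate = 0.5
print(round(weighted_rate(train, 0.9), 4))  # early spikes weighted more
```

The key point for DSR-style training is that this scalar is a smooth function of the (weighted) spike counts, so the layer-to-layer mapping between such representations can be treated as sub-differentiable.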

419、具有突发脉冲的脉冲神经网络的高效精确转换

  • Efficient and Accurate Conversion of Spiking Neural Network with Burst Spikes 时间:2022年04月28日 第一作者:Yang Li 链接.

摘要:脉冲神经网络(SNN)作为一种受大脑启发的节能神经网络，引起了研究人员的兴趣。而脉冲神经网络的训练仍然是一个悬而未决的问题。一种有效的方法是将训练好的ANN的权重映射到SNN，以获得较高的推理能力。然而，转换后的脉冲神经网络往往会出现性能下降和相当大的时间延迟。为了加快推理过程并获得更高的精度，我们从三个角度对转换过程中的误差进行了理论分析:IF和ReLU之间的差异、时间维度和池化操作。我们提出了一种释放突发脉冲的神经元模型，这是一种廉价但高效的处理残余信息的方法。此外，还提出了侧向抑制池化(LIPooling)来解决MaxPooling在转换过程中造成的不准确问题。在CIFAR和ImageNet上的实验结果表明，我们的算法是高效和准确的。例如，我们的方法可以确保SNN的几乎无损转换，在典型方法0.693倍的能耗下，仅使用大约1/10(少于100)的模拟时间。我们的代码可在https://github.com/Brain-Inspired-Cognitive-Engine/Conversion_Burst获取。

英文摘要 Spiking neural network (SNN), as a brain-inspired energy-efficient neural network, has attracted the interest of researchers. While the training of spiking neural networks is still an open problem. One effective way is to map the weight of trained ANN to SNN to achieve high reasoning ability. However, the converted spiking neural network often suffers from performance degradation and a considerable time delay. To speed up the inference process and obtain higher accuracy, we theoretically analyze the errors in the conversion process from three perspectives: the differences between IF and ReLU, time dimension, and pooling operation. We propose a neuron model for releasing burst spikes, a cheap but highly efficient method to solve residual information. In addition, Lateral Inhibition Pooling (LIPooling) is proposed to solve the inaccuracy problem caused by MaxPooling in the conversion process. Experimental results on CIFAR and ImageNet demonstrate that our algorithm is efficient and accurate. For example, our method can ensure nearly lossless conversion of SNN and only use about 1/10 (less than 100) simulation time under 0.693$\times$ energy consumption of the typical method. Our code is available at https://github.com/Brain-Inspired-Cognitive-Engine/Conversion_Burst.
注释 This paper was accepted by IJCAI2022
邮件日期 2022年04月29日
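The residual-information problem that burst spikes address is easy to see in a plain IF neuron: limited to one spike per step, any rate above 1 is clipped. Allowing up to a few spikes per step drains the accumulated residual much faster. A toy sketch of this burst idea (not the paper's exact neuron model):

```python
def burst_if_rate(x, T=32, vth=1.0, max_burst=4):
    """IF neuron that may emit up to max_burst spikes per step, reducing
    the residual (clipping/flooring) error of rate-coded ANN-SNN conversion."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += x
        n = min(int(v // vth), max_burst) if v >= vth else 0
        v -= n * vth       # each spike in the burst drains one threshold's worth
        spikes += n
    return spikes / T

print(burst_if_rate(2.5))  # burst spikes let the rate exceed 1 per step
```

With bursts, a constant drive of 2.5 is represented exactly as rate 2.5, whereas a single-spike IF neuron would clip it to 1.0; this is why bursts shorten the simulation time needed for near-lossless conversion.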

418、脉冲神经网络的最新进展和新前沿

  • Recent Advances and New Frontiers in Spiking Neural Networks 时间:2022年04月25日 第一作者:Duzhen Zhang 链接.
注释 Accepted at IJCAI2022
邮件日期 2022年04月26日

417、MAP-SNN:将具有多重性、适应性和可塑性的脉冲活动映射到生物合理的脉冲神经网络

  • MAP-SNN: Mapping Spike Activities with Multiplicity, Adaptability, and Plasticity into Bio-Plausible Spiking Neural Networks 时间:2022年04月21日 第一作者:Chengting Yu 链接.

摘要:脉冲神经网络(SNN)模仿人脑的基本机制，因此被认为在生物学上更真实、更节能。最近，利用深度学习框架的基于反向传播(BP)的SNN学习算法取得了良好的性能。然而，在这些基于BP的算法中，生物可解释性部分被忽略。面向生物合理的基于BP的SNN，我们在模拟脉冲活动时考虑了三个属性:多重性、适应性和可塑性(MAP)。在多重性方面，我们提出了一种具有多脉冲传输的多脉冲模式(MSP)，以增强离散时间迭代中的模型鲁棒性。为了实现适应性，我们在MSP下采用了脉冲频率自适应(SFA)来减少脉冲活动以提高效率。在可塑性方面，我们提出了一种可训练的卷积突触，该突触模拟脉冲响应电流，以增强脉冲神经元的多样性，用于时间特征提取。提出的SNN模型在神经形态数据集N-MNIST和SHD上实现了有竞争力的性能。此外，实验结果表明，所提出的三个方面对脉冲活动的迭代鲁棒性、脉冲效率和时间特征提取能力具有重要意义。综上所述，本研究提出了一种利用MAP进行生物启发脉冲活动建模的可行方案，为将生物学特性嵌入脉冲神经网络提供了一种新的神经形态学视角。

英文摘要 Spiking Neural Network (SNN) is considered more biologically realistic and power-efficient as it imitates the fundamental mechanism of the human brain. Recently, backpropagation (BP) based SNN learning algorithms that utilize deep learning frameworks have achieved good performance. However, bio-interpretability is partially neglected in those BP-based algorithms. Toward bio-plausible BP-based SNNs, we consider three properties in modeling spike activities: Multiplicity, Adaptability, and Plasticity (MAP). In terms of multiplicity, we propose a Multiple-Spike Pattern (MSP) with multiple spike transmission to strengthen model robustness in discrete time-iteration. To realize adaptability, we adopt Spike Frequency Adaption (SFA) under MSP to decrease spike activities for improved efficiency. For plasticity, we propose a trainable convolutional synapse that models spike response current to enhance the diversity of spiking neurons for temporal feature extraction. The proposed SNN model achieves competitive performances on neuromorphic datasets: N-MNIST and SHD. Furthermore, experimental results demonstrate that the proposed three aspects are significant to iterative robustness, spike efficiency, and temporal feature extraction capability of spike activities. In summary, this work proposes a feasible scheme for bio-inspired spike activities with MAP, offering a new neuromorphic perspective to embed biological characteristics into spiking neural networks.
邮件日期 2022年04月22日

416、轴突延迟作为前馈深脉冲神经网络的短期记忆

  • Axonal Delay As a Short-Term Memory for Feed Forward Deep Spiking Neural Networks 时间:2022年04月20日 第一作者:Pengfei Sun 链接.

摘要:脉冲神经网络(SNN)的信息通过脉冲在相邻的生物神经元之间传播，这提供了一种有望模拟人脑的计算范式。最近的研究发现，神经元的时间延迟在学习过程中起着重要作用。因此，配置脉冲的精确定时对于理解和改进SNN中时间信息的传输过程是一个有前景的方向。然而，现有的脉冲神经元学习方法大多侧重于突触权重的调整，而对轴突延迟的研究很少。在本文中，我们验证了将时间延迟整合到监督学习中的有效性，并提出了一个通过短期记忆调节轴突延迟的模块。为此，将校正轴突延迟(RAD)模块与脉冲模型集成，以对齐脉冲定时，从而提高时间特征的表征学习能力。在三个神经形态基准数据集NMNIST、DVS Gesture和N-TIDIGITS18上的实验表明，该方法在使用最少参数的情况下达到了最先进的性能。

英文摘要 The information of spiking neural networks (SNNs) are propagated between the adjacent biological neuron by spikes, which provides a computing paradigm with the promise of simulating the human brain. Recent studies have found that the time delay of neurons plays an important role in the learning process. Therefore, configuring the precise timing of the spike is a promising direction for understanding and improving the transmission process of temporal information in SNNs. However, most of the existing learning methods for spiking neurons are focusing on the adjustment of synaptic weight, while very few research has been working on axonal delay. In this paper, we verify the effectiveness of integrating time delay into supervised learning and propose a module that modulates the axonal delay through short-term memory. To this end, a rectified axonal delay (RAD) module is integrated with the spiking model to align the spike timing and thus improve the characterization learning ability of temporal features. Experiments on three neuromorphic benchmark datasets : NMNIST, DVS Gesture and N-TIDIGITS18 show that the proposed method achieves the state-of-the-art performance while using the fewest parameters.
注释 Accepted at IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 2022
邮件日期 2022年05月05日
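An axonal delay, as used above, simply shifts each neuron's output spike train later in time by a per-neuron amount; learning then adjusts those shifts. A minimal fixed-delay sketch follows; the paper's RAD module additionally modulates the delays through short-term memory, which is omitted here.

```python
import numpy as np

def apply_axonal_delay(spikes, delays):
    """Shift each neuron's spike train by its own axonal delay (in steps).
    spikes: binary array of shape (T, N); delays: length-N integer array.
    Spikes delayed past the end of the window are dropped."""
    T, N = spikes.shape
    out = np.zeros_like(spikes)
    for n in range(N):
        d = int(delays[n])
        if d < T:
            out[d:, n] = spikes[:T - d, n]  # spikes are emitted d steps later
    return out

s = np.zeros((6, 2), dtype=int)
s[1, 0] = 1   # neuron 0 spikes at t = 1
s[2, 1] = 1   # neuron 1 spikes at t = 2
print(apply_axonal_delay(s, np.array([0, 3])))
```

Making `delays` trainable (e.g. via a straight-through rounding of real-valued delays) is one common way such timing parameters are learned alongside the synaptic weights.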

415、超导光电单光子突触的演示

  • Demonstration of Superconducting Optoelectronic Single-Photon Synapses 时间:2022年04月20日 第一作者:Saeed Khan 链接.

摘要:超导光电硬件正在被探索作为通向具有前所未有的复杂性和计算能力的人工脉冲神经网络的一条道路。这种硬件将用于少光子、光速通信的集成光子组件与用于快速、节能计算的超导电路相结合。超导和光子器件的单片集成对于该技术的扩展是必要的。在本工作中，超导纳米线单光子探测器首次与约瑟夫森结单片集成，实现了超导光电突触。我们提出了对单光子突触前信号进行模拟加权和时间泄漏积分的电路。突触加权在电子域中实现，从而可以维持二进制单光子通信。最近突触活动的记录以电流的形式本地存储在超导回路中。树突和神经元的非线性通过第二级约瑟夫森电路实现。该硬件具有极大的设计灵活性，已演示的突触时间常数跨越四个数量级(数百纳秒到毫秒)。突触对超过10 MHz的突触前脉冲频率有响应，在考虑制冷之前，每个突触事件消耗大约33 aJ的动态功率。除了神经形态硬件外，这些电路还为实现用于各种成像、传感和量子通信应用的大规模单光子探测器阵列提供了新的途径。

英文摘要 Superconducting optoelectronic hardware is being explored as a path towards artificial spiking neural networks with unprecedented scales of complexity and computational ability. Such hardware combines integrated-photonic components for few-photon, light-speed communication with superconducting circuits for fast, energy-efficient computation. Monolithic integration of superconducting and photonic devices is necessary for the scaling of this technology. In the present work, superconducting-nanowire single-photon detectors are monolithically integrated with Josephson junctions for the first time, enabling the realization of superconducting optoelectronic synapses. We present circuits that perform analog weighting and temporal leaky integration of single-photon presynaptic signals. Synaptic weighting is implemented in the electronic domain so that binary, single-photon communication can be maintained. Records of recent synaptic activity are locally stored as current in superconducting loops. Dendritic and neuronal nonlinearities are implemented with a second stage of Josephson circuitry. The hardware presents great design flexibility, with demonstrated synaptic time constants spanning four orders of magnitude (hundreds of nanoseconds to milliseconds). The synapses are responsive to presynaptic spike rates exceeding 10 MHz and consume approximately 33 aJ of dynamic power per synapse event before accounting for cooling. In addition to neuromorphic hardware, these circuits introduce new avenues towards realizing large-scale single-photon-detector arrays for diverse imaging, sensing, and quantum communication applications.
注释 23 pages, 20 figures
邮件日期 2022年04月21日

414、脉冲神经网络的最新进展和新前沿

  • Recent Advances and New Frontiers in Spiking Neural Networks 时间:2022年04月17日 第一作者:Duzhen Zhang 链接.
注释 Accepted at IJCAI2022
邮件日期 2022年04月19日

413、Sapinet:一种用于野外学习的基于稀疏事件的时空振荡器

  • Sapinet: A sparse event-based spatiotemporal oscillator for learning in the wild 时间:2022年04月13日 第一作者:Ayon Borthakur 链接.

摘要:我们介绍了Sapinet，一种用于野外学习的基于脉冲时间(事件)的多层神经网络，即:对多个输入进行无灾难性遗忘的一次性在线学习，且无需针对特定数据重新调整超参数。Sapinet的主要功能包括数据正则化、模型缩放、数据分类和去噪。该模型还支持刺激相似性映射。我们提出了一种系统的方法来调整网络的性能。我们研究了不同气味相似性、高斯噪声和脉冲噪声水平下的模型性能。Sapinet在标准机器嗅觉数据集上实现了高分类精度，无需对特定数据集进行微调。

英文摘要 We introduce Sapinet -- a spike timing (event)-based multilayer neural network for \textit{learning in the wild} -- that is: one-shot online learning of multiple inputs without catastrophic forgetting, and without the need for data-specific hyperparameter retuning. Key features of Sapinet include data regularization, model scaling, data classification, and denoising. The model also supports stimulus similarity mapping. We propose a systematic method to tune the network for performance. We studied the model performance on different levels of odor similarity, gaussian and impulse noise. Sapinet achieved high classification accuracies on standard machine olfaction datasets without the requirement of fine tuning for a specific dataset.
注释 PhD thesis Journal-ref: "Mechanisms and Architectural Priors for Learning in the Wild" (Cornell University, Ithaca, NY, USA). ProQuest Publication Number: 28652661. Submission Date: 2021-07-28
邮件日期 2022年04月14日

412、对抗干扰的鲁棒脉冲神经网络

  • Toward Robust Spiking Neural Network Against Adversarial Perturbation 时间:2022年04月12日 第一作者:Ling Liang 链接.

摘要:随着脉冲神经网络(SNN)越来越多地部署在对效率至关重要的现实应用中，SNN的安全问题受到越来越多的关注。目前，研究人员已经证明SNN可以被对抗样本攻击。如何构建一个鲁棒的SNN成为一个迫切的问题。近年来，许多研究将认证训练应用于人工神经网络(ANN)，这可以很好地提高神经网络模型的鲁棒性。然而，由于SNN具有不同的神经元行为和输入格式，现有的认证方法无法直接迁移到SNN。在这项工作中，我们首先设计了S-IBP和S-CROWN来处理SNN神经元建模中的非线性函数。然后，我们形式化了数字和脉冲两种输入的边界。最后，我们在不同的数据集和模型体系结构中证明了我们提出的鲁棒训练方法的有效性。根据我们的实验，我们可以在3.7%的原始精度损失下实现最多37.7%的攻击错误减少。据我们所知，这是关于SNN鲁棒训练的首次分析。

英文摘要 As spiking neural networks (SNNs) are deployed increasingly in real-world efficiency critical applications, the security concerns in SNNs attract more attention. Currently, researchers have already demonstrated an SNN can be attacked with adversarial examples. How to build a robust SNN becomes an urgent issue. Recently, many studies apply certified training in artificial neural networks (ANNs), which can improve the robustness of an NN model promisely. However, existing certifications cannot transfer to SNNs directly because of the distinct neuron behavior and input formats for SNNs. In this work, we first design S-IBP and S-CROWN that tackle the non-linear functions in SNNs' neuron modeling. Then, we formalize the boundaries for both digital and spike inputs. Finally, we demonstrate the efficiency of our proposed robust training method in different datasets and model architectures. Based on our experiment, we can achieve a maximum $37.7\%$ attack error reduction with $3.7\%$ original accuracy loss. To the best of our knowledge, this is the first analysis on robust training of SNNs.
邮件日期 2022年05月04日

411、利用剩余脉冲神经网络进行精确特征提取的关键

  • Keys to Accurate Feature Extraction Using Residual Spiking Neural Networks 时间:2022年04月12日 第一作者:Alex Vicente-Sola 链接.
注释 16 pages, 6 figures, 17 tables ACM-class: I.2.6; I.2.10; I.4.8; I.5.2; D.2.13
邮件日期 2022年04月14日

410、基于脉冲神经网络的面向功率的故障注入攻击分析

  • Analysis of Power-Oriented Fault Injection Attacks on Spiking Neural Networks 时间:2022年04月10日 第一作者:Karthikeyan Nagarajan 链接.

摘要:作为深度神经网络(DNN)的一种可行替代方案，脉冲神经网络(SNN)正在迅速获得关注。与DNN相比，SNN的计算能力更强，并提供更高的能效。SNN虽然初看令人兴奋，但包含安全敏感资产(如神经元阈值电压)和对手可以利用的漏洞(如分类精度对神经元阈值电压变化的敏感性)。我们通过使用外部电源和激光诱导的局部电源毛刺来破坏关键训练参数(如使用常见模拟神经元开发的SNN上的脉冲幅度和神经元膜阈值电位)，研究了全局故障注入攻击。我们还评估了从0%(即无攻击)到100%(即整层受攻击)的基于功率的攻击对单个SNN层的影响。我们研究了攻击对数字分类任务的影响，发现在最坏情况下，分类准确率降低了85.65%。我们还提出了防御措施，例如一种对面向功率的攻击免疫的鲁棒电流驱动器设计，以及改进神经元组件的电路尺寸，以可忽略的面积和25%的功耗开销为代价减少/恢复对抗性导致的精度下降。我们还提出了一种基于虚设神经元的电压故障注入检测系统，仅有1%的功率和面积开销。

英文摘要 Spiking Neural Networks (SNN) are quickly gaining traction as a viable alternative to Deep Neural Networks (DNN). In comparison to DNNs, SNNs are more computationally powerful and provide superior energy efficiency. SNNs, while exciting at first appearance, contain security-sensitive assets (e.g., neuron threshold voltage) and vulnerabilities (e.g., sensitivity of classification accuracy to neuron threshold voltage change) that adversaries can exploit. We investigate global fault injection attacks by employing external power supplies and laser-induced local power glitches to corrupt crucial training parameters such as spike amplitude and neuron's membrane threshold potential on SNNs developed using common analog neurons. We also evaluate the impact of power-based attacks on individual SNN layers for 0% (i.e., no attack) to 100% (i.e., whole layer under attack). We investigate the impact of the attacks on digit classification tasks and find that in the worst-case scenario, classification accuracy is reduced by 85.65%. We also propose defenses e.g., a robust current driver design that is immune to power-oriented attacks, improved circuit sizing of neuron components to reduce/recover the adversarial accuracy degradation at the cost of negligible area and 25% power overhead. We also present a dummy neuron-based voltage fault injection detection system with 1% power and area overhead.
注释 Design, Automation and Test in Europe Conference (DATE) 2022
邮件日期 2022年04月12日

409、基于时域神经元的高能效高精度脉冲神经网络推理

  • Energy-Efficient High-Accuracy Spiking Neural Network Inference Using Time-Domain Neurons 时间:2022年04月10日 第一作者:Joonghyun Song 链接.
注释 Accepted in AICAS 2022
邮件日期 2022年04月12日

408、脉冲神经网络与人工神经网络:从生物智能到人工智能

  • An Introductory Review of Spiking Neural Network and Artificial Neural Network: From Biological Intelligence to Artificial Intelligence 时间:2022年04月09日 第一作者:Shengjie Zheng 链接.

摘要:最近,随着人工智能的快速发展,神经科学也取得了巨大的进步。人工智能在模式识别、机器人技术和生物信息学方面取得了巨大的成功。一种具有生物可解释性的脉冲神经网络正逐渐受到广泛关注,这种神经网络也被认为是通向通用人工智能的方向之一。本综述包括以下几个部分:脉冲神经元的生物学背景和理论基础、不同的神经元模型、神经回路的连通性、主流的神经网络学习机制和网络结构等。希望本综述能吸引不同领域的研究人员,推动脑启发智能和人工智能的发展。

英文摘要 Recently, stemming from the rapid development of artificial intelligence, which has gained expansive success in pattern recognition, robotics, and bioinformatics, neuroscience is also gaining tremendous progress. A kind of spiking neural network with biological interpretability is gradually receiving wide attention, and this kind of neural network is also regarded as one of the directions toward general artificial intelligence. This review introduces the following sections, the biological background of spiking neurons and the theoretical basis, different neuronal models, the connectivity of neural circuits, the mainstream neural network learning mechanisms and network architectures, etc. This review hopes to attract different researchers and advance the development of brain-inspired intelligence and artificial intelligence.
注释 12 pages, 24 figures
邮件日期 2022年04月18日

407、一种实现强化学习的脉冲神经网络结构

  • A Spiking Neural Network Structure Implementing Reinforcement Learning 时间:2022年04月09日 第一作者:Mikhail Kiselev 链接.

摘要:目前,尽管已提出大量脉冲神经网络学习算法,但在脉冲神经网络(SNN)中实现学习机制仍不能被视为一个已解决的科学问题。SNN对强化学习(RL)的实现也是如此,而RL对SNN尤其重要,因为它与从SNN应用角度来看最有前景的领域(如机器人)密切相关。在本文中,我描述了一种似乎可用于广泛RL任务的SNN结构。该方法的显著特点是所有相关信号都只使用脉冲形式——感官输入流、发送到执行器的输出信号以及奖惩信号。除此之外,在选择神经元/可塑性模型时,我的指导原则是它们应易于在现代神经芯片上实现。本文考虑的SNN结构包括由LIFAT(具有自适应阈值的漏积分发放神经元)模型的推广所描述的脉冲神经元,以及一个简单的脉冲时间依赖突触可塑性模型(多巴胺调制可塑性的推广)。该概念基于关于RL任务特征的非常一般的假设,对其适用性没有明显的限制。为了测试它,我选择了一个简单但不平凡的任务:训练网络将一个无规则移动的光点保持在模拟DVS相机的视野中。所描述的SNN成功解决了这一RL问题,可以视为支持该方法有效性的证据。

英文摘要 At present, implementation of learning mechanisms in spiking neural networks (SNN) cannot be considered as a solved scientific problem despite plenty of SNN learning algorithms proposed. It is also true for SNN implementation of reinforcement learning (RL), while RL is especially important for SNNs because of its close relationship to the domains most promising from the viewpoint of SNN application such as robotics. In the present paper, I describe an SNN structure which, seemingly, can be used in wide range of RL tasks. The distinctive feature of my approach is usage of only the spike forms of all signals involved - sensory input streams, output signals sent to actuators and reward/punishment signals. Besides that, selecting the neuron/plasticity models, I was guided by the requirement that they should be easily implemented on modern neurochips. The SNN structure considered in the paper includes spiking neurons described by a generalization of the LIFAT (leaky integrate-and-fire neuron with adaptive threshold) model and a simple spike timing dependent synaptic plasticity model (a generalization of dopamine-modulated plasticity). My concept is based on very general assumptions about RL task characteristics and has no visible limitations on its applicability. To test it, I selected a simple but non-trivial task of training the network to keep a chaotically moving light spot in the view field of an emulated DVS camera. Successful solution of this RL problem by the SNN described can be considered as evidence in favor of efficiency of my approach.
邮件日期 2022年04月12日

406、基于星形胶质细胞神经网络的容错计算设计方法

  • A Design Methodology for Fault-Tolerant Computing using Astrocyte Neural Networks 时间:2022年04月06日 第一作者:Murat Işık 链接.

摘要:我们提出了一种设计方法来促进深度学习模型的容错性。首先,我们实现了一种多核容错神经形态硬件设计,其中每个神经形态核心中的神经元和突触电路由星形胶质细胞电路包围;星形胶质细胞是大脑中星形的胶质细胞,通过闭环逆行反馈信号恢复故障神经元的脉冲发放频率,从而促进自我修复。接下来,我们在深度学习模型中引入星形胶质细胞,以达到所需的硬件故障容忍度。最后,我们使用系统软件将启用星形胶质细胞的模型划分为多个簇,并在所提出的容错神经形态设计上实现它们。我们使用七种深度学习推理模型对这种设计方法进行了评估,结果表明它既节省面积又节省功耗。

英文摘要 We propose a design methodology to facilitate fault tolerance of deep learning models. First, we implement a many-core fault-tolerant neuromorphic hardware design, where neuron and synapse circuitries in each neuromorphic core are enclosed with astrocyte circuitries, the star-shaped glial cells of the brain that facilitate self-repair by restoring the spike firing frequency of a failed neuron using a closed-loop retrograde feedback signal. Next, we introduce astrocytes in a deep learning model to achieve the required degree of tolerance to hardware faults. Finally, we use a system software to partition the astrocyte-enabled model into clusters and implement them on the proposed fault-tolerant neuromorphic design. We evaluate this design methodology using seven deep learning inference models and show that it is both area and power efficient.
注释 Accepted at ACM Computing Frontiers, 2022
邮件日期 2022年04月07日

405、基于嵌入式脉冲神经细胞自动机的模块化软机器人集体控制

  • Collective control of modular soft robots via embodied Spiking Neural Cellular Automata 时间:2022年04月05日 第一作者:Giorgia Nadizar 链接.

摘要:基于体素的软机器人(VSR)是模块化软机器人的一种形式,由多个可变形立方体(即体素)组成。因此,每个VSR都是简单代理的集合,即体素,它们必须相互配合才能产生整体VSR行为。在这种范式中,集体智能在促成协调的出现方面发挥着关键作用,因为每个体素都是独立控制的,只利用局部感官信息以及从其直接邻居(分布式或集体控制)传递的一些知识。在这项工作中,我们提出了一种受神经细胞自动机(NCA)影响并基于仿生脉冲神经网络的新型集体控制形式:体现脉冲NCA(SNCA)。我们对SNCA的不同变体进行了实验,发现它们在运动任务方面与最先进的分布式控制器具有竞争力。此外,我们的研究结果表明,相对于基线,在对不可预见的环境变化的适应性方面有显著改善,这可能是VSR物理实用性的一个决定因素。

英文摘要 Voxel-based Soft Robots (VSRs) are a form of modular soft robots, composed of several deformable cubes, i.e., voxels. Each VSR is thus an ensemble of simple agents, namely the voxels, which must cooperate to give rise to the overall VSR behavior. Within this paradigm, collective intelligence plays a key role in enabling the emergence of coordination, as each voxel is independently controlled, exploiting only the local sensory information together with some knowledge passed from its direct neighbors (distributed or collective control). In this work, we propose a novel form of collective control, influenced by Neural Cellular Automata (NCA) and based on the bio-inspired Spiking Neural Networks: the embodied Spiking NCA (SNCA). We experiment with different variants of SNCA, and find them to be competitive with the state-of-the-art distributed controllers for the task of locomotion. In addition, our findings show significant improvement with respect to the baseline in terms of adaptability to unforeseen environmental changes, which could be a determining factor for physical practicability of VSRs.
注释 Workshop on "From Cells to Societies: Collective Learning across Scales" at the International Conference on Learning Representations (Cells2Societies@ICLR)
邮件日期 2022年04月06日

404、前向信号传播学习

  • Forward Signal Propagation Learning 时间:2022年04月04日 第一作者:Adam Kohan 链接.

摘要:我们提出了一种新的学习算法,用于通过前向传递传播学习信号和更新神经网络参数,作为反向传播的替代方法。在前向信号传播学习(sigprop)中,只有用于学习和推理的前向路径,因此不存在对学习的额外结构或计算约束,例如反馈连接性、权重传输或反向传递。Sigprop仅通过正向路径实现全局监督学习。这是分层或模块并行训练的理想选择。在生物学中,这解释了没有反馈连接的神经元如何仍能接收到全局学习信号。在硬件方面,这提供了一种无需反向连接的全局监督学习方法。与反向传播和其他放松学习约束的方法相比,Sigprop在设计上与大脑和硬件中的学习模型具有更好的兼容性。我们还证明了sigprop在时间和内存方面比它们更有效。为了进一步解释sigprop的行为,我们提供了证据,证明sigprop在反向传播的上下文中提供了有用的学习信号。为了进一步支持与生物和硬件学习的相关性,我们使用sigprop来训练具有Hebbian更新的连续时间神经网络,以及训练没有替代函数的脉冲神经网络。

英文摘要 We propose a new learning algorithm for propagating a learning signal and updating neural network parameters via a forward pass, as an alternative to backpropagation. In forward signal propagation learning (sigprop), there is only the forward path for learning and inference, so there are no additional structural or computational constraints on learning, such as feedback connectivity, weight transport, or a backward pass, which exist under backpropagation. Sigprop enables global supervised learning with only a forward path. This is ideal for parallel training of layers or modules. In biology, this explains how neurons without feedback connections can still receive a global learning signal. In hardware, this provides an approach for global supervised learning without backward connectivity. Sigprop by design has better compatibility with models of learning in the brain and in hardware than backpropagation and alternative approaches to relaxing learning constraints. We also demonstrate that sigprop is more efficient in time and memory than they are. To further explain the behavior of sigprop, we provide evidence that sigprop provides useful learning signals in context to backpropagation. To further support relevance to biological and hardware learning, we use sigprop to train continuous time neural networks with Hebbian updates and train spiking neural networks without surrogate functions.
邮件日期 2022年04月06日

403、用活动正则化优化脉冲神经网络的消耗

  • Optimizing the Consumption of Spiking Neural Networks with Activity Regularization 时间:2022年04月04日 第一作者:Simon Narduzzi 链接.

摘要:对于在边缘设备上运行的神经网络模型来说,降低能耗是一个关键点。在这方面,减少在边缘硬件加速器上运行的深度神经网络(DNN)的乘法累加(MAC)操作次数将减少推理过程中的能耗。脉冲神经网络(SNN)是一种仿生技术,可以通过使用二进制激活进一步节约能源,避免在不脉冲时消耗能源。通过DNN到SNN的转换框架,可以对网络进行配置,使其在任务上具有同等的准确性,但它们的转换基于速率编码,因此突触操作可能很高。在这项工作中,我们研究了在神经网络激活图上实施稀疏性的不同技术,并比较了不同训练正则化器对优化的DNN和SNN效率的影响。

英文摘要 Reducing energy consumption is a critical point for neural network models running on edge devices. In this regard, reducing the number of multiply-accumulate (MAC) operations of Deep Neural Networks (DNNs) running on edge hardware accelerators will reduce the energy consumption during inference. Spiking Neural Networks (SNNs) are an example of bio-inspired techniques that can further save energy by using binary activations, and avoid consuming energy when not spiking. The networks can be configured for equivalent accuracy on a task through DNN-to-SNN conversion frameworks but their conversion is based on rate coding therefore the synaptic operations can be high. In this work, we look into different techniques to enforce sparsity on the neural network activation maps and compare the effect of different training regularizers on the efficiency of the optimized DNNs and SNNs.
注释 5 pages, 3 figures; accepted at IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 2022
邮件日期 2022年04月05日
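The activity regularization discussed in the abstract above can be sketched as an L1 penalty on a layer's activation map. This is a generic illustration only: the penalty form and the `lam` coefficient are assumptions, not the specific regularizers the paper compares.

```python
import numpy as np

def l1_activity_penalty(activations, lam=1e-4):
    """Sparsity-inducing term added to the task loss: lam * mean(|a|).
    Driving activations toward zero means fewer spikes (and fewer
    synaptic operations) after a rate-coded DNN-to-SNN conversion."""
    return lam * float(np.mean(np.abs(activations)))

# Example: a small activation map and its penalty.
acts = np.array([[0.0, 2.0], [4.0, 2.0]])
penalty = l1_activity_penalty(acts)  # 1e-4 * mean([0, 2, 4, 2]) = 2e-4
```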

402、皮层振荡在脉冲神经网络中实现了基于采样的计算

  • Cortical oscillations implement a backbone for sampling-based computation in spiking neural networks 时间:2022年04月04日 第一作者:Agnes Korcsak-Gorzo 链接.
注释 34 pages, 9 figures Journal-ref: PLoS Comput Biol 18(3): e1009753 (2022) DOI: 10.1371/journal.pcbi.1009753
邮件日期 2022年04月05日

401、脉冲相机的光流估计

  • Optical Flow Estimation for Spiking Camera 时间:2022年04月03日 第一作者:Liwen Hu 链接.
注释 The first two authors contributed equally. Accepted to CVPR 2022
邮件日期 2022年04月05日

400、脉冲相量神经网络的深度学习

  • Deep Learning in Spiking Phasor Neural Networks 时间:2022年04月01日 第一作者:Connor Bybee 链接.

摘要:由于用于低延迟、低功耗的神经形态硬件,以及用于理解神经科学的模型,脉冲神经网络(SNN)已经吸引了深度学习社区的注意。本文介绍了脉冲相量神经网络(SPNN)。SPNN基于复数深层神经网络(DNN),通过脉冲时间表示相位。我们的模型采用脉冲计时码进行稳健计算,并且可以使用复数域形成梯度。我们在CIFAR-10上训练了SPNN,并证明其性能超过了其他定时编码的SNN,接近可比实值DNN的结果。

英文摘要 Spiking Neural Networks (SNNs) have attracted the attention of the deep learning community for use in low-latency, low-power neuromorphic hardware, as well as models for understanding neuroscience. In this paper, we introduce Spiking Phasor Neural Networks (SPNNs). SPNNs are based on complex-valued Deep Neural Networks (DNNs), representing phases by spike times. Our model computes robustly employing a spike timing code and gradients can be formed using the complex domain. We train SPNNs on CIFAR-10, and demonstrate that the performance exceeds that of other timing coded SNNs, approaching results with comparable real-valued DNNs.
注释 10 pages, 5 figures, work presented at Intel Neuromorphic Community Fall 2019 workshop in Graz, Austria and the UC Berkeley Center for Computational Biology Retreat 2019
邮件日期 2022年04月04日
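SPNNs above represent phases of a complex-valued network by spike times. A minimal sketch of that encoding, with the normalization (one period maps linearly onto one phase cycle) being an assumption rather than the paper's exact scheme:

```python
import math

def phase_to_spike_time(phase, period=1.0):
    """Encode a phase angle (radians) as a spike time within one period:
    phase 0 fires at t=0, phase pi fires at half the period, etc."""
    return (phase % (2 * math.pi)) / (2 * math.pi) * period

def spike_time_to_phase(t, period=1.0):
    """Inverse mapping: recover the phase from a spike time."""
    return (t % period) / period * 2 * math.pi
```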

399、SIT:一种用于脉冲神经网络的仿生非线性神经元

  • SIT: A Bionic and Non-Linear Neuron for Spiking Neural Network 时间:2022年04月01日 第一作者:Cheng Jin 链接.
邮件日期 2022年04月04日

398、神经形态硬件中的时间编码脉冲傅里叶变换

  • Time-coded Spiking Fourier Transform in Neuromorphic Hardware 时间:2022年03月31日 第一作者:Javier López-R 链接.
注释 Accepted version on IEEE Transactions on Computers (early access). Added copyright notice DOI: 10.1109/TC.2022.3162708
邮件日期 2022年04月01日

397、SIT:一种用于脉冲神经网络的仿生非线性神经元

  • SIT: A Bionic and Non-Linear Neuron for Spiking Neural Network 时间:2022年03月30日 第一作者:Cheng Jin 链接.

摘要:脉冲神经网络(SNN)因其处理时间信息的能力和低功耗而引起了研究人员的兴趣。然而,目前最先进的方法限制了其生物合理性和性能,因为这些神经元通常建立在简单的泄漏积分发放(LIF)模型之上。由于动态复杂度较高,现代神经元模型很少在SNN实践中得到应用。在本研究中,我们采用相平面分析(PPA)技术——一种神经动力学领域常用的技术——来整合一个较新的神经元模型,即Izhikevich神经元。根据神经科学的进展,Izhikevich神经元模型在生物学上是合理的,同时保持与LIF神经元相当的计算成本。通过所采用的PPA,我们将用改进的Izhikevich模型构建的神经元应用于SNN实践,称为标准化Izhikevich Tonic(SIT)神经元。在性能方面,我们在自行构建的由LIF和SIT神经元组成的SNN(称为混合神经网络,HNN)中评估了所提出的技术在图像分类任务上的表现,数据集包括静态的MNIST、Fashion-MNIST、CIFAR-10,以及神经形态的N-MNIST、CIFAR10-DVS和DVS128 Gesture。实验结果表明,所提出的方法在几乎所有测试数据集上达到了相当的准确率,同时表现出更符合生物真实的行为,证明了这一新策略在弥合神经动力学和SNN实践之间差距方面的有效性。

英文摘要 Spiking Neural Networks (SNNs) have piqued researchers' interest because of their capacity to process temporal information and low power consumption. However, current state-of-the-art methods limited their biological plausibility and performance because their neurons are generally built on the simple Leaky-Integrate-and-Fire (LIF) model. Due to the high level of dynamic complexity, modern neuron models have seldom been implemented in SNN practice. In this study, we adopt the Phase Plane Analysis (PPA) technique, a technique often utilized in neurodynamics field, to integrate a recent neuron model, namely, the Izhikevich neuron. Based on the findings in the advancement of neuroscience, the Izhikevich neuron model can be biologically plausible while maintaining comparable computational cost with LIF neurons. By utilizing the adopted PPA, we have accomplished putting neurons built with the modified Izhikevich model into SNN practice, dubbed as the Standardized Izhikevich Tonic (SIT) neuron. For performance, we evaluate the suggested technique for image classification tasks in self-built LIF-and-SIT-consisted SNNs, named Hybrid Neural Network (HNN) on static MNIST, Fashion-MNIST, CIFAR-10 datasets and neuromorphic N-MNIST, CIFAR10-DVS, and DVS128 Gesture datasets. The experimental results indicate that the suggested method achieves comparable accuracy while exhibiting more biologically realistic behaviors on nearly all test datasets, demonstrating the efficiency of this novel strategy in bridging the gap between neurodynamics and SNN practice.
邮件日期 2022年03月31日
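The Izhikevich model named above combines LIF-like computational cost with richer dynamics. A minimal Euler-integration sketch using the standard published equations with tonic-spiking parameters (a=0.02, b=0.2, c=-65, d=6); the paper's modified SIT variant will differ from this:

```python
def izhikevich(I, T=200, a=0.02, b=0.2, c=-65.0, d=6.0, dt=1.0):
    """Simulate an Izhikevich neuron for T ms and return its spike times.

    Standard model:
        v' = 0.04 v^2 + 5 v + 140 - u + I
        u' = a (b v - u)
        spike when v >= 30 mV, then v <- c, u <- u + d
    """
    v, u = c, b * c          # start at the resting point
    spikes = []
    for t in range(T):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # threshold crossing: record and reset
            spikes.append(t)
            v, u = c, u + d
    return spikes
```

With a constant input current of 10 the neuron fires tonically; with no input it settles at its stable resting potential and stays silent.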

396、基于事件的电位辅助脉冲神经网络视频重建

  • Event-based Video Reconstruction via Potential-assisted Spiking Neural Network 时间:2022年03月30日 第一作者:Lin Zhu 链接.
注释 Accepted by CVPR2022
邮件日期 2022年03月31日

395、神经生物学中的时空模式:未来人工智能综述

  • Spatiotemporal Patterns in Neurobiology: An Overview for Future Artificial Intelligence 时间:2022年03月29日 第一作者:Sean Knight 链接.

摘要:近年来,人们对开发模型和工具以解决脑组织中发现的复杂连接模式越来越感兴趣。具体来说,这是因为需要了解这些网络结构在多个时空尺度上如何产生涌现属性。我们认为,计算模型是阐明多尺度时空域上由复杂网络连接的异质神经元相互作用可能产生的功能的关键工具。在这里,我们回顾了几类模型,包括脉冲神经元、具有短期可塑性(STP)的整合和激发神经元、基于电导的具有短期可塑性(STP)的整合和激发模型,以及使用简单示例的群体密度神经场(PDNF)模型,重点介绍了神经科学的应用,同时也为人工智能提供了一些潜在的未来研究方向。这些计算方法使我们能够从实验和理论上探索改变潜在机制对产生的网络功能的影响。因此,我们希望这些研究将为人工智能算法的未来发展提供信息,并帮助验证我们对基于动物或人类实验的大脑过程的理解。

英文摘要 In recent years, there has been increasing interest in developing models and tools to address the complex patterns of connectivity found in brain tissue. Specifically, this is due to a need to understand how emergent properties emerge from these network structures at multiple spatiotemporal scales. We argue that computational models are key tools for elucidating the possible functionalities that can emerge from interactions of heterogeneous neurons connected by complex networks on multi-scale temporal and spatial domains. Here we review several classes of models including spiking neurons, integrate and fire neurons with short term plasticity (STP), conductance based integrate-and-fire models with STP, and population density neural field (PDNF) models using simple examples with emphasis on neuroscience applications while also providing some potential future research directions for AI. These computational approaches allow us to explore the impact of changing underlying mechanisms on resulting network function both experimentally as well as theoretically. Thus we hope these studies will inform future developments in artificial intelligence algorithms as well as help validate our understanding of brain processes based on experiments in animals or humans.
注释 8 pages
邮件日期 2022年03月30日

394、具有脉冲神经元的脑激励多层感知器

  • Brain-inspired Multilayer Perceptron with Spiking Neurons 时间:2022年03月28日 第一作者:Wenshuo Li 链接.

摘要:近年来,多层感知器(MLP)成为计算机视觉领域的研究热点。在没有归纳偏置的情况下,MLP在特征提取方面表现良好,并取得了惊人的效果。然而,由于其结构简单,性能在很大程度上取决于局部特征的通信机制。为了进一步提高MLP的性能,我们从脑启发神经网络中引入了信息通信机制。脉冲神经网络(SNN)是最著名的脑启发神经网络,在处理稀疏数据方面取得了巨大成功。SNN中的泄漏积分发放(LIF)神经元用于在不同时间步之间进行通信。在本文中,我们将LIF神经元的机制引入MLP模型,在不增加FLOPs的情况下获得更高的精度。我们提出了一种全精度的LIF操作来实现图像块(patch)之间的通信,包括不同方向上的水平LIF和垂直LIF。我们还提出使用分组LIF来提取更好的局部特征。借助LIF模块,我们的SNN-MLP模型在ImageNet数据集上分别以仅4.4G、8.5G和15.2G FLOPs达到81.9%、83.3%和83.5%的top-1精度,据我们所知这是最先进的结果。

英文摘要 Recently, Multilayer Perceptron (MLP) has become the hotspot in the field of computer vision tasks. Without inductive bias, MLPs perform well on feature extraction and achieve amazing results. However, due to the simplicity of their structures, the performance highly depends on the local features communication mechanism. To further improve the performance of MLP, we introduce information communication mechanisms from brain-inspired neural networks. Spiking Neural Network (SNN) is the most famous brain-inspired neural network, and achieves great success on dealing with sparse data. Leaky Integrate and Fire (LIF) neurons in SNNs are used to communicate between different time steps. In this paper, we incorporate the mechanism of LIF neurons into the MLP models, to achieve better accuracy without extra FLOPs. We propose a full-precision LIF operation to communicate between patches, including horizontal LIF and vertical LIF in different directions. We also propose to use group LIF to extract better local features. With LIF modules, our SNN-MLP model achieves 81.9%, 83.3% and 83.5% top-1 accuracy on ImageNet dataset with only 4.4G, 8.5G and 15.2G FLOPs, respectively, which are state-of-the-art results as far as we know.
注释 This paper is accepted by CVPR 2022
邮件日期 2022年03月29日
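The LIF mechanism used for communication between time steps above can be sketched in a few lines. The time constant, threshold, and reset below are placeholder values for illustration, not the ones used in SNN-MLP:

```python
import numpy as np

def lif_step(v, x, tau=2.0, v_th=1.0, v_reset=0.0):
    """One discrete time step of a leaky integrate-and-fire neuron.

    v: membrane potential carried over from the previous step
    x: input current at this step
    Returns (new_v, spike) where spike is 1.0 when v crossed v_th.
    """
    v = v + (x - v) / tau                 # leaky integration toward x
    spike = (v >= v_th).astype(v.dtype)   # binary spike on threshold
    v = np.where(spike > 0, v_reset, v)   # hard reset after firing
    return v, spike

# Drive one neuron with a constant input and collect its spike train.
v = np.zeros(1)
spikes = []
for _ in range(10):
    v, s = lif_step(v, np.ones(1) * 1.5)
    spikes.append(int(s[0]))
# The neuron settles into a regular fire-every-other-step pattern.
```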

393、一种基于神经流形的脉冲神经网络用于增强皮质内脑-机接口数据

  • A Spiking Neural Network based on Neural Manifold for Augmenting Intracortical Brain-Computer Interface Data 时间:2022年03月26日 第一作者:Shengjie Zheng 链接.

摘要:脑-机接口(BCI)将大脑中的神经信号转换为指令,以控制外部设备。然而,获得足够的训练数据既困难又有限。随着先进机器学习方法的出现,脑-机接口的能力得到了前所未有的增强;然而,这些方法需要大量训练数据,因此需要对有限的可用数据进行数据增强。这里,我们使用脉冲神经网络(SNN)作为数据生成器。它被誉为下一代神经网络,并因借鉴了生物神经元的神经信息处理方式而被认为是面向通用人工智能的算法之一。我们使用SNN生成可生物解释、且符合原始神经数据内在模式的神经脉冲信息。实验表明,该模型可以直接合成新的脉冲序列,从而提高BCI解码器的泛化能力。脉冲神经模型的输入和输出都是脉冲信息,这是一种脑启发的智能方法,将来可以更好地与BCI集成。

英文摘要 Brain-computer interfaces (BCIs) transform neural signals in the brain into instructions to control external devices. However, obtaining sufficient training data is difficult as well as limited. With the advent of advanced machine learning methods, the capability of brain-computer interfaces has been enhanced like never before, however, these methods require a large amount of data for training and thus require data augmentation of the limited data available. Here, we use spiking neural networks (SNN) as data generators. It is touted as the next-generation neural network and is considered as one of the algorithms oriented to general artificial intelligence because it borrows the neural information processing from biological neurons. We use the SNN to generate neural spike information that is bio-interpretable and conforms to the intrinsic patterns in the original neural data. Experiments show that the model can directly synthesize new spike trains, which in turn improves the generalization ability of the BCI decoder. Both the input and output of the spiking neural model are spike information, which is a brain-inspired intelligence approach that can be better integrated with BCI in the future.
注释 12pages , 9 figures
邮件日期 2022年04月12日

392、用人工神经网络发现生理神经元霍奇金-赫胥黎模型的动力学特征

  • Discovering dynamical features of Hodgkin-Huxley-type model of physiological neuron using artificial neural network 时间:2022年03月26日 第一作者:Pavel V. Kuptsov 链接.

摘要:我们考虑Hodgkin-Huxley型模型,它是一个具有两个快变量和一个慢变量的刚性常微分方程系统。对于所考虑的参数范围,模型的原始版本具有不稳定不动点和振荡吸引子,该吸引子表现出从簇发(bursting)到脉冲(spiking)动力学的分岔。此外还考虑了一种改进的情形,其中出现双稳态:参数空间中存在一个区域,不动点在其中变得稳定并与簇发吸引子共存。对于这两个系统,我们创建了能够重现其动力学的人工神经网络。所创建的网络作为循环映射运行,并在一定参数范围内随机采样的轨迹片段上进行训练。虽然网络只在振荡轨迹片段上训练,但它也能发现所考虑系统的不动点,其位置甚至特征值都与初始常微分方程的不动点非常吻合。对于双稳态模型,这意味着仅在其中一个解分支上训练的网络能够恢复另一个分支,而在训练期间从未见过它。在我们看来,这些结果能够推动复杂动力学重建与发现的新方法的发展。从实用角度来看,用神经网络重现动力学可以被视为一种适用于当代并行硬件和软件的数值建模替代方法。

英文摘要 We consider Hodgkin-Huxley-type model that is a stiff ODE system with two fast and one slow variables. For the parameter ranges under consideration the original version of the model has unstable fixed point and the oscillating attractor that demonstrates bifurcation from bursting to spiking dynamics. Also a modified version is considered where the bistability occurs such that an area in the parameter space appears where the fixed point becomes stable and coexists with the bursting attractor. For these two systems we create artificial neural networks that are able to reproduce their dynamics. The created networks operate as recurrent maps and are trained on trajectory cuts sampled at random parameter values within a certain range. Although the networks are trained only on oscillatory trajectory cuts, they also discover the fixed point of the considered systems. The position and even the eigenvalues coincide very well with the fixed point of the initial ODEs. For the bistable model it means that the network being trained only on one branch of the solutions recovers another branch without seeing it during the training. These results, as we see it, are able to trigger the development of new approaches to complex dynamics reconstruction and discovering. From the practical point of view reproducing dynamics with the neural network can be considered as a sort of alternative method of numerical modeling intended for use with contemporary parallel hard- and software.
注释 17 pages, 12 figures, 2 tables
邮件日期 2022年03月29日

391、脉冲神经流式二进制算术

  • Spiking Neural Streaming Binary Arithmetic 时间:2022年03月23日 第一作者:James B. Aimone 链接.

摘要:布尔函数和二进制算术运算是标准计算范式的核心。因此,计算领域的许多进展都集中在如何提高这些操作的效率,以及探索它们可以计算什么。为了充分利用新计算范式的优势,重要的是考虑它们提供了哪些独特的计算方法。然而,对于任何专用协处理器而言,布尔函数和二进制算术运算都很有用,尤其是可以通过在设备上对数据进行预处理和后处理,避免协处理器上不必要的输入输出。这一点尤其适用于脉冲神经形态体系结构,因为其中这些基本操作并非底层的基础运算,而是需要专门实现。在这里,我们讨论了一种有优势的流式二进制编码方法的意义,以及若干为精确计算基本布尔运算和二进制运算而设计的电路。

英文摘要 Boolean functions and binary arithmetic operations are central to standard computing paradigms. Accordingly, many advances in computing have focused upon how to make these operations more efficient as well as exploring what they can compute. To best leverage the advantages of novel computing paradigms it is important to consider what unique computing approaches they offer. However, for any special-purpose co-processor, Boolean functions and binary arithmetic operations are useful for, among other things, avoiding unnecessary I/O on-and-off the co-processor by pre- and post-processing data on-device. This is especially true for spiking neuromorphic architectures where these basic operations are not fundamental low-level operations. Instead, these functions require specific implementation. Here we discuss the implications of an advantageous streaming binary encoding method as well as a handful of circuits designed to exactly compute elementary Boolean and binary operations.
注释 Accepted and presented at the 2021 International Conference on Rebooting Computing (ICRC) Report-no: SAND2021-13472 C
邮件日期 2022年03月25日
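As a toy illustration of the kind of bit-serial streaming arithmetic the abstract above discusses (the paper's spiking circuits and encoding are more involved; here the carry variable plays the role of one bit of retained neuron state):

```python
def streaming_add(a_bits, b_bits):
    """Bit-serial addition of two LSB-first bit streams (1 = spike).
    At each time step the circuit emits the sum bit and keeps only
    the carry bit as state between steps."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s = a + b + carry
        out.append(s & 1)   # emitted spike: sum bit
        carry = s >> 1      # retained state: carry bit
    out.append(carry)       # flush the final carry
    return out

def to_lsb_bits(n, width):
    """Integer -> LSB-first bit list of the given width."""
    return [(n >> i) & 1 for i in range(width)]

def from_lsb_bits(bits):
    """LSB-first bit list -> integer."""
    return sum(b << i for i, b in enumerate(bits))
```

Usage: `from_lsb_bits(streaming_add(to_lsb_bits(13, 8), to_lsb_bits(25, 8)))` yields 38.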

390、稀疏主动卷积脉冲神经网络的高效硬件加速

  • Efficient Hardware Acceleration of Sparsely Active Convolutional Spiking Neural Networks 时间:2022年03月23日 第一作者:Jan Sommer 链接.

摘要:脉冲神经网络(SNN)以事件驱动的方式进行计算,以实现比标准神经网络更高效的计算。在SNN中,神经元输出(即激活)不是用实值激活编码的,而是用二进制脉冲序列编码的。相比传统神经网络而言使用SNN的动机,植根于SNN特殊的计算特性,尤其是神经输出激活的高度稀疏性。传统卷积神经网络(CNN)的成熟架构以大型处理单元(PE)空间阵列为特征,而在激活稀疏的情况下这些阵列的利用率仍然很低。我们提出了一种新架构,针对具有高度激活稀疏性的卷积SNN(CSNN)的处理进行了优化。在我们的体系结构中,主要策略是使用更少但利用率更高的PE。用于执行卷积的PE阵列仅与卷积核同样大小,只要还有脉冲需要处理,所有PE都可以处于活动状态。通过将特征图(即激活)压缩到队列中,然后逐脉冲处理,可以确保这种恒定的脉冲流。这种压缩在运行时由专用电路执行,从而实现自定时调度。这使得处理时间与脉冲数量直接成比例。我们采用一种称为存储器交错的新型存储组织方案,使用多个小型并行片上RAM高效地存储和检索单个神经元的膜电位。每个RAM都通过硬连线连接到其对应的PE,减少了开关电路,并允许RAM位于各自PE附近。我们在FPGA上实现了所提出的架构,与其他实现相比取得了显著加速,同时需要更少的硬件资源并保持更低的能耗。

英文摘要 Spiking Neural Networks (SNNs) compute in an event-based manner to achieve a more efficient computation than standard Neural Networks. In SNNs, neuronal outputs (i.e. activations) are not encoded with real-valued activations but with sequences of binary spikes. The motivation of using SNNs over conventional neural networks is rooted in the special computational aspects of SNNs, especially the very high degree of sparsity of neural output activations. Well established architectures for conventional Convolutional Neural Networks (CNNs) feature large spatial arrays of Processing Elements (PEs) that remain highly underutilized in the face of activation sparsity. We propose a novel architecture that is optimized for the processing of Convolutional SNNs (CSNNs) that feature a high degree of activation sparsity. In our architecture, the main strategy is to use less but highly utilized PEs. The PE array used to perform the convolution is only as large as the kernel size, allowing all PEs to be active as long as there are spikes to process. This constant flow of spikes is ensured by compressing the feature maps (i.e. the activations) into queues that can then be processed spike by spike. This compression is performed in run-time using dedicated circuitry, leading to a self-timed scheduling. This allows the processing time to scale directly with the number of spikes. A novel memory organization scheme called memory interlacing is used to efficiently store and retrieve the membrane potentials of the individual neurons using multiple small parallel on-chip RAMs. Each RAM is hardwired to its PE, reducing switching circuitry and allowing RAMs to be located in close proximity to the respective PE. We implemented the proposed architecture on an FPGA and achieved a significant speedup compared to other implementations while needing less hardware resources and maintaining a lower energy consumption.
注释 12 pages, 12 figures, 5 tables, submitted to CODES 2022
邮件日期 2022年03月24日
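The queue compression and spike-by-spike processing described above can be sketched in software. This is an illustrative model, not the hardware design: the scatter-style "convolution" below assumes zero padding and a symmetric kernel orientation, and simply shows why the work scales with the number of spikes times the kernel size.

```python
import numpy as np

def spikes_to_queue(spike_map):
    """Compress a binary spike map into a queue of (row, col) events:
    only coordinates of active neurons are stored, so downstream work
    scales with the spike count, not the map size."""
    rows, cols = np.nonzero(spike_map)
    return list(zip(rows.tolist(), cols.tolist()))

def event_driven_conv(spike_map, kernel):
    """Event-driven processing: for every queued spike, a kernel-sized
    PE array adds the kernel into the output around that location.
    Total work is O(#spikes * kernel_size)."""
    H, W = spike_map.shape
    kh, kw = kernel.shape
    out = np.zeros((H, W))
    for r, c in spikes_to_queue(spike_map):
        for i in range(kh):
            for j in range(kw):
                rr, cc = r + i - kh // 2, c + j - kw // 2
                if 0 <= rr < H and 0 <= cc < W:   # zero padding at borders
                    out[rr, cc] += kernel[i, j]
    return out
```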

389、有机对数域整合突触

  • Organic log-domain integrator synapse 时间:2022年03月23日 第一作者:Mohammad Javad Mirshojaeian Hosseini 链接.

摘要:突触在记忆、学习和认知中起着关键作用。它们的主要功能包括将突触前的电压脉冲转化为突触后的电流,以及缩放输入信号。有人提出了几种以大脑为灵感的结构来模拟生物突触的行为。虽然这些研究有助于探索神经系统的特性,但制造具有生物上合理的时间常数和可调增益、且生物相容且柔性的电路仍是一项挑战。这里展示了一种物理上柔性的有机对数域积分突触电路来应对这一挑战。特别是,该电路使用电活性有机材料制造,提供柔性和生物相容性,以及生物学上合理的时间常数(这是学习神经编码和编码时空模式的关键)。使用10 nF突触电容器,弯曲前和弯曲期间的时间常数分别达到126 ms和221 ms。我们在弯曲前和弯曲期间对柔性突触电路进行了表征,然后研究了加权电压、突触电容和突触前信号差异对时间常数的影响。

英文摘要 Synapses play a critical role in memory, learning, and cognition. Their main functions include converting pre-synaptic voltage spikes to post-synaptic currents, as well as scaling the input signal. Several brain-inspired architectures have been proposed to emulate the behavior of biological synapses. While these are useful to explore the properties of nervous systems, the challenge of making biocompatible and flexible circuits with biologically plausible time constants and tunable gain remains. Here, a physically flexible organic log-domain integrator synaptic circuit is shown to address this challenge. In particular, the circuit is fabricated using organic-based materials that are electrically active, offer flexibility and biocompatibility, as well as time constants (critical in learning neural codes and encoding spatiotemporal patterns) that are biologically plausible. Using a 10 nF synaptic capacitor, the time constant reached 126 ms and 221 ms before and during bending, respectively. The flexible synaptic circuit is characterized before and during bending, followed by studies on the effects of weighting voltage, synaptic capacitance, and disparity in pre-synaptic signals on the time constant.
注释 Accepted by Advanced Electronic Materials (18 pages, 17 figures) DOI: 10.1002/aelm.202100724
邮件日期 2022年03月24日

388、电压依赖性突触可塑性(VDSP):基于神经元膜电位的无监督概率Hebbian可塑性规则

  • Voltage-Dependent Synaptic Plasticity (VDSP): Unsupervised probabilistic Hebbian plasticity rule based on neurons membrane potential 时间:2022年03月21日 第一作者:Nikhil Garg 链接.

摘要:本研究提出了电压依赖突触可塑性(VDSP),这是一种新的脑启发无监督局部学习规则,用于在神经形态硬件上在线实现Hebb可塑性机制。所提出的VDSP学习规则仅在突触后神经元发放脉冲时更新突触电导,与标准的脉冲时间依赖可塑性(STDP)相比,更新次数减少了一半。该更新依赖于突触前神经元的膜电位,而膜电位作为神经元实现的一部分随时可得,因此不需要额外的存储内存。此外,该更新还对突触权重进行了正则化,防止重复刺激下权重爆炸或消失。我们通过严格的数学分析得出了VDSP和STDP之间的等价关系。为了验证VDSP的系统级性能,我们训练了一个用于手写数字识别的单层脉冲神经网络(SNN)。在MNIST数据集上,100个输出神经元的网络达到了85.01 $\pm$ 0.76%(平均值 $\pm$ 标准差)的准确率。当扩大网络规模时(400个输出神经元为89.93 $\pm$ 0.41%,500个神经元为90.56 $\pm$ 0.27%),性能进一步提高,这验证了所提出的学习规则适用于大规模计算机视觉任务。有趣的是,该学习规则比STDP更能适应输入信号的频率,并且不需要手动调整超参数。

英文摘要 This study proposes voltage-dependent-synaptic plasticity (VDSP), a novel brain-inspired unsupervised local learning rule for the online implementation of Hebb's plasticity mechanism on neuromorphic hardware. The proposed VDSP learning rule updates the synaptic conductance on the spike of the postsynaptic neuron only, which reduces by a factor of two the number of updates with respect to standard spike-timing-dependent plasticity (STDP). This update is dependent on the membrane potential of the presynaptic neuron, which is readily available as part of neuron implementation and hence does not require additional memory for storage. Moreover, the update is also regularized on synaptic weight and prevents explosion or vanishing of weights on repeated stimulation. Rigorous mathematical analysis is performed to draw an equivalence between VDSP and STDP. To validate the system-level performance of VDSP, we train a single-layer spiking neural network (SNN) for the recognition of handwritten digits. We report 85.01 $ \pm $ 0.76% (Mean $ \pm $ S.D.) accuracy for a network of 100 output neurons on the MNIST dataset. The performance improves when scaling the network size (89.93 $ \pm $ 0.41% for 400 output neurons, 90.56 $ \pm $ 0.27 for 500 neurons), which validates the applicability of the proposed learning rule for large-scale computer vision tasks. Interestingly, the learning rule better adapts than STDP to the frequency of input signal and does not require hand-tuning of hyperparameters.
邮件日期 2022年03月22日
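A rule with the properties the abstract describes (conductance changed only on a postsynaptic spike, sign and size taken from the presynaptic membrane potential, soft-bounded in weight) might look roughly like this. The functional form, learning rate, and bounds below are guesses for illustration, not the paper's equations:

```python
def vdsp_update(w, v_pre, eta=0.01, w_max=1.0):
    """Hypothetical VDSP-style update, applied only when the
    postsynaptic neuron spikes. A depolarized presynaptic membrane
    (v_pre > 0, i.e. recently active) potentiates; a hyperpolarized
    one depresses. Scaling the change by the current weight keeps w
    inside (0, w_max), the soft-bound regularization the abstract
    mentions."""
    if v_pre > 0:
        return w + eta * v_pre * (w_max - w)  # potentiation, soft-bounded above
    return w + eta * v_pre * w                # depression, soft-bounded below
```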

387、一种基于加速神经形态硬件的可扩展建模方法

  • A Scalable Approach to Modeling on Accelerated Neuromorphic Hardware 时间:2022年03月21日 第一作者:Eric Müller 链接.

摘要:神经形态系统为扩大计算研究的探索空间提供了机会。然而,将效率和可用性结合起来往往是一个挑战。这项工作介绍了BrainScaleS-2系统的软件方面,这是一种基于物理建模的混合加速神经形态硬件架构。我们将介绍BrainScaleS-2操作系统的关键方面:实验工作流、API分层、软件设计和平台操作。我们展示用例来讨论和导出软件的需求,并展示实现。重点在于新的系统和软件功能,如多隔间神经元、硬件在环训练的快速重新配置、嵌入式处理器的应用、非脉冲操作模式、交互式平台访问和可持续的硬件/软件协同开发。最后,我们讨论了硬件扩展、系统可用性和效率方面的进一步发展。

英文摘要 Neuromorphic systems open up opportunities to enlarge the explorative space for computational research. However, it is often challenging to unite efficiency and usability. This work presents the software aspects of this endeavor for the BrainScaleS-2 system, a hybrid accelerated neuromorphic hardware architecture based on physical modeling. We introduce key aspects of the BrainScaleS-2 Operating System: experiment workflow, API layering, software design, and platform operation. We present use cases to discuss and derive requirements for the software and showcase the implementation. The focus lies on novel system and software features such as multi-compartmental neurons, fast re-configuration for hardware-in-the-loop training, applications for the embedded processors, the non-spiking operation mode, interactive platform access, and sustainable hardware/software co-development. Finally, we discuss further developments in terms of hardware scale-up, system usability and efficiency.
邮件日期 2022年03月22日

386、脉冲神经网络的最新进展和新前沿

  • Recent Advances and New Frontiers in Spiking Neural Networks 时间:2022年03月12日 第一作者:Duzhen Zhang 链接.

摘要:近年来,脉冲神经网络(SNN)由于其丰富的时空动力学、多样的编码方案以及与神经形态硬件自然匹配的事件驱动特性,在脑启发智能领域受到了广泛关注。随着SNN的发展,受脑科学成果启发、以通用人工智能为目标的新兴研究领域——脑启发智能(brain-inspired intelligence)正变得越来越热门。在本文中,我们回顾了SNN的最新进展,并从四个主要研究主题讨论SNN的新前沿,包括基本要素(即脉冲神经元模型、编码方法和拓扑结构)、数据集、优化算法以及软硬件框架。我们希望这一综述能帮助研究人员更好地理解SNN,并激发新的工作来推进这一领域。

英文摘要 In recent years, spiking neural networks (SNNs) have received extensive attention in the field of brain-inspired intelligence due to their rich spatially-temporal dynamics, various coding schemes, and event-driven characteristics that naturally fit the neuromorphic hardware. With the development of SNNs, brain-inspired intelligence, an emerging research field inspired by brain science achievements and aiming at artificial general intelligence, is becoming hot. In this paper, we review the recent advances and discuss the new frontiers in SNNs from four major research topics, including essential elements (i.e., spiking neuron models, encoding methods, and topology structures), datasets, optimization algorithms, and software and hardware frameworks. We hope our survey can help researchers understand SNNs better and inspire new works to advance this field.
注释 Under review
邮件日期 2022年04月15日

385、基于推土机距离(EMD)的孪生脉冲神经网络监督训练

  • Supervised Training of Siamese Spiking Neural Networks with Earth's Mover Distance 时间:2022年02月20日 第一作者:Mateusz Pabian 链接.

摘要:本研究将高度通用的孪生(siamese)神经网络模型应用于事件数据领域。我们引入了一个有监督的训练框架,用于利用脉冲神经网络(SNN)优化脉冲序列之间的推土机距离(Earth Mover's Distance,EMD)。我们使用新颖的转换方案将MNIST数据集的图像转换到脉冲域,并在其上训练该模型。通过测量不同数据集编码类型下的分类器性能,评估输入图像孪生嵌入的质量。该模型取得了与现有基于SNN的方法类似的性能(F1得分高达0.9386),同时每个示例的分类仅使用约15%的隐层神经元。此外,未采用稀疏神经编码的模型比其稀疏对应模型慢约45%。这些特性使该模型适用于低能耗和低预测延迟的应用。

英文摘要 This study adapts the highly-versatile siamese neural network model to the event data domain. We introduce a supervised training framework for optimizing Earth's Mover Distance (EMD) between spike trains with spiking neural networks (SNN). We train this model on images of the MNIST dataset converted into spiking domain with novel conversion schemes. The quality of the siamese embeddings of input images was evaluated by measuring the classifier performance for different dataset coding types. The models achieved performance similar to existing SNN-based approaches (F1-score of up to 0.9386) while using only about 15% of hidden layer neurons to classify each example. Furthermore, models which did not employ a sparse neural code were about 45% slower than their sparse counterparts. These properties make the model suitable for low energy consumption and low prediction latency applications.
注释 Manuscript accepted for presentation at 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
邮件日期 2022年03月25日
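For spike trains, the Earth Mover's Distance the abstract optimizes reduces, in one dimension, to matching sorted spike times. A minimal sketch of that property (equal spike counts and unit mass per spike are simplifying assumptions; the paper trains SNNs against this distance rather than computing it post hoc):

```python
import numpy as np

def spike_train_emd(t1, t2):
    """Earth Mover's Distance between two spike trains with equal spike
    counts, treating each spike as a unit of mass on the time axis.
    For 1-D distributions the optimal transport plan pairs sorted spike
    times one-to-one, so EMD is the mean absolute time difference."""
    t1 = np.sort(np.asarray(t1, dtype=float))
    t2 = np.sort(np.asarray(t2, dtype=float))
    assert t1.shape == t2.shape, "sketch assumes equal spike counts"
    return float(np.mean(np.abs(t1 - t2)))
```

With unequal spike counts one would fall back to a general 1-D Wasserstein computation (e.g. `scipy.stats.wasserstein_distance`); the sorted-matching shortcut above only holds for equal masses.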

384、具有时空压缩和突触卷积块的超低延迟脉冲神经网络

  • Ultra-low Latency Spiking Neural Networks with Spatio-Temporal Compression and Synaptic Convolutional Block 时间:2022年03月18日 第一作者:Changqing Xu 链接.

摘要:脉冲神经网络(Spiking neural networks,SNN)是一种受大脑启发的模型,具有时空信息处理能力强、功耗低、生物合理性高等特点。有效的时空特征使其适用于事件流分类。然而,神经形态数据集,如N-MNIST、CIFAR10-DVS、DVS128手势,需要将单个事件聚合为具有更高时间分辨率的帧,用于事件流分类,这会导致较高的训练和推理延迟。在这项工作中,我们提出了一种时空压缩方法,将单个事件聚合为几个时间步的突触电流,以减少训练和推理延迟。为了在高压缩比下保持SNN的准确性,我们还提出了一种突触卷积块来平衡相邻时间步之间的剧烈变化,并引入具有可学习膜时间常数的多阈值泄漏积分发放(Leaky Integrate-and-Fire,LIF)神经元,以提高其信息处理能力。我们在神经形态N-MNIST、CIFAR10-DVS、DVS128手势数据集上评估了所提出的事件流分类方法。实验结果表明,我们提出的方法以更少的时间步长在几乎所有数据集上都优于最先进的精度。

英文摘要 Spiking neural networks (SNNs), as one of the brain-inspired models, has spatio-temporal information processing capability, low power feature, and high biological plausibility. The effective spatio-temporal feature makes it suitable for event streams classification. However, neuromorphic datasets, such as N-MNIST, CIFAR10-DVS, DVS128-gesture, need to aggregate individual events into frames with a new higher temporal resolution for event stream classification, which causes high training and inference latency. In this work, we proposed a spatio-temporal compression method to aggregate individual events into a few time steps of synaptic current to reduce the training and inference latency. To keep the accuracy of SNNs under high compression ratios, we also proposed a synaptic convolutional block to balance the dramatic change between adjacent time steps. And multi-threshold Leaky Integrate-and-Fire (LIF) with learnable membrane time constant is introduced to increase its information processing capability. We evaluate the proposed method for event streams classification tasks on neuromorphic N-MNIST, CIFAR10-DVS, DVS128 gesture datasets. The experiment results show that our proposed method outperforms the state-of-the-art accuracy on nearly all datasets, using fewer time steps.
邮件日期 2022年03月21日
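The compression step described above (aggregating individual events into a few time steps of synaptic current) can be sketched for a single pixel. The binning scheme, function name, and signed-polarity summation are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def compress_events(timestamps, polarities, n_steps):
    """Toy spatio-temporal compression: bin one pixel's event stream
    (timestamps plus +/-1 polarities) into n_steps time steps of summed
    'synaptic current', so inference runs for n_steps steps instead of
    one step per event."""
    timestamps = np.asarray(timestamps, dtype=float)
    polarities = np.asarray(polarities, dtype=float)
    t0, t1 = timestamps.min(), timestamps.max()
    edges = np.linspace(t0, t1, n_steps + 1)
    # Map each event to its time bin; the final edge folds into the last bin.
    idx = np.clip(np.searchsorted(edges, timestamps, side="right") - 1, 0, n_steps - 1)
    current = np.zeros(n_steps)
    np.add.at(current, idx, polarities)  # unbuffered accumulation per bin
    return current
```

The abstract's synaptic convolutional block would then smooth the sharp bin-to-bin transitions this kind of aggressive binning produces.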

383、使用推文和视频创建多媒体摘要

  • Creating Multimedia Summaries Using Tweets and Videos 时间:2022年03月16日 第一作者:Anietie Andy 链接.

摘要:当总统辩论或电视节目等热门电视事件播出时,人们会对其进行实时评论。在本文中,我们提出了一种简单而有效的方法,将社交媒体评论和视频结合起来,创建电视事件的多媒体摘要。我们的方法根据事件相关人员被提及次数的激增来识别事件中的场景,并自动从视频中选取在该激增时段内讨论并展示相关人员的推文和视频帧。

英文摘要 While popular televised events such as presidential debates or TV shows are airing, people provide commentary on them in real-time. In this paper, we propose a simple yet effective approach to combine social media commentary and videos to create a multimedia summary of televised events. Our approach identifies scenes from these events based on spikes of mentions of people involved in the event and automatically selects tweets and frames from the videos that occur during the time period of the spike that talk about and show the people being discussed.
注释 8 pages, 3 figures, 7 tables
邮件日期 2022年03月18日

382、用于快速精确递归神经网络的脉冲启发秩编码

  • Spike-inspired Rank Coding for Fast and Accurate Recurrent Neural Networks 时间:2022年03月16日 第一作者:Alan Jeffares 链接.
注释 Spotlight paper at ICLR 2022
邮件日期 2022年03月17日

381、Skydiver:利用时空工作负载平衡的脉冲神经网络加速器

  • Skydiver: A Spiking Neural Network Accelerator Exploiting Spatio-Temporal Workload Balance 时间:2022年03月14日 第一作者:Qinyu Chen 链接.

摘要:脉冲神经网络(SNN)因其更贴近真实大脑的计算模型,被视为人工神经网络(ANN)的一种有前途的替代方案。SNN的神经元放电随时间稀疏,即具有时空稀疏性,因此有助于实现高能效的硬件推理。然而,在硬件中利用SNN的时空稀疏性会导致不可预测且不平衡的工作负载,降低能效。在这项工作中,我们提出了一种利用时空工作负载平衡的基于FPGA的卷积SNN加速器Skydiver。我们提出了一种可逐通道预测相对工作负载的近似比例关系构造(APRC)方法,以及一种通道平衡工作负载调度(CBWS)方法,将硬件工作负载平衡率提高到90%以上。Skydiver在Xilinx XC7Z045 FPGA上实现,并在图像分割和MNIST分类任务上得到验证。结果表明,两项任务的吞吐量分别提高了1.4倍和1.2倍。Skydiver在分类任务中以98.5%的准确率实现了22.6 KFPS的吞吐量和42.4 uJ/图像的预测能量。

英文摘要 Spiking Neural Networks (SNNs) are developed as a promising alternative to Artificial Neural networks (ANNs) due to their more realistic brain-inspired computing models. SNNs have sparse neuron firing over time, i.e., spatio-temporal sparsity; thus, they are useful to enable energy-efficient hardware inference. However, exploiting spatio-temporal sparsity of SNNs in hardware leads to unpredictable and unbalanced workloads, degrading the energy efficiency. In this work, we propose an FPGA-based convolutional SNN accelerator called Skydiver that exploits spatio-temporal workload balance. We propose the Approximate Proportional Relation Construction (APRC) method that can predict the relative workload channel-wisely and a Channel-Balanced Workload Schedule (CBWS) method to increase the hardware workload balance ratio to over 90%. Skydiver was implemented on a Xilinx XC7Z045 FPGA and verified on image segmentation and MNIST classification tasks. Results show improved throughput by 1.4X and 1.2X for the two tasks. Skydiver achieved 22.6 KFPS throughput, and 42.4 uJ/Image prediction energy on the classification task with 98.5% accuracy.
注释 Accepted to be published in the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2022 DOI: 10.1109/TCAD.2022.3158834
邮件日期 2022年03月16日

380、脉冲神经网络集成电路:趋势和未来方向综述

  • Spiking Neural Network Integrated Circuits: A Review of Trends and Future Directions 时间:2022年03月14日 第一作者:Arindam Basu 链接.

摘要:本文回顾了脉冲神经网络(SNN)集成电路的设计,分析了混合信号核、全数字核和大规模多核设计的发展趋势。最近报道的SNN集成电路分为三大类:(a)具有专用于脉冲路由的NOC的大规模多核设计,(b)数字单核设计和(c)混合信号单核设计。最后,我们以对未来研究方向的展望结束本文。

英文摘要 In this paper, we reviewed Spiking neural network (SNN) integrated circuit designs and analyzed the trends among mixed-signal cores, fully digital cores and large-scale, multi-core designs. Recently reported SNN integrated circuits are compared under three broad categories: (a) Large-scale multi-core designs that have dedicated NOC for spike routing, (b) digital single-core designs and (c) mixed-signal single-core designs. Finally, we finish the paper with some directions for future progress.
邮件日期 2022年03月15日

379、面向脉冲神经网络的神经结构搜索

  • Neural Architecture Search for Spiking Neural Networks 时间:2022年03月12日 第一作者:Youngeun Kim 链接.
邮件日期 2022年03月15日

378、SoftSNN:软错误下脉冲神经网络加速器的低成本容错

  • SoftSNN: Low-Cost Fault Tolerance for Spiking Neural Network Accelerators under Soft Errors 时间:2022年03月12日 第一作者:Rachmad Vidya Wicaksana Putra 链接.
注释 To appear at the 59th IEEE/ACM Design Automation Conference (DAC), July 2022, San Francisco, CA, USA
邮件日期 2022年03月15日

377、SNN中的集成可塑性和网络适应性

  • Ensemble plasticity and network adaptability in SNNs 时间:2022年03月11日 第一作者:Mahima Milinda Alwis Weerasinghe 链接.

摘要:由于基于离散事件(即脉冲)的计算,人工脉冲神经网络(ASNN)有望提高信息处理效率。一些机器学习(ML)应用使用受生物启发的可塑性机制作为无监督学习技术,在保持效率的同时提高ASNN的鲁棒性。脉冲时间依赖可塑性(STDP)和内在可塑性(IP)(即动态脉冲阈值适应)是两种这样的机制,二者已被结合形成一种集成学习方法。然而,目前尚不清楚这种集成学习应如何基于脉冲活动进行调节。此外,先前的研究曾尝试在STDP之后进行基于阈值的突触修剪,以牺牲ASNN的性能为代价提高推理效率。然而,这种采用个体权重机制的结构适应在修剪时并不考虑脉冲活动,而脉冲活动是输入刺激更好的表征。我们设想,基于可塑性的脉冲调控和基于脉冲的修剪将使ASNN在低资源情形下表现更好。本文介绍了一种基于熵和网络激活的新型集成学习方法,并将其与一种完全基于脉冲活动运作的脉冲率神经元修剪技术相结合。使用两个脑电图(EEG)数据集作为输入,对一个采用单次通过学习训练的三层前馈ASNN进行分类实验。在学习过程中,我们观察到神经元根据脉冲率聚集成层级化的簇。研究发现,修剪低脉冲率的神经元簇会带来泛化能力的提升或可预测的性能下降。

英文摘要 Artificial Spiking Neural Networks (ASNNs) promise greater information processing efficiency because of discrete event-based (i.e., spike) computation. Several Machine Learning (ML) applications use biologically inspired plasticity mechanisms as unsupervised learning techniques to increase the robustness of ASNNs while preserving efficiency. Spike Time Dependent Plasticity (STDP) and Intrinsic Plasticity (IP) (i.e., dynamic spiking threshold adaptation) are two such mechanisms that have been combined to form an ensemble learning method. However, it is not clear how this ensemble learning should be regulated based on spiking activity. Moreover, previous studies have attempted threshold based synaptic pruning following STDP, to increase inference efficiency at the cost of performance in ASNNs. However, this type of structural adaptation, that employs individual weight mechanisms, does not consider spiking activity for pruning which is a better representation of input stimuli. We envisaged that plasticity-based spike-regulation and spike-based pruning will result in ASNNs that perform better in low resource situations. In this paper, a novel ensemble learning method based on entropy and network activation is introduced, which is amalgamated with a spike-rate neuron pruning technique, operated exclusively using spiking activity. Two electroencephalography (EEG) datasets are used as the input for classification experiments with a three-layer feed forward ASNN trained using one-pass learning. During the learning process, we observed neurons assembling into a hierarchy of clusters based on spiking rate. It was discovered that pruning lower spike-rate neuron clusters resulted in increased generalization or a predictable decline in performance.
注释 19 pages, 12 figures ACM-class: I.2.6
邮件日期 2022年03月15日
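The spike-rate-based pruning contrasted with weight-magnitude pruning above can be sketched in a few lines. The function and threshold are illustrative assumptions; the point is that whole neurons are dropped by firing rate, not individual synapses by weight:

```python
import numpy as np

def prune_by_spike_rate(weights, spike_counts, duration, rate_threshold):
    """Activity-based structural pruning (illustrative): drop hidden
    neurons whose firing rate over `duration` falls below
    `rate_threshold`, zeroing their outgoing weight rows, instead of
    pruning individual synapses by weight magnitude."""
    rates = np.asarray(spike_counts, dtype=float) / duration
    keep = rates >= rate_threshold            # boolean mask over neurons
    pruned = np.asarray(weights, dtype=float).copy()
    pruned[~keep, :] = 0.0                    # silence pruned neurons' outputs
    return pruned, keep
```

Per the abstract, pruning the low-rate clusters this mask identifies either improved generalization or degraded performance predictably.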

376、细胞自动机可以通过诱导轨迹相位共存来对数据进行分类

  • Cellular automata can classify data by inducing trajectory phase coexistence 时间:2022年03月10日 第一作者:Stephen Whitelam 链接.

摘要:我们证明了细胞自动机可以通过诱导一种动力学相共存形式来对数据进行分类。我们使用蒙特卡罗方法搜索一般的二维确定性自动机,这些自动机根据活动性,即从图像出发的轨迹中发生的状态变化数量,对图像进行分类。当自动机的深度是可训练参数时,搜索方案会识别出这样一类自动机:它们生成的动力学轨迹群体依初始条件不同而表现出高活动性或低活动性。这类自动机的行为类似于输出实际上为二值的非线性激活函数,犹如脉冲神经元的一个涌现版本。我们的工作将机器学习和储备池计算与磁体和玻璃态等物理系统中概念上相似的现象联系起来。

英文摘要 We show that cellular automata can classify data by inducing a form of dynamical phase coexistence. We use Monte Carlo methods to search for general two-dimensional deterministic automata that classify images on the basis of activity, the number of state changes that occur in a trajectory initiated from the image. When the depth of the automaton is a trainable parameter, the search scheme identifies automata that generate a population of dynamical trajectories displaying high or low activity, depending on initial conditions. Automata of this nature behave as nonlinear activation functions with an output that is effectively binary, resembling an emergent version of a spiking neuron. Our work connects machine learning and reservoir computing to phenomena conceptually similar to those seen in physical systems such as magnets and glasses.
邮件日期 2022年03月11日
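The "activity" statistic this paper classifies with (the number of state changes along a trajectory) is easy to make concrete. The rules below are toy stand-ins, not the automata found by the paper's Monte Carlo search:

```python
import numpy as np

def trajectory_activity(grid, rule, depth):
    """Run a deterministic 2-D automaton `rule` for `depth` steps and
    return the total activity: the number of cell state changes along
    the trajectory. Thresholding this count yields the binary,
    spiking-neuron-like output described in the abstract."""
    activity = 0
    for _ in range(depth):
        nxt = rule(grid)
        activity += int(np.sum(nxt != grid))
        grid = nxt
    return activity

def flip_all(grid):
    """Toy rule with maximal activity: every cell flips each step."""
    return 1 - grid
```

In the paper, the search tunes both the rule and the depth so that trajectories from different classes land in high- vs low-activity dynamical phases.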

375、SoftSNN:软错误下脉冲神经网络加速器的低成本容错

  • SoftSNN: Low-Cost Fault Tolerance for Spiking Neural Network Accelerators under Soft Errors 时间:2022年03月10日 第一作者:Rachmad Vidya Wicaksana Putra 链接.

摘要:专门的硬件加速器被设计和使用,以最大限度地提高脉冲神经网络(SNN)的性能效率。然而,这种加速器容易受到瞬态故障(即软错误)的影响,这些故障由高能粒子撞击引起,并在硬件层表现为位翻转。这些错误可能会改变SNN加速器计算引擎中的权重值和神经元操作,从而导致不正确的输出和精度下降。然而,对于SNN,计算引擎中软错误的影响以及相应的缓解技术尚未得到彻底研究。一个潜在的解决方案是使用冗余执行(重新执行)来确保正确的输出,但它会导致巨大的延迟和能量开销。为此,我们提出了SoftSNN,这是一种新方法,可以在不重新执行的情况下缓解SNN加速器权重寄存器(突触)和神经元中的软错误,从而在低延迟和低能量开销下保持精度。我们的SoftSNN方法采用以下关键步骤:(1)分析软错误下的SNN特征,以识别错误的权重和神经元操作,这是识别错误SNN行为所必需的;(2)一种限界与保护(Bound-and-Protect)技术,利用上述分析,通过限制权重取值并保护神经元免受错误操作来提高SNN的容错能力;(3)为神经硬件加速器设计轻量级硬件增强,以高效支持所提出的技术。实验结果表明,对于一个900神经元网络,即使在高故障率下,我们的SoftSNN也能将精度下降保持在3%以下,同时与重新执行技术相比,延迟和能量分别降低至多3倍和2.3倍。

英文摘要 Specialized hardware accelerators have been designed and employed to maximize the performance efficiency of Spiking Neural Networks (SNNs). However, such accelerators are vulnerable to transient faults (i.e., soft errors), which occur due to high-energy particle strikes, and manifest as bit flips at the hardware layer. These errors can change the weight values and neuron operations in the compute engine of SNN accelerators, thereby leading to incorrect outputs and accuracy degradation. However, the impact of soft errors in the compute engine and the respective mitigation techniques have not been thoroughly studied yet for SNNs. A potential solution is employing redundant executions (re-execution) for ensuring correct outputs, but it leads to huge latency and energy overheads. Toward this, we propose SoftSNN, a novel methodology to mitigate soft errors in the weight registers (synapses) and neurons of SNN accelerators without re-execution, thereby maintaining the accuracy with low latency and energy overheads. Our SoftSNN methodology employs the following key steps: (1) analyzing the SNN characteristics under soft errors to identify faulty weights and neuron operations, which are required for recognizing faulty SNN behavior; (2) a Bound-and-Protect technique that leverages this analysis to improve the SNN fault tolerance by bounding the weight values and protecting the neurons from faulty operations; and (3) devising lightweight hardware enhancements for the neural hardware accelerator to efficiently support the proposed technique. The experimental results show that, for a 900-neuron network with even a high fault rate, our SoftSNN maintains the accuracy degradation below 3%, while reducing latency and energy by up to 3x and 2.3x respectively, as compared to the re-execution technique.
注释 To appear at the 59th IEEE/ACM Design Automation Conference (DAC), July 2022, San Francisco, CA, USA
邮件日期 2022年03月11日
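The Bound-and-Protect idea (step 2 above) can be sketched in software terms. The thresholds and the zero-reset policy are illustrative assumptions; the paper implements this as lightweight hardware logic, not a NumPy pass:

```python
import numpy as np

def bound_and_protect(weights, w_bound, v_membrane, v_max):
    """Sketch of a Bound-and-Protect-style guard: saturate weight
    values that a soft-error bit flip may have driven to extreme
    magnitudes, and reset any neuron membrane potential pushed outside
    its plausible range, instead of re-executing the whole inference."""
    safe_w = np.clip(weights, -w_bound, w_bound)
    safe_v = np.where(np.abs(v_membrane) > v_max, 0.0, v_membrane)
    return safe_w, safe_v
```

The appeal over re-execution is that bounding is a constant-cost, always-on operation, so latency and energy overheads stay small even at high fault rates.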

374、一种具有无监督学习的全忆阻脉冲神经网络

  • A Fully Memristive Spiking Neural Network with Unsupervised Learning 时间:2022年03月10日 第一作者:Peng Zhou 链接.
邮件日期 2022年03月11日

373、SPICEprop:通过忆阻脉冲神经网络反向传播误差

  • SPICEprop: Backpropagating Errors Through Memristive Spiking Neural Networks 时间:2022年03月10日 第一作者:Peng Zhou 链接.
邮件日期 2022年03月11日

372、SPICEprop:通过忆阻脉冲神经网络反向传播误差

  • SPICEprop: Backpropagating Errors Through Memristive Spiking Neural Networks 时间:2022年03月08日 第一作者:Peng Zhou 链接.
邮件日期 2022年03月09日

371、基于STDP的脉冲神经网络监督学习算法

  • An STDP-Based Supervised Learning Algorithm for Spiking Neural Networks 时间:2022年03月07日 第一作者:Zhanhao Hu 链接.

摘要:与基于速率的人工神经网络相比,脉冲神经网络(SNN)为大脑提供了一个更具生物合理性的模型。但它们如何进行监督学习仍然是个谜。受Bengio等人近期工作的启发,我们提出了一种基于脉冲时间依赖可塑性(STDP)的监督学习算法,用于由泄漏积分发放(Leaky Integrate-and-Fire,LIF)神经元组成的分层SNN。为突触前神经元设计了一个时间窗口,只有落入该窗口的脉冲才参与STDP更新过程。模型在MNIST数据集上训练,分类精度接近由标准反向传播算法训练的具有类似结构的多层感知器(MLP)。

英文摘要 Compared with rate-based artificial neural networks, Spiking Neural Networks (SNN) provide a more biological plausible model for the brain. But how they perform supervised learning remains elusive. Inspired by recent works of Bengio et al., we propose a supervised learning algorithm based on Spike-Timing Dependent Plasticity (STDP) for a hierarchical SNN consisting of Leaky Integrate-and-fire (LIF) neurons. A time window is designed for the presynaptic neuron and only the spikes in this window take part in the STDP updating process. The model is trained on the MNIST dataset. The classification accuracy approaches that of a Multilayer Perceptron (MLP) with similar architecture trained by the standard back-propagation algorithm.
邮件日期 2022年03月08日
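The windowed STDP update described above can be sketched as follows. The exponential pair-based form and all constants are textbook-style assumptions for illustration; the paper's contribution is restricting which presynaptic spikes participate, shown here by the window test:

```python
import numpy as np

def windowed_stdp(w, t_pre_spikes, t_post, window,
                  a_plus=0.01, a_minus=0.012, tau=20.0):
    """STDP update where only presynaptic spikes within `window` of the
    postsynaptic spike contribute (constants are illustrative).
    Pre-before-post pairs potentiate, pre-after-post pairs depress,
    both decaying exponentially with the spike-time difference."""
    dw = 0.0
    for t_pre in t_pre_spikes:
        dt = t_post - t_pre
        if abs(dt) > window:
            continue  # spike falls outside the designed window: ignored
        if dt >= 0:
            dw += a_plus * np.exp(-dt / tau)   # causal pair -> potentiation
        else:
            dw -= a_minus * np.exp(dt / tau)   # anti-causal pair -> depression
    return w + dw
```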

370、基于事件的电位辅助脉冲神经网络视频重建

  • Event-based Video Reconstruction via Potential-assisted Spiking Neural Network 时间:2022年03月03日 第一作者:Lin Zhu 链接.
注释 Accepted at CVPR2022
邮件日期 2022年03月07日

369、重新思考标准化和剩余块在脉冲神经网络中的作用

  • Rethinking the role of normalization and residual blocks for spiking neural networks 时间:2022年03月03日 第一作者:Shin-ichi Ikegawa 链接.

摘要:受生物启发的脉冲神经网络(SNN)被广泛用于实现超低功耗。然而,由于隐藏层中脉冲神经元的过度放电,深层SNN不易训练。为了解决这个问题,我们提出了一种新颖而简单的归一化技术,称为突触后电位归一化。这种归一化从标准归一化中去掉减法项,并使用二阶原点矩代替方差作为除法项。通过对突触后电位进行这种简单的归一化,可以控制脉冲发放,使训练得以恰当进行。实验结果表明,使用我们的归一化的SNN优于使用其他归一化的模型。此外,借助预激活残差块,所提模型无需其他专用于SNN的特殊技术即可训练超过100层。

英文摘要 Biologically inspired spiking neural networks (SNNs) are widely used to realize ultralow-power energy consumption. However, deep SNNs are not easy to train due to the excessive firing of spiking neurons in the hidden layers. To tackle this problem, we propose a novel but simple normalization technique called postsynaptic potential normalization. This normalization removes the subtraction term from the standard normalization and uses the second raw moment instead of the variance as the division term. The spike firing can be controlled, enabling the training to proceed appropriately, by conducting this simple normalization to the postsynaptic potential. The experimental results show that SNNs with our normalization outperformed other models using other normalizations. Furthermore, through the pre-activation residual blocks, the proposed model can train with more than 100 layers without other special techniques dedicated to SNNs.
注释 14 pages, 9 figures, 3 tables
邮件日期 2022年03月04日
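The proposed normalization is fully specified by the abstract: drop the mean-subtraction term of standard normalization and divide by the square root of the second raw moment E[x^2] instead of the standard deviation. A minimal sketch (the batch-axis layout and epsilon are illustrative assumptions):

```python
import numpy as np

def psp_normalize(x, eps=1e-5):
    """Postsynaptic potential normalization as described: unlike
    standard normalization (x - mean) / std, no mean is subtracted and
    the division term is sqrt(E[x^2]) (second raw moment), not the
    standard deviation. `x` is (batch, features)."""
    second_raw_moment = np.mean(x ** 2, axis=0)
    return x / np.sqrt(second_raw_moment + eps)
```

Because nothing is subtracted, the sign of each postsynaptic potential is preserved, which matters when the value is thresholded to decide spike firing.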

368、用于噪声图像识别的随机量子神经网络

  • Random Quantum Neural Networks (RQNN) for Noisy Image Recognition 时间:2022年03月03日 第一作者:Debanjan Konar 链接.

摘要:经典的随机神经网络(RNN)已在决策、信号处理和图像识别任务中得到有效应用。然而,它们的实现仅限于确定性数字系统,这些系统输出概率分布,而非随机脉冲信号的随机行为。我们引入了一类新的有监督随机量子神经网络(RQNN),其鲁棒的训练策略能够更好地利用脉冲RNN的随机性。受量子信息理论和大脑神经元信息编码的时空随机脉冲特性启发,所提出的RQNN采用具有叠加态和振幅编码特征的混合经典-量子算法。我们通过PennyLane量子模拟器,在有限数量的量子比特(qubits)上依托混合经典-量子算法,对所提出的RQNN模型进行了广泛验证。在MNIST、FashionMNIST和KMNIST数据集上的实验表明,所提出的RQNN模型的平均分类准确率为94.9%。此外,实验结果表明,与经典随机神经网络(RNN)、经典脉冲神经网络(SNN)和经典卷积神经网络(AlexNet)相比,所提出的RQNN在噪声环境下有效且有弹性,图像分类精度也有所提升。此外,RQNN能够处理噪声,这对包括NISQ设备中的计算机视觉在内的多种应用非常有用。PyTorch代码(https://github.com/darthsimpus/RQN)已在GitHub上提供,以复现本文报告的结果。

英文摘要 Classical Random Neural Networks (RNNs) have demonstrated effective applications in decision making, signal processing, and image recognition tasks. However, their implementation has been limited to deterministic digital systems that output probability distributions in lieu of stochastic behaviors of random spiking signals. We introduce the novel class of supervised Random Quantum Neural Networks (RQNNs) with a robust training strategy to better exploit the random nature of the spiking RNN. The proposed RQNN employs hybrid classical-quantum algorithms with superposition state and amplitude encoding features, inspired by quantum information theory and the brain's spatial-temporal stochastic spiking property of neuron information encoding. We have extensively validated our proposed RQNN model, relying on hybrid classical-quantum algorithms via the PennyLane Quantum simulator with a limited number of \emph{qubits}. Experiments on the MNIST, FashionMNIST, and KMNIST datasets demonstrate that the proposed RQNN model achieves an average classification accuracy of $94.9\%$. Additionally, the experimental findings illustrate the proposed RQNN's effectiveness and resilience in noisy settings, with enhanced image classification accuracy when compared to the classical counterparts (RNNs), classical Spiking Neural Networks (SNNs), and the classical convolutional neural network (AlexNet). Furthermore, the RQNN can deal with noise, which is useful for various applications, including computer vision in NISQ devices. The PyTorch code (https://github.com/darthsimpus/RQN) is made available on GitHub to reproduce the results reported in this manuscript.
注释 This article is submitted to Nature Machine Intelligence journal for review and possible publications
邮件日期 2022年03月04日

367、一种具有无监督学习的全忆阻脉冲神经网络

  • A Fully Memristive Spiking Neural Network with Unsupervised Learning 时间:2022年03月02日 第一作者:Peng Zhou 链接.

摘要:我们提出了一个由物理可实现的忆阻神经元和忆阻突触组成的全忆阻脉冲神经网络(MSNN),以实现无监督的脉冲时间依赖可塑性(STDP)学习规则。该系统是完全忆阻性的,因为神经元和突触的动力学都可以用忆阻器实现。神经元采用SPICE级忆阻积分发放(MIF)模型实现,该模型由实现不同的去极化、超极化和复极化电压波形所需的最少电路元件组成。所提出的MSNN独特地利用忆阻突触中的累积权重变化来实现STDP学习,这些权重变化源于训练过程中突触前和突触后脉冲电压信号在突触两端引起的电压波形变化。研究了两种MSNN结构:1)生物学上合理的记忆检索系统,2)多类分类系统。我们的电路仿真结果通过复现生物记忆检索机制验证了MSNN的无监督学习效能,并在大规模判别式MSNN的4模式识别问题中实现了97.5%的准确率。

英文摘要 We present a fully memristive spiking neural network (MSNN) consisting of physically-realizable memristive neurons and memristive synapses to implement an unsupervised Spiking Time Dependent Plasticity (STDP) learning rule. The system is fully memristive in that both neuronal and synaptic dynamics can be realized by using memristors. The neuron is implemented using the SPICE-level memristive integrate-and-fire (MIF) model, which consists of a minimal number of circuit elements necessary to achieve distinct depolarization, hyperpolarization, and repolarization voltage waveforms. The proposed MSNN uniquely implements STDP learning by using cumulative weight changes in memristive synapses from the voltage waveform changes across the synapses, which arise from the presynaptic and postsynaptic spiking voltage signals during the training process. Two types of MSNN architectures are investigated: 1) a biologically plausible memory retrieval system, and 2) a multi-class classification system. Our circuit simulation results verify the MSNN's unsupervised learning efficacy by replicating biological memory retrieval mechanisms, and achieving 97.5% accuracy in a 4-pattern recognition problem in a large scale discriminative MSNN.
邮件日期 2022年03月04日

366、SPICEprop:通过忆阻脉冲神经网络反向传播误差

  • SPICEprop: Backpropagating Errors Through Memristive Spiking Neural Networks 时间:2022年03月02日 第一作者:Peng Zhou 链接.

摘要:我们提出了一种全忆阻脉冲神经网络(MSNN),由使用时间反向传播(BPTT)学习规则训练的新型忆阻神经元组成。梯度下降直接应用于忆阻积分发放(MIF)神经元,该神经元使用模拟SPICE电路模型设计,可产生明显的去极化、超极化和复极化电压波形。突触权重由BPTT利用MIF神经元模型的膜电位进行训练,并可在忆阻交叉阵列上处理。MIF神经元模型具有自然的脉冲动力学且完全可微,从而无需脉冲神经网络文献中普遍使用的梯度近似。尽管直接在SPICE电路模型上训练增加了复杂性,我们在MNIST测试集和Fashion-MNIST测试集上的准确率仍分别达到97.58%和75.26%,是所有全忆阻MSNN中的最高水平。

英文摘要 We present a fully memristive spiking neural network (MSNN) consisting of novel memristive neurons trained using the backpropagation through time (BPTT) learning rule. Gradient descent is applied directly to the memristive integrated-and-fire (MIF) neuron designed using analog SPICE circuit models, which generates distinct depolarization, hyperpolarization, and repolarization voltage waveforms. Synaptic weights are trained by BPTT using the membrane potential of the MIF neuron model and can be processed on memristive crossbars. The natural spiking dynamics of the MIF neuron model are fully differentiable, eliminating the need for gradient approximations that are prevalent in the spiking neural network literature. Despite the added complexity of training directly on SPICE circuit models, we achieve 97.58% accuracy on the MNIST testing dataset and 75.26% on the Fashion-MNIST testing dataset, the highest accuracies among all fully MSNNs.
邮件日期 2022年03月04日

365、重新思考预培训作为从ANN到SNN的桥梁

  • Rethinking Pretraining as a Bridge from ANNs to SNNs 时间:2022年03月02日 第一作者:Yihan Lin 链接.

摘要:脉冲神经网络(Spiking neural networks,SNN)是一类典型的脑启发模型,具有丰富的神经元动力学、多样的编码方案和低功耗特性。如何获得高精度模型一直是SNN领域的主要挑战。目前有两种主流方法,即通过将训练好的人工神经网络(ANN)转换为对应的SNN来获得转换后的SNN,或直接训练SNN。然而,转换后的SNN推理时间过长,而直接训练SNN通常非常昂贵且低效。在这项工作中,借助预训练技术和基于BP的深度SNN训练机制,结合两种不同训练方法的思想,提出了一种新的SNN训练范式。我们认为所提出的范式是训练SNN的更高效管道。该管道包括用于静态数据迁移任务的pipeS和用于动态数据迁移任务的pipeD。在大规模事件驱动数据集ES-ImageNet上获得了SOTA结果。在训练加速方面,我们在ImageNet-1K上仅用1/10的训练时间、在ES-ImageNet上仅用2/5的训练时间,就达到了与类似LIF-SNN相同(或更高)的最佳精度,并为新数据集ES-UCF101提供了时间与精度的基准。这些实验结果揭示了ANN与SNN之间参数功能的相似性,也展示了该SNN训练管道的多种潜在应用。

英文摘要 Spiking neural networks (SNNs) are known as a typical kind of brain-inspired models with their unique features of rich neuronal dynamics, diverse coding schemes and low power consumption properties. How to obtain a high-accuracy model has always been the main challenge in the field of SNN. Currently, there are two mainstream methods, i.e., obtaining a converted SNN through converting a well-trained Artificial Neural Network (ANN) to its SNN counterpart or training an SNN directly. However, the inference time of a converted SNN is too long, while SNN training is generally very costly and inefficient. In this work, a new SNN training paradigm is proposed by combining the concepts of the two different training methods with the help of the pretrain technique and BP-based deep SNN training mechanism. We believe that the proposed paradigm is a more efficient pipeline for training SNNs. The pipeline includes pipeS for static data transfer tasks and pipeD for dynamic data transfer tasks. SOTA results are obtained in a large-scale event-driven dataset ES-ImageNet. For training acceleration, we achieve the same (or higher) best accuracy as similar LIF-SNNs using 1/10 training time on ImageNet-1K and 2/5 training time on ES-ImageNet and also provide a time-accuracy benchmark for a new dataset ES-UCF101. These experimental results reveal the similarity of the functions of parameters between ANNs and SNNs and also demonstrate the various potential applications of this SNN training pipeline.
注释 8 pages, 4 figures
邮件日期 2022年03月03日

364、利用基于时间的神经元提高脉冲神经网络的精度

  • Improving Spiking Neural Network Accuracy Using Time-based Neurons 时间:2022年03月02日 第一作者:Hanseok Kim 链接.
注释 Accepted in ISCAS 2022
邮件日期 2022年03月03日

363、神经形态硬件中的时间编码脉冲傅里叶变换

  • Time-coded Spiking Fourier Transform in Neuromorphic Hardware 时间:2022年02月25日 第一作者:Javier López-R 链接.

摘要:经过几十年对计算系统的持续优化,摩尔定律正走向终结。然而,人们对快速高效的处理系统的需求不断增长,这些系统需要在减少系统占用的同时处理大量数据流。神经形态计算通过创建随时间以二进制事件通信的去中心化架构来满足这一需求。尽管该领域在过去几年中快速增长,但仍需要新的算法来发挥这种新兴计算范式的潜力,并推动先进神经形态芯片的设计。在这项工作中,我们提出了一种在数学上等价于傅里叶变换的基于时间的脉冲神经网络。我们在神经形态芯片Loihi中实现了该网络,并用车载调频连续波雷达在五种不同的真实场景中进行了实验。实验结果验证了该算法,我们希望这些结果能够促进专用(ad hoc)神经形态芯片的设计,从而提高最先进数字信号处理器的效率,并鼓励面向信号处理的神经形态计算研究。

英文摘要 After several decades of continuously optimizing computing systems, the Moore's law is reaching its end. However, there is an increasing demand for fast and efficient processing systems that can handle large streams of data while decreasing system footprints. Neuromorphic computing answers this need by creating decentralized architectures that communicate with binary events over time. Despite its rapid growth in the last few years, novel algorithms are needed that can leverage the potential of this emerging computing paradigm and can stimulate the design of advanced neuromorphic chips. In this work, we propose a time-based spiking neural network that is mathematically equivalent to the Fourier transform. We implemented the network in the neuromorphic chip Loihi and conducted experiments on five different real scenarios with an automotive frequency modulated continuous wave radar. Experimental results validate the algorithm, and we hope they prompt the design of ad hoc neuromorphic chips that can improve the efficiency of state-of-the-art digital signal processors and encourage research on neuromorphic computing for signal processing.
注释 Submitted to IEEE Transactions on Computers. Revised version
邮件日期 2022年02月28日

362、生物纠错码产生容错神经网络

  • Biological error correction codes generate fault-tolerant neural networks 时间:2022年02月25日 第一作者:Alex 链接.

摘要:在深度学习中,容错计算是否可行一直是一个悬而未决的问题:仅使用不可靠的神经元能否实现任意可靠的计算?在哺乳动物的大脑皮层中,人们观察到被称为网格码的模拟纠错码可以保护状态免受神经脉冲噪声的影响,但它们在信息处理中的作用尚不清楚。在这里,我们使用这些生物代码来表明,如果每个神经元的不完美性低于一个尖锐的阈值,则可以实现一个通用的容错神经网络,我们发现该阈值在数量级上与生物神经元中观察到的噪声相一致。从故障到容错神经计算的急剧相变的发现为理解人工智能和神经科学中的噪声模拟系统开辟了一条道路。

英文摘要 It has been an open question in deep learning if fault-tolerant computation is possible: can arbitrarily reliable computation be achieved using only unreliable neurons? In the mammalian cortex, analog error correction codes known as grid codes have been observed to protect states against neural spiking noise, but their role in information processing is unclear. Here, we use these biological codes to show that a universal fault-tolerant neural network can be achieved if the faultiness of each neuron lies below a sharp threshold, which we find coincides in order of magnitude with noise observed in biological neurons. The discovery of a sharp phase transition from faulty to fault-tolerant neural computation opens a path towards understanding noisy analog systems in artificial intelligence and neuroscience.
邮件日期 2022年02月28日

361、用脉冲神经网络进化学习强化学习任务

  • Evolving-to-Learn Reinforcement Learning Tasks with Spiking Neural Networks 时间:2022年02月24日 第一作者:J. Lu 链接.

摘要:受自然神经系统启发,突触可塑性规则被用于训练具有局部信息的脉冲神经网络,使其适合在神经形态硬件上进行在线学习。然而,当实现这些规则来学习不同的新任务时,它们通常需要在依赖于任务的微调方面进行大量工作。本文旨在通过采用进化算法,为手头的任务进化出合适的突触可塑性规则,使这一过程变得更容易。更具体地说,我们提供了一组不同的局部信号、一组数学算子和一个全局奖励信号,然后笛卡尔遗传规划过程从这些组件中找到一个最优学习规则。使用这种方法,我们找到了成功解决XOR和cart-pole任务的学习规则,并发现了优于文献中基线规则的新学习规则。

英文摘要 Inspired by the natural nervous system, synaptic plasticity rules are applied to train spiking neural networks with local information, making them suitable for online learning on neuromorphic hardware. However, when such rules are implemented to learn different new tasks, they usually require a significant amount of work on task-dependent fine-tuning. This paper aims to make this process easier by employing an evolutionary algorithm that evolves suitable synaptic plasticity rules for the task at hand. More specifically, we provide a set of various local signals, a set of mathematical operators, and a global reward signal, after which a Cartesian genetic programming process finds an optimal learning rule from these components. Using this approach, we find learning rules that successfully solve an XOR and cart-pole task, and discover new learning rules that outperform the baseline rules from literature.
邮件日期 2022年02月28日

360、基于梯度重加权的脉冲神经网络时间有效训练

  • Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting 时间:2022年02月24日 第一作者:Shikuang Deng 链接.

摘要:近年来,脑启发的脉冲神经网络(SNN)因其事件驱动和高能效的特点引起了广泛的研究兴趣。然而,由于其激活函数不可微,深层SNN难以有效训练,这使得传统人工神经网络(ANN)中常用的梯度下降方法失效。虽然采用替代梯度(SG)在形式上允许损失的反向传播,但离散脉冲机制实际上使SNN的损失地形不同于ANN,导致替代梯度方法无法达到与ANN相当的精度。在本文中,我们首先分析了为什么当前使用替代梯度的直接训练方法会导致SNN泛化性差。然后,我们引入时间有效训练(TET)方法来补偿带SG的梯度下降过程中的动量损失,从而使训练过程收敛到更平坦、泛化性更好的极小值。同时,我们证明了TET提高了SNN的时间可扩展性,并诱导出一种可在时间上继承的加速训练。我们的方法在所有报告的主流数据集(包括CIFAR-10/100和ImageNet)上始终优于SOTA。值得注意的是,在DVS-CIFAR10上,我们获得了83%的top-1精度,比现有最先进水平提高了10%以上。代码见 https://github.com/Gus-Lab/temporal_efficient_training 。

英文摘要 Recently, brain-inspired spiking neuron networks (SNNs) have attracted widespread research interest because of their event-driven and energy-efficient characteristics. Still, it is difficult to efficiently train deep SNNs due to the non-differentiability of its activation function, which disables the typically used gradient descent approaches for traditional artificial neural networks (ANNs). Although the adoption of surrogate gradient (SG) formally allows for the back-propagation of losses, the discrete spiking mechanism actually differentiates the loss landscape of SNNs from that of ANNs, failing the surrogate gradient methods to achieve comparable accuracy as for ANNs. In this paper, we first analyze why the current direct training approach with surrogate gradient results in SNNs with poor generalizability. Then we introduce the temporal efficient training (TET) approach to compensate for the loss of momentum in the gradient descent with SG so that the training process can converge into flatter minima with better generalizability. Meanwhile, we demonstrate that TET improves the temporal scalability of SNN and induces a temporal inheritable training for acceleration. Our method consistently outperforms the SOTA on all reported mainstream datasets, including CIFAR-10/100 and ImageNet. Remarkably on DVS-CIFAR10, we obtained 83$\%$ top-1 accuracy, over 10$\%$ improvement compared to existing state of the art. Codes are available at \url{https://github.com/Gus-Lab/temporal_efficient_training}.
注释 Published as a conference paper at ICLR 2022
邮件日期 2022年02月25日
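TET的核心做法是把损失从“先对时间平均输出、再算一次交叉熵”改为“每个时间步各算一次交叉熵再平均”。下面用纯Python给出一个极简示意(非论文官方实现,函数名与数值均为示例):

```python
import math

def softmax_ce(logits, label):
    # 单个样本的交叉熵:-log softmax(logits)[label](数值稳定写法)
    m = max(logits)
    logz = m + math.log(sum(math.exp(v - m) for v in logits))
    return logz - logits[label]

def tet_loss(logits_per_t, label):
    # TET:对每个时间步的输出单独计算交叉熵,再在时间维上取平均
    return sum(softmax_ce(l, label) for l in logits_per_t) / len(logits_per_t)

def mean_output_loss(logits_per_t, label):
    # 对比基线:先在时间维上平均logits,再只计算一次交叉熵
    T = len(logits_per_t)
    mean_logits = [sum(l[c] for l in logits_per_t) / T
                   for c in range(len(logits_per_t[0]))]
    return softmax_ce(mean_logits, label)
```

由交叉熵对logits的凸性(Jensen不等式)可知,逐时间步的平均损失不小于对平均输出的损失,因此TET对每个时刻的输出都施加了更强的约束。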

359、BioLCNet:奖励调制的局部连接脉冲神经网络

  • BioLCNet: Reward-modulated Locally Connected Spiking Neural Networks 时间:2022年02月24日 第一作者:Hafez Ghaemi 链接.
注释 9 pages, 6 figures ACM-class: I.2.6; I.5.1
邮件日期 2022年02月25日

358、脉冲神经元的自然梯度学习

  • Natural-gradient learning for spiking neurons 时间:2022年02月23日 第一作者:Elena Kreutzer 链接.
注释 Joint senior authorship: Walter M. Senn and Mihai A. Petrovici
邮件日期 2022年02月25日

357、用于时空特征学习的脉冲时间相关可塑性网络的新视角

  • A New Look at Spike-Timing-Dependent Plasticity Networks for Spatio-Temporal Feature Learning 时间:2022年02月22日 第一作者:Ali Safa 链接.
邮件日期 2022年02月23日

356、早产儿高效节能呼吸异常检测

  • Energy-Efficient Respiratory Anomaly Detection in Premature Newborn Infants 时间:2022年02月21日 第一作者:Ankita Paul 链接.

摘要:准确监测早产儿的呼吸频率对于按需启动医疗干预至关重要。有线技术可能对患者具有侵入性和干扰性。我们提出了一种面向早产儿的、基于深度学习的可穿戴监测系统,利用从佩戴在婴儿身上的无创可穿戴Bellypatch无线采集的信号来预测呼吸暂停。我们提出了一个五阶段的设计流程,包括数据收集和标注、特征缩放、带超参数调优的模型选择、模型训练和验证、模型测试和部署。所使用的模型是一个一维卷积神经网络(1DCNN)结构,具有1个卷积层、1个池化层和3个全连接层,达到了97.15%的精度。为了解决可穿戴处理的能量限制,我们探索了几种量化技术,并分析了其性能和能耗。我们提出了一种新颖的基于脉冲神经网络(SNN)的呼吸分类解决方案,可以在事件驱动的神经形态硬件上实现,并给出了将基线1DCNN的模拟运算转换为等效脉冲运算的方法。我们利用转换后SNN的参数进行设计空间探索,以生成具有不同精度和能耗足迹的推理方案。我们选择的方案与基线1DCNN模型相比,以低18倍的能量实现了93.33%的精度;此外,所提出的SNN解决方案还能以低4倍的能量达到相近的精度。

英文摘要 Precise monitoring of respiratory rate in premature infants is essential to initiate medical interventions as required. Wired technologies can be invasive and obtrusive to the patients. We propose a Deep Learning enabled wearable monitoring system for premature newborn infants, where respiratory cessation is predicted using signals that are collected wirelessly from a non-invasive wearable Bellypatch put on infant's body. We propose a five-stage design pipeline involving data collection and labeling, feature scaling, model selection with hyperparameter tuning, model training and validation, model testing and deployment. The model used is a 1-D Convolutional Neural Network (1DCNN) architecture with 1 convolutional layer, 1 pooling layer and 3 fully-connected layers, achieving 97.15% accuracy. To address energy limitations of wearable processing, several quantization techniques are explored and their performance and energy consumption are analyzed. We propose a novel Spiking-Neural-Network(SNN) based respiratory classification solution, which can be implemented on event-driven neuromorphic hardware. We propose an approach to convert the analog operations of our baseline 1DCNN to their spiking equivalent. We perform a design-space exploration using the parameters of the converted SNN to generate inference solutions having different accuracy and energy footprints. We select a solution that achieves 93.33% accuracy with 18 times lower energy compared with baseline 1DCNN model. Additionally the proposed SNN solution achieves similar accuracy but with 4 times less energy.
邮件日期 2022年02月23日
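摘要中“把模拟运算转换为等效脉冲运算”的基本原理,可以用软复位积分-发放(IF)神经元对ReLU的发放率近似来示意(简化示例,并非论文的原始转换流程,阈值与时长均为示例值):

```python
def if_rate(x, theta=1.0, T=200):
    # 软复位IF神经元:对恒定输入x,T步内的平均发放率
    # 近似于 clamp(x/theta, 0, 1),即归一化输入上的ReLU
    v, spikes = 0.0, 0
    for _ in range(T):
        v += x
        if v >= theta:
            v -= theta   # 软复位:减去阈值而非清零,保留余量
            spikes += 1
    return spikes / T
```

这正是ANN到SNN转换的核心对应关系:模拟神经元的激活值由脉冲神经元的发放率来编码。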

355、在神经形态结构上实现脉冲神经网络:综述

  • Implementing Spiking Neural Networks on Neuromorphic Architectures: A Review 时间:2022年02月17日 第一作者:Phu Khanh Huynh 链接.

摘要:最近,工业界和学术界都提出了几种不同的神经形态系统来执行使用脉冲神经网络(SNN)设计的机器学习应用程序。随着设计和技术前沿的日益复杂,为此类系统编程以接纳和执行机器学习应用程序变得越来越具有挑战性。此外,神经形态系统需要保证实时性能,消耗较低的能量,并提供对逻辑和内存故障的容忍度。因此,显然需要系统软件框架,能够在当前和新兴的神经形态系统上实现机器学习应用,同时解决性能、能量和可靠性问题。在这里,我们将对基于平台的设计和软硬件协同设计提出的此类框架进行全面概述。我们强调了神经形态计算系统软件技术领域未来面临的挑战和机遇。

英文摘要 Recently, both industry and academia have proposed several different neuromorphic systems to execute machine learning applications that are designed using Spiking Neural Networks (SNNs). With the growing complexity on design and technology fronts, programming such systems to admit and execute a machine learning application is becoming increasingly challenging. Additionally, neuromorphic systems are required to guarantee real-time performance, consume lower energy, and provide tolerance to logic and memory failures. Consequently, there is a clear need for system software frameworks that can implement machine learning applications on current and emerging neuromorphic systems, and simultaneously address performance, energy, and reliability. Here, we provide a comprehensive overview of such frameworks proposed for both, platform-based design and hardware-software co-design. We highlight challenges and opportunities that the future holds in the area of system software technology for neuromorphic computing.
邮件日期 2022年02月21日

354、通过解决脉冲神经网络中的退化问题来推进深度残差学习

  • Advancing Deep Residual Learning by Solving the Crux of Degradation in Spiking Neural Networks 时间:2022年02月17日 第一作者:Yifan Hu 链接.
注释 It is an older version of arXiv:2112.08954 and was submitted by mistake
邮件日期 2022年02月18日

353、学习探测飞行中的人:一个基于仿生事件的无人机视觉系统

  • Learning to Detect People on the Fly: A Bio-inspired Event-based Visual System for Drones 时间:2022年02月16日 第一作者:Ali Safa 链接.

摘要:我们首次证明,配备脉冲时序依赖可塑性(STDP)学习的、具有生物学合理性的脉冲神经网络(SNN)可以利用受视网膜启发的基于事件的相机数据,在线持续学习检测行走的人。我们的流程如下:首先,向卷积SNN-STDP系统展示由飞行无人机拍摄的行人的短事件数据序列(<2分钟),该系统还从卷积读出层接收教师脉冲信号(构成半监督系统);然后停止STDP自适应,并在测试序列上评估学习后的系统。我们进行了若干实验来研究系统中关键机制的影响,并将我们的精确率-召回率(precision-recall)性能与使用RGB帧或基于事件的相机帧的常规训练CNN进行比较。

英文摘要 We demonstrate for the first time that a biologically-plausible spiking neural network (SNN) equipped with Spike-Timing-Dependent Plasticity (STDP) learning can continuously learn to detect walking people on the fly using retina-inspired, event-based camera data. Our pipeline works as follows. First, a short sequence of event data (< 2 minutes), capturing a walking human from a flying drone, is shown to a convolutional SNN-STDP system which also receives teacher spiking signals from a convolutional readout (forming a semi-supervised system). Then, STDP adaptation is stopped and the learned system is assessed on testing sequences. We conduct several experiments to study the effect of key mechanisms in our system and we compare our precision-recall performance to conventionally-trained CNNs working with either RGB or event-based camera frames.
邮件日期 2022年02月17日
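摘要中的STDP学习可以用经典的成对(pair-based)指数STDP规则来示意:突触前脉冲先于突触后脉冲则增强(LTP),反之则抑制(LTD),幅度随时间差指数衰减。以下参数为常见示例值,并非该论文的具体设置:

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    # 成对STDP:根据突触前/后脉冲时间差计算权重增量
    dt = t_post - t_pre
    if dt > 0:    # pre先于post:长时程增强(LTP)
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:  # pre晚于post:长时程抑制(LTD)
        return -a_minus * math.exp(dt / tau)
    return 0.0
```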

352、AutoSNN:走向节能的脉冲神经网络

  • AutoSNN: Towards Energy-Efficient Spiking Neural Networks 时间:2022年02月16日 第一作者:Byunggook Na 链接.
邮件日期 2022年02月17日

351、无权重脉冲神经网络中基于时间延迟的记忆

  • Memory via Temporal Delays in weightless Spiking Neural Network 时间:2022年02月15日 第一作者:Hananel Hazan 链接.

摘要:神经科学界的一个普遍观点是,记忆编码在神经元之间的连接强度中。这种认识促使人工神经网络模型将连接权重作为调节学习的关键变量。在本文中,我们提出了一个无权重脉冲神经网络的原型,它可以执行一个简单的分类任务。该网络中的记忆存储在神经元之间的脉冲时序中,而非连接强度中,并使用赫布型脉冲时序依赖可塑性(STDP)进行训练,该规则调节的是连接的延迟。

英文摘要 A common view in the neuroscience community is that memory is encoded in the connection strength between neurons. This perception led artificial neural network models to focus on connection weights as the key variables to modulate learning. In this paper, we present a prototype for weightless spiking neural networks that can perform a simple classification task. The memory in this network is stored in the timing between neurons, rather than the strength of the connection, and is trained using a Hebbian Spike Timing Dependent Plasticity (STDP), which modulates the delays of the connection.
邮件日期 2022年02月16日
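“把记忆存进延迟而不是权重”的思路,可以用一个极简的延迟可塑性更新来示意:调整轴突延迟,使脉冲到达时间逼近突触后神经元的发放时刻(示例规则,并非论文的原始公式,学习率为假设值):

```python
def update_delay(delay, t_pre, t_post, lr=0.2):
    # 脉冲到达时间 = t_pre + delay;
    # 按到达误差 (t_post - 到达时间) 的方向调整延迟,并保持非负
    arrival = t_pre + delay
    return max(0.0, delay + lr * (t_post - arrival))
```

反复应用该更新,延迟会收敛到使脉冲恰好在目标发放时刻到达的值,时序本身因此成为可训练的记忆载体。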

350、量化脉冲神经网络中的局部极小值导航

  • Navigating Local Minima in Quantized Spiking Neural Networks 时间:2022年02月15日 第一作者:Jason K. Eshraghian 链接.

摘要:脉冲神经网络和量化神经网络(NN)对于深度学习(DL)算法的高效实现正变得极其重要。然而,由于应用硬阈值时没有梯度信号,这些网络在使用误差反向传播训练时面临挑战。克服这一问题的公认技巧是使用有偏梯度估计器:在脉冲神经网络(SNN)中用替代梯度近似阈值函数,在量化神经网络(QNN)中则用直通估计器(STE)完全绕过阈值。虽然带噪声的梯度反馈在简单的有监督学习任务上能取得合理的性能,但一般认为这种噪声会增加在损失地形中寻找最优解的难度,尤其是在优化的后期阶段。通过在训练期间周期性地提高学习率(LR),我们期望网络能够探索原本因局部极小值、势垒或平坦区域而难以到达的解空间。本文系统评估了余弦退火LR调度与权重无关的自适应矩估计相结合在量化SNN(QSNN)上的效果。我们在三个数据集上对全精度和4位量化SNN进行了严格的实证评估,在较复杂的数据集上展示了(接近)最先进的性能。我们的源代码可通过以下链接获得:https://github.com/jeshraghian/QSNNs.

英文摘要 Spiking and Quantized Neural Networks (NNs) are becoming exceedingly important for hyper-efficient implementations of Deep Learning (DL) algorithms. However, these networks face challenges when trained using error backpropagation, due to the absence of gradient signals when applying hard thresholds. The broadly accepted trick to overcoming this is through the use of biased gradient estimators: surrogate gradients which approximate thresholding in Spiking Neural Networks (SNNs), and Straight-Through Estimators (STEs), which completely bypass thresholding in Quantized Neural Networks (QNNs). While noisy gradient feedback has enabled reasonable performance on simple supervised learning tasks, it is thought that such noise increases the difficulty of finding optima in loss landscapes, especially during the later stages of optimization. By periodically boosting the Learning Rate (LR) during training, we expect the network can navigate unexplored solution spaces that would otherwise be difficult to reach due to local minima, barriers, or flat surfaces. This paper presents a systematic evaluation of a cosine-annealed LR schedule coupled with weight-independent adaptive moment estimation as applied to Quantized SNNs (QSNNs). We provide a rigorous empirical evaluation of this technique on high precision and 4-bit quantized SNNs across three datasets, demonstrating (close to) state-of-the-art performance on the more complex datasets. Our source code is available at this link: https://github.com/jeshraghian/QSNNs.
邮件日期 2022年02月16日
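文中“周期性提高学习率”的带重启余弦退火调度可以写成如下闭式(示意实现,周期与学习率上下限均为示例值):

```python
import math

def cosine_restart_lr(step, period, lr_max=1e-2, lr_min=1e-4):
    # 每个period内学习率从lr_max余弦退火到lr_min,
    # 随后跳回lr_max(warm restart),帮助跳出局部极小
    t = step % period
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / period))
```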

349、动力系统的递归神经网络:常微分方程、集体运动和水文建模的应用

  • Recurrent Neural Networks for Dynamical Systems: Applications to Ordinary Differential Equations, Collective Motion, and Hydrological Modeling 时间:2022年02月14日 第一作者:Yonggi Park 链接.

摘要:求解时空动力系统的经典方法包括统计方法,如自回归积分滑动平均法,该类方法假定系统先前输出之间存在线性且平稳的关系。线性方法的开发和实现相对简单,但通常无法捕获数据中的非线性关系。因此,人工神经网络(ANN)在动力系统分析与预测方面受到了研究人员的关注。递归神经网络(RNN)源于前馈神经网络,利用内部记忆处理可变长度的输入序列,这使得RNN可用于求解时空动力系统中的各类问题。因此,在本文中,我们利用RNN处理与动力系统相关的若干具体问题。具体而言,我们分析了RNN在三项任务上的性能:为存在公式误差的系统重建正确的Lorenz解、重建受损的集体运动轨迹,以及预测带有尖峰的径流时间序列,分别代表常微分方程、集体运动和水文建模三个领域。我们针对每项任务分别训练和测试RNN,以证明RNN在重建和预测动力系统动力学方面的广泛适用性。

英文摘要 Classical methods of solving spatiotemporal dynamical systems include statistical approaches such as autoregressive integrated moving average, which assume linear and stationary relationships between systems' previous outputs. Development and implementation of linear methods are relatively simple, but they often do not capture non-linear relationships in the data. Thus, artificial neural networks (ANNs) are receiving attention from researchers in analyzing and forecasting dynamical systems. Recurrent neural networks (RNN), derived from feed-forward ANNs, use internal memory to process variable-length sequences of inputs. This allows RNNs to applicable for finding solutions for a vast variety of problems in spatiotemporal dynamical systems. Thus, in this paper, we utilize RNNs to treat some specific issues associated with dynamical systems. Specifically, we analyze the performance of RNNs applied to three tasks: reconstruction of correct Lorenz solutions for a system with a formulation error, reconstruction of corrupted collective motion trajectories, and forecasting of streamflow time series possessing spikes, representing three fields, namely, ordinary differential equations, collective motion, and hydrological modeling, respectively. We train and test RNNs uniquely in each task to demonstrate the broad applicability of RNNs in reconstruction and forecasting the dynamics of dynamical systems.
注释 15 pages, 9 figures, submitted into "Chaos: An Interdisciplinary Journal of Nonlinear Science" MSC-class: 37M99 ACM-class: I.2.1
邮件日期 2022年02月16日

348、带系统级局部自动增益控制的脉冲耳蜗

  • Spiking Cochlea with System-level Local Automatic Gain Control 时间:2022年02月14日 第一作者:Ilya Kiselev 链接.

摘要:由于晶体管失配和模型的复杂性,在硅耳蜗设计中加入局部自动增益控制(AGC)电路一直具有挑战性。为了解决这个问题,我们提出了一种替代的系统级算法,通过测量单个通道的输出脉冲活动,在硅脉冲耳蜗中实现特定于通道的AGC。通道的带通滤波器增益动态地适应输入振幅,使平均输出脉冲率保持在定义的范围内。由于这种AGC机制只需要计数和加法运算,因此在未来的设计中可以以较低的硬件成本实现。我们评估了局部AGC算法在输入信号于32 dB输入范围内变化的分类任务上的影响:在语音与噪声分类任务中测试了两种接收耳蜗脉冲特征的分类器。启用AGC时,逻辑回归分类器的准确度平均绝对提高6%,相对提高40.8%;深度神经网络分类器在AGC情形下表现出类似的改进,并达到96%的更高平均准确度,高于逻辑回归分类器91%的最佳准确度。

英文摘要 Including local automatic gain control (AGC) circuitry into a silicon cochlea design has been challenging because of transistor mismatch and model complexity. To address this, we present an alternative system-level algorithm that implements channel-specific AGC in a silicon spiking cochlea by measuring the output spike activity of individual channels. The bandpass filter gain of a channel is adapted dynamically to the input amplitude so that the average output spike rate stays within a defined range. Because this AGC mechanism only needs counting and adding operations, it can be implemented at low hardware cost in a future design. We evaluate the impact of the local AGC algorithm on a classification task where the input signal varies over 32 dB input range. Two classifier types receiving cochlea spike features were tested on a speech versus noise classification task. The logistic regression classifier achieves an average of 6% improvement and 40.8% relative improvement in accuracy when the AGC is enabled. The deep neural network classifier shows a similar improvement for the AGC case and achieves a higher mean accuracy of 96% compared to the best accuracy of 91% from the logistic regression classifier.
注释 Accepted for publication at the IEEE Transactions on Circuits and Systems I - Regular Papers, 2022 DOI: 10.1109/TCSI.2022.3150165
邮件日期 2022年02月15日
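摘要强调该AGC机制只需计数和加法:统计每个通道在一个时间窗内的脉冲数,若平均脉冲率超出目标区间,就调整该通道带通滤波器的增益。下面是一个示意(目标区间、步长均为假设值,非论文电路参数):

```python
def agc_update(gain, spike_count, window, rate_lo=20.0, rate_hi=100.0, step=0.1):
    # 按通道统计脉冲率(脉冲数/窗口时长),把增益朝目标区间调整
    rate = spike_count / window
    if rate > rate_hi:
        gain = max(gain - step, 0.0)  # 输出过密:降低增益
    elif rate < rate_lo:
        gain = gain + step            # 输出过疏:提高增益
    return gain
```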

347、Motif拓扑和奖励学习改进的脉冲神经网络用于高效的多感官整合

  • Motif-topology and Reward-learning improved Spiking Neural Network for Efficient Multi-sensory Integration 时间:2022年02月11日 第一作者:Shuncheng Jia 链接.

摘要:在人工神经网络(ANN)和脉冲神经网络(SNN)中,网络结构和学习原理是形成复杂功能的关键。SNN被认为是新一代人工神经网络,它比ANN融合了更多的生物学特性,包括动态脉冲神经元、功能特化的体系结构和高效的学习范式。在本文中,我们提出了一种由基序(Motif)拓扑和奖励学习改进的SNN(MR-SNN),以实现高效的多感官整合。MR-SNN包含13种3节点基序拓扑,这些基序拓扑首先从独立的单感官学习范式中提取,然后整合用于多感官分类。实验结果表明,与其他不使用基序的传统SNN相比,所提出的MR-SNN具有更高的准确率和更强的鲁棒性。此外,所提出的奖励学习范式具有生物学合理性,能够更好地解释由不一致的视觉和听觉感觉信号引起的认知上的麦格克(McGurk)效应。

英文摘要 Network architectures and learning principles are key in forming complex functions in artificial neural networks (ANNs) and spiking neural networks (SNNs). SNNs are considered the new-generation artificial networks by incorporating more biological features than ANNs, including dynamic spiking neurons, functionally specified architectures, and efficient learning paradigms. In this paper, we propose a Motif-topology and Reward-learning improved SNN (MR-SNN) for efficient multi-sensory integration. MR-SNN contains 13 types of 3-node Motif topologies which are first extracted from independent single-sensory learning paradigms and then integrated for multi-sensory classification. The experimental results showed higher accuracy and stronger robustness of the proposed MR-SNN than other conventional SNNs without using Motifs. Furthermore, the proposed reward learning paradigm was biologically plausible and can better explain the cognitive McGurk effect caused by incongruent visual and auditory sensory signals.
邮件日期 2022年02月15日

346、基于模拟RRAM的脉冲神经网络中补偿异质性的硬件校准学习

  • Hardware calibrated learning to compensate heterogeneity in analog RRAM-based Spiking Neural Networks 时间:2022年02月10日 第一作者:Filippo Moro 链接.

摘要:脉冲神经网络(SNN)可以释放基于模拟电阻随机存取存储器(RRAM)的电路的全部功率,用于低功率信号处理。它们固有的计算稀疏性自然会带来能效效益。实现健壮SNN的主要挑战是模拟CMOS电路和RRAM技术的内在可变性(异质性)。在这项工作中,我们评估了使用130纳米技术节点设计和制造的基于RRAM的神经形态电路的性能和可变性。基于这些结果,我们提出了一种神经形态硬件校准(NHC)SNN,其中学习电路根据测量数据进行校准。我们表明,通过考虑片外学习阶段测量的异质性特征,NHC SNN可以自我校正其硬件非理想性,并学习以高精度解决基准任务。这项工作展示了如何应对神经元和突触的异质性,以提高时间任务中的分类准确性。

英文摘要 Spiking Neural Networks (SNNs) can unleash the full power of analog Resistive Random Access Memories (RRAMs) based circuits for low power signal processing. Their inherent computational sparsity naturally results in energy efficiency benefits. The main challenge implementing robust SNNs is the intrinsic variability (heterogeneity) of both analog CMOS circuits and RRAM technology. In this work, we assessed the performance and variability of RRAM-based neuromorphic circuits that were designed and fabricated using a 130\,nm technology node. Based on these results, we propose a Neuromorphic Hardware Calibrated (NHC) SNN, where the learning circuits are calibrated on the measured data. We show that by taking into account the measured heterogeneity characteristics in the off-chip learning phase, the NHC SNN self-corrects its hardware non-idealities and learns to solve benchmark tasks with high accuracy. This work demonstrates how to cope with the heterogeneity of neurons and synapses for increasing classification accuracy in temporal tasks.
注释 Preprint for ISCAS2022
邮件日期 2022年02月11日

345、通过加权神经元分配实现视觉位置识别的脉冲神经网络

  • Spiking Neural Networks for Visual Place Recognition via Weighted Neuronal Assignments 时间:2022年02月10日 第一作者:Somayeh Hussaini 链接.
注释 8 pages, 6 figures, IEEE Robotics and Automation Letters (RA-L), also accepted to IEEE International Conference on Robotics and Automation (ICRA 2022) Journal-ref: IEEE Robotics and Automation Letters 2022 DOI: 10.1109/LRA.2022.3149030
邮件日期 2022年02月11日

344、T-NGA:学习处理脉冲音频传感器事件的时态网络嫁接算法

  • T-NGA: Temporal Network Grafting Algorithm for Learning to Process Spiking Audio Sensor Events 时间:2022年02月07日 第一作者:Shu Wang 链接.

摘要:脉冲硅耳蜗传感器将声音编码为来自不同频率通道的异步脉冲流。由于脉冲耳蜗缺少带标签的训练数据集,很难基于这些传感器的输出训练深度神经网络。本文提出了一种称为时间网络嫁接算法(T-NGA)的自监督方法,它将一个在谱图特征上预训练的递归网络加以嫁接,使其能够处理耳蜗事件特征。T-NGA训练只需要在时间上对齐的音频谱图和事件特征。我们的实验表明,在使用软件脉冲耳蜗模型产生的事件进行语音识别任务时,嫁接网络的准确度与从零开始训练的有监督网络相近。尽管存在脉冲硅耳蜗电路的非理想性,在N-TIDIGITS18数据集上,嫁接网络在硅耳蜗脉冲记录上的准确度仅比有监督网络低5%左右。T-NGA可以在没有大型带标签脉冲数据集的情况下,训练网络处理脉冲音频传感器事件。

英文摘要 Spiking silicon cochlea sensors encode sound as an asynchronous stream of spikes from different frequency channels. The lack of labeled training datasets for spiking cochleas makes it difficult to train deep neural networks on the outputs of these sensors. This work proposes a self-supervised method called Temporal Network Grafting Algorithm (T-NGA), which grafts a recurrent network pretrained on spectrogram features so that the network works with the cochlea event features. T-NGA training requires only temporally aligned audio spectrograms and event features. Our experiments show that the accuracy of the grafted network was similar to the accuracy of a supervised network trained from scratch on a speech recognition task using events from a software spiking cochlea model. Despite the circuit non-idealities of the spiking silicon cochlea, the grafted network accuracy on the silicon cochlea spike recordings was only about 5% lower than the supervised network accuracy using the N-TIDIGITS18 dataset. T-NGA can train networks to process spiking audio sensor events in the absence of large labeled spike datasets.
注释 5 pages, 4 figures; accepted at IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 2022
邮件日期 2022年02月08日

343、基于时域神经元的高能效高精度脉冲神经网络推理

  • Energy-Efficient High-Accuracy Spiking Neural Network Inference Using Time-Domain Neurons 时间:2022年02月04日 第一作者:Joonghyun Song 链接.

摘要:由于在流行的冯·诺依曼体系结构上实现人工神经网络的局限性,最近的研究提出了基于脉冲神经网络(SNN)的神经形态系统,以降低功耗和计算成本。然而,传统的基于电流镜或运算放大器的模拟电压域积分-发放(I&F)神经元电路存在非线性或高功耗等严重问题,从而降低SNN的推理精度或能量效率。为了同时实现高能量效率和高精度,本文提出了一种低功耗、高线性度的时域I&F神经元电路。该神经元在28nm CMOS工艺中设计和仿真,与传统的基于电流镜的神经元相比,在MNIST推理上的错误率降低了4.3倍以上。此外,所提出的神经元电路的仿真功耗为每个神经元0.230uW,比现有的电压域神经元低若干数量级。

英文摘要 Due to the limitations of realizing artificial neural networks on prevalent von Neumann architectures, recent studies have presented neuromorphic systems based on spiking neural networks (SNNs) to reduce power and computational cost. However, conventional analog voltage-domain integrate-and-fire (I&F) neuron circuits, based on either current mirrors or op-amps, pose serious issues such as nonlinearity or high power consumption, thereby degrading either inference accuracy or energy efficiency of the SNN. To achieve excellent energy efficiency and high accuracy simultaneously, this paper presents a low-power highly linear time-domain I&F neuron circuit. Designed and simulated in a 28nm CMOS process, the proposed neuron leads to more than 4.3x lower error rate on the MNIST inference over the conventional current-mirror-based neurons. In addition, the power consumed by the proposed neuron circuit is simulated to be 0.230uW per neuron, which is orders of magnitude lower than the existing voltage-domain neurons.
邮件日期 2022年02月07日

342、低延迟脉冲神经网络的优化电位初始化

  • Optimized Potential Initialization for Low-latency Spiking Neural Networks 时间:2022年02月03日 第一作者:Tong Bu 链接.

摘要:脉冲神经网络(SNN)因其低功耗、生物合理性和对抗鲁棒性等独特特性而受到高度重视。训练深层SNN最有效的方法是ANN到SNN的转换,它在深层网络结构和大规模数据集上取得了最好的性能。然而,这其中存在准确率与延迟之间的权衡:为了获得与原始ANN相当的高精度,需要较长的仿真时间来使脉冲神经元的发放率匹配模拟神经元的激活值,这阻碍了SNN的实际应用。在本文中,我们的目标是以极低的延迟(少于32个时间步)获得高性能的转换SNN。我们首先从理论上分析了ANN到SNN的转换,并表明缩放阈值确实起到了与权重归一化类似的作用。我们没有引入以模型容量为代价来便利ANN到SNN转换的约束,而是采用更直接的方法:通过优化初始膜电位来减少每层的转换损失。此外,我们证明了最优的膜电位初始化可以实现期望意义下无误差的ANN到SNN转换。我们在CIFAR-10、CIFAR-100和ImageNet数据集上评估了我们的算法,用更少的时间步实现了最先进的精度。例如,我们在CIFAR-10上以16个时间步达到了93.38%的top-1精度。此外,我们的方法还可以应用于其他ANN-SNN转换方法,并在时间步较少时显著提升性能。

英文摘要 Spiking Neural Networks (SNNs) have been attached great importance due to the distinctive properties of low power consumption, biological plausibility, and adversarial robustness. The most effective way to train deep SNNs is through ANN-to-SNN conversion, which have yielded the best performance in deep network structure and large-scale datasets. However, there is a trade-off between accuracy and latency. In order to achieve high precision as original ANNs, a long simulation time is needed to match the firing rate of a spiking neuron with the activation value of an analog neuron, which impedes the practical application of SNN. In this paper, we aim to achieve high-performance converted SNNs with extremely low latency (fewer than 32 time-steps). We start by theoretically analyzing ANN-to-SNN conversion and show that scaling the thresholds does play a similar role as weight normalization. Instead of introducing constraints that facilitate ANN-to-SNN conversion at the cost of model capacity, we applied a more direct way by optimizing the initial membrane potential to reduce the conversion loss in each layer. Besides, we demonstrate that optimal initialization of membrane potentials can implement expected error-free ANN-to-SNN conversion. We evaluate our algorithm on the CIFAR-10, CIFAR-100 and ImageNet datasets and achieve state-of-the-art accuracy, using fewer time-steps. For example, we reach top-1 accuracy of 93.38\% on CIFAR-10 with 16 time-steps. Moreover, our method can be applied to other ANN-SNN conversion methodologies and remarkably promote performance when the time-steps is small.
注释 Accepted by AAAI 2022
邮件日期 2022年02月04日
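“优化初始膜电位”的直观效果可以在软复位IF神经元上示意:若膜电位从0出发,T步内的发放数是 T·x/θ 的向下取整;若初始化为阈值的一半,发放数变为四舍五入,量化误差随之减半(简化示例,非论文的逐层优化流程):

```python
def if_spike_count(x, T, theta=1.0, v0=0.0):
    # 软复位IF神经元:恒定输入x、阈值theta、初始膜电位v0,
    # 返回T步内的发放总数
    v, n = v0, 0
    for _ in range(T):
        v += x
        if v >= theta:
            v -= theta
            n += 1
    return n
```

例如 x=0.26、T=10:v0=0 时发放 floor(2.6)=2 次,而 v0=θ/2 时发放 round(2.6)=3 次,更接近目标激活值 2.6。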

341、速率编码还是直接编码:哪一种更适合于精确、健壮和节能的脉冲神经网络?

  • Rate Coding or Direct Coding: Which One is Better for Accurate, Robust, and Energy-efficient Spiking Neural Networks? 时间:2022年01月31日 第一作者:Youngeun Kim 链接.

摘要:最近的脉冲神经网络(SNN)工作主要集中在图像分类任务上,因此人们提出了各种编码技术来将图像转换为时序二值脉冲。其中,速率编码和直接编码被认为是构建实用SNN系统的有力候选,因为它们在大规模数据集上展示了最先进的性能。尽管两者都被广泛使用,但很少有工作以公平的方式比较这两种编码方案。在本文中,我们从三个角度对这两种编码进行了综合分析:准确率、对抗鲁棒性和能效。首先,我们在不同体系结构和数据集上比较两种编码技术的性能;然后,在两种对抗攻击方法下测量编码技术的鲁棒性;最后,在数字硬件平台上比较两种编码方案的能效。我们的结果表明,直接编码可以获得更好的准确率,尤其是在时间步较少时;相比之下,得益于不可微的脉冲生成过程,速率编码对对抗攻击表现出更好的鲁棒性。速率编码的能效也高于直接编码,因为后者要求第一层具有多比特精度。我们的研究探索了两种编码的特性,这是构建SNN时的重要设计考量。代码可在以下网址获得:https://github.com/Intelligent-Computing-Lab-Yale/Rate-vs-Direct.

英文摘要 Recent Spiking Neural Networks (SNNs) works focus on an image classification task, therefore various coding techniques have been proposed to convert an image into temporal binary spikes. Among them, rate coding and direct coding are regarded as prospective candidates for building a practical SNN system as they show state-of-the-art performance on large-scale datasets. Despite their usage, there is little attention to comparing these two coding schemes in a fair manner. In this paper, we conduct a comprehensive analysis of the two codings from three perspectives: accuracy, adversarial robustness, and energy-efficiency. First, we compare the performance of two coding techniques with various architectures and datasets. Then, we measure the robustness of the coding techniques on two adversarial attack methods. Finally, we compare the energy-efficiency of two coding schemes on a digital hardware platform. Our results show that direct coding can achieve better accuracy especially for a small number of timesteps. In contrast, rate coding shows better robustness to adversarial attacks owing to the non-differentiable spike generation process. Rate coding also yields higher energy-efficiency than direct coding which requires multi-bit precision for the first layer. Our study explores the characteristics of two codings, which is an important design consideration for building SNNs. The code is made available at https://github.com/Intelligent-Computing-Lab-Yale/Rate-vs-Direct.
注释 Accepted to ICASSP2022
邮件日期 2022年02月08日
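两种编码方案的差别可以用几行代码示意(简化示例,接口为假设):速率编码把归一化像素值当作逐时间步的发放概率,生成二值脉冲序列;直接编码则把模拟值重复T个时间步直接送入第一层,由第一层产生脉冲,这也正是它要求第一层具有多比特精度的原因:

```python
import random

def rate_encode(x, T, seed=0):
    # 速率编码:x ∈ [0,1] 作为每个时间步的发放概率,
    # 输出伯努利二值脉冲序列(脉冲生成不可微)
    rng = random.Random(seed)
    return [1 if rng.random() < x else 0 for _ in range(T)]

def direct_encode(x, T):
    # 直接编码:模拟值原样重复T次送入第一层
    return [x] * T
```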

340、快速精确递归神经网络的脉冲激励秩编码

  • Spike-inspired Rank Coding for Fast and Accurate Recurrent Neural Networks 时间:2022年01月31日 第一作者:Alan Jeffares 链接.
注释 Spotlight paper at ICLR 2022
邮件日期 2022年02月01日

339、AutoSNN:走向节能的脉冲神经网络

  • AutoSNN: Towards Energy-Efficient Spiking Neural Networks 时间:2022年01月30日 第一作者:Byunggook Na 链接.

摘要:模拟大脑中信息传输的脉冲神经网络(SNN)可以通过离散且稀疏的脉冲高能效地处理时空信息,因此受到了广泛关注。为了提高SNN的准确率和能效,以前的大多数研究都只关注训练方法,很少研究体系结构的影响。我们从准确率和脉冲数量两方面考察了先前研究中使用的设计选择,发现它们并不最适合SNN。为了进一步提高准确率并减少SNN产生的脉冲,我们提出了一种脉冲感知的神经结构搜索框架AutoSNN。我们定义了一个由不含不良设计选择的架构组成的搜索空间。为了实现脉冲感知的结构搜索,我们引入了同时考虑准确率和脉冲数量的适应度函数。AutoSNN成功搜索到在准确率和能效上都优于手工设计SNN的架构。我们在包括神经形态数据集在内的各种数据集上充分展示了AutoSNN的有效性。

英文摘要 Spiking neural networks (SNNs) that mimic information transmission in the brain can energy-efficiently process spatio-temporal information through discrete and sparse spikes, thereby receiving considerable attention. To improve accuracy and energy efficiency of SNNs, most previous studies have focused solely on training methods, and the effect of architecture has rarely been studied. We investigate the design choices used in the previous studies in terms of the accuracy and number of spikes and figure out that they are not best-suited for SNNs. To further improve the accuracy and reduce the spikes generated by SNNs, we propose a spike-aware neural architecture search framework called AutoSNN. We define a search space consisting of architectures without undesirable design choices. To enable the spike-aware architecture search, we introduce a fitness that considers both the accuracy and number of spikes. AutoSNN successfully searches for SNN architectures that outperform hand-crafted SNNs in accuracy and energy efficiency. We thoroughly demonstrate the effectiveness of AutoSNN on various datasets including neuromorphic datasets.
邮件日期 2022年02月01日

338、3D FlowNet:基于事件的3D表示光流估计

  • 3D-FlowNet: Event-based optical flow estimation with 3D representation 时间:2022年01月28日 第一作者:Haixin Sun 链接.

摘要:基于事件的相机可以突破基于帧的相机的局限,用于重要任务,例如低照度条件下自动驾驶汽车导航期间的高速运动检测。事件相机的高时间分辨率和高动态范围使它们能够在快速运动和极端光照的场景下工作。然而,传统的计算机视觉方法(如深度神经网络)并不适合处理事件数据,因为事件数据是异步且离散的。此外,传统的事件数据二维编码表示方法牺牲了时间分辨率。在本文中,我们首先把二维编码表示扩展到三维,以更好地保留事件的时间分布;然后提出3D-FlowNet,这是一种新的网络结构,可以处理这种三维输入表示并输出光流估计。我们采用自监督训练策略来弥补基于事件的相机缺乏标注数据集的不足。最后,在多车辆立体事件相机(MVSEC)数据集上训练和评估了所提出的网络。结果表明,我们的3D-FlowNet以更少的训练轮数(30个epoch,相比Spike-FlowNet的100个)优于最先进的方法。

英文摘要 Event-based cameras can overpass frame-based cameras limitations for important tasks such as high-speed motion detection during self-driving cars navigation in low illumination conditions. The event cameras' high temporal resolution and high dynamic range, allow them to work in fast motion and extreme light scenarios. However, conventional computer vision methods, such as Deep Neural Networks, are not well adapted to work with event data as they are asynchronous and discrete. Moreover, the traditional 2D-encoding representation methods for event data, sacrifice the time resolution. In this paper, we first improve the 2D-encoding representation by expanding it into three dimensions to better preserve the temporal distribution of the events. We then propose 3D-FlowNet, a novel network architecture that can process the 3D input representation and output optical flow estimations according to the new encoding methods. A self-supervised training strategy is adopted to compensate the lack of labeled datasets for the event-based camera. Finally, the proposed network is trained and evaluated with the Multi-Vehicle Stereo Event Camera (MVSEC) dataset. The results show that our 3D-FlowNet outperforms state-of-the-art approaches with less training epoch (30 compared to 100 of Spike-FlowNet).
邮件日期 2022年01月31日
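“把2D编码表示扩展到三维以保留事件的时间分布”,大致相当于把异步事件流按时间分箱,栅格化为一个 T_bins×H×W 的体素网格。下面是一个极简示意(非论文的原始编码方式,接口与极性处理均为假设):

```python
def voxelize(events, T_bins, H, W, t0, t1):
    # events: (t, x, y, p) 四元组列表,p为极性(+1/-1)
    # 按时间把事件分到T_bins个时间片,按极性累加计数,
    # 从而在第三个维度上保留时间分布
    grid = [[[0.0] * W for _ in range(H)] for _ in range(T_bins)]
    for t, x, y, p in events:
        b = min(int((t - t0) / (t1 - t0) * T_bins), T_bins - 1)
        grid[b][y][x] += 1.0 if p > 0 else -1.0
    return grid
```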

337、基于神经形态跌倒检测和动作识别数据集的常规视觉模型基准测试

  • Benchmarking Conventional Vision Models on Neuromorphic Fall Detection and Action Recognition Dataset 时间:2022年01月28日 第一作者:Karthik Sivarama Krishnan 链接.

摘要:近几年来,基于神经形态视觉的传感器越来越受欢迎,因为它们能够以低功耗感知捕捉时空事件。这些传感器比传统摄像机记录事件或峰值,有助于保护被记录对象的隐私。根据像素亮度变化捕获这些事件,并使用时间、位置和像素强度变化信息对输出数据流进行编码。本文提出并测试了神经形态人类行为识别和跌倒检测数据集上微调的常规视觉模型的性能。来自动态视觉传感摄像机的时空事件流被编码成标准序列图像帧。这些视频帧用于基准测试传统的基于深度学习的体系结构。在这种提出的方法中,我们为这种动态视觉传感(DVS)应用微调了最先进的视觉模型,并将这些模型命名为DVS-R2+1D、DVS-CSN、DVS-C2D、DVS SlowFast、DVS-X3D和DVS MViT。通过比较这些模型的性能,我们发现当前最先进的基于MViT的体系结构DVS MViT优于所有其他模型,精确度为0.958,F-1分数为0.958。第二好的是DVS-C2D,精确度为0.916,F-1分数为0.916。第三名和第四名是DVS-R2+1D和DVS慢速,准确度分别为0.875和0.833,F-1得分分别为0.875和0.861。DVS-CSN和DVS-X3D是表现最差的模型,准确度分别为0.708和0.625,F1得分分别为0.722和0.625。

英文摘要 Neuromorphic vision-based sensors are gaining popularity in recent years with their ability to capture Spatio-temporal events with low power sensing. These sensors record events or spikes over traditional cameras which helps in preserving the privacy of the subject being recorded. These events are captured as per-pixel brightness changes and the output data stream is encoded with time, location, and pixel intensity change information. This paper proposes and benchmarks the performance of fine-tuned conventional vision models on neuromorphic human action recognition and fall detection datasets. The Spatio-temporal event streams from the Dynamic Vision Sensing cameras are encoded into a standard sequence image frames. These video frames are used for benchmarking conventional deep learning-based architectures. In this proposed approach, we fine-tuned the state-of-the-art vision models for this Dynamic Vision Sensing (DVS) application and named these models as DVS-R2+1D, DVS-CSN, DVS-C2D, DVS-SlowFast, DVS-X3D, and DVS-MViT. Upon comparing the performance of these models, we see the current state-of-the-art MViT based architecture DVS-MViT outperforms all the other models with an accuracy of 0.958 and an F-1 score of 0.958. The second best is the DVS-C2D with an accuracy of 0.916 and an F-1 score of 0.916. Third and Fourth are DVS-R2+1D and DVS-SlowFast with an accuracy of 0.875 and 0.833 and F-1 score of 0.875 and 0.861 respectively. DVS-CSN and DVS-X3D were the least performing models with an accuracy of 0.708 and 0.625 and an F1 score of 0.722 and 0.625 respectively.
注释 6 Pages, 2 Figures
邮件日期 2022年01月31日

336、二值化脉冲神经网络中死亡神经元与稀疏性之间的细线

  • The fine line between dead neurons and sparsity in binarized spiking neural networks 时间:2022年01月28日 第一作者:Jason K. Eshraghian 链接.

摘要:脉冲神经网络可以通过在时域中编码信息,或通过在更高精度的隐藏状态中处理离散化量,来补偿量化误差。理论上,宽动态范围的状态空间可以把多个二值化输入累加在一起,从而提高单个神经元的表征能力。这可以通过提高发放阈值来实现,但阈值过高时,稀疏的脉冲活动就会退化为完全不发放。在本文中,我们建议使用“阈值退火”作为发放阈值的预热方法。我们表明,它能让脉冲在多个层之间传播(否则这些层中的神经元会停止发放);尽管使用了二值化权重,这一方法仍在四个不同的数据集上取得了极具竞争力的结果。源代码可在 https://github.com/jeshraghian/snn-tha/ 获得。

英文摘要 Spiking neural networks can compensate for quantization error by encoding information either in the temporal domain, or by processing discretized quantities in hidden states of higher precision. In theory, a wide dynamic range state-space enables multiple binarized inputs to be accumulated together, thus improving the representational capacity of individual neurons. This may be achieved by increasing the firing threshold, but make it too high and sparse spike activity turns into no spike emission. In this paper, we propose the use of `threshold annealing' as a warm-up method for firing thresholds. We show it enables the propagation of spikes across multiple layers where neurons would otherwise cease to fire, and in doing so, achieve highly competitive results on four diverse datasets, despite using binarized weights. Source code is available at https://github.com/jeshraghian/snn-tha/
邮件日期 2022年01月31日
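“阈值退火”的预热思想可以用一个简单的调度函数示意:训练初期用较低的发放阈值保证各层都有脉冲流动,再逐步升到目标阈值以获得稀疏活动。论文中的具体调度形式可能不同,此处用线性预热作为假设示例:

```python
def annealed_threshold(step, theta_target=1.0, theta_start=0.1, warmup=1000):
    # 发放阈值从theta_start线性升温到theta_target,
    # warmup步之后保持目标阈值不变
    if step >= warmup:
        return theta_target
    frac = step / warmup
    return theta_start + (theta_target - theta_start) * frac
```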

335、具有替代梯度下降的元学习脉冲神经网络

  • Meta-learning Spiking Neural Networks with Surrogate Gradient Descent 时间:2022年01月26日 第一作者:Kenneth Stewart 链接.

摘要:在边缘侧和在线任务执行期间进行适应性的"终身"学习是人工智能研究的理想目标。在这方面,实现脉冲神经网络(SNN)的神经形态硬件尤其有吸引力,因为其实时、基于事件的局部计算范式使其适合边缘部署和快速学习。然而,当前最先进SNN训练所特有的漫长迭代学习过程,与神经形态硬件的物理特性和实时运行并不兼容。为克服这些局限,元学习等双层学习方法在深度学习中的应用日益增多。在这项工作中,我们使用替代梯度方法(在梯度估计中近似脉冲阈值函数)在SNN中演示了基于梯度的元学习。由于替代梯度可以做到二阶可微,因此可以使用模型无关元学习(MAML)等成熟而有效的二阶梯度元学习方法。我们表明,在基于事件的元数据集上,使用MAML元训练的SNN达到或超过了同样使用MAML元训练的传统ANN的性能。此外,我们还展示了元学习带来的独特优势:无需高精度权重或梯度即可快速学习。我们的结果强调了元学习技术如何成为在现实问题上部署神经形态学习技术的利器。

英文摘要 Adaptive "life-long" learning at the edge and during online task performance is an aspirational goal of AI research. Neuromorphic hardware implementing Spiking Neural Networks (SNNs) are particularly attractive in this regard, as their real-time, event-based, local computing paradigm makes them suitable for edge implementations and fast learning. However, the long and iterative learning that characterizes state-of-the-art SNN training is incompatible with the physical nature and real-time operation of neuromorphic hardware. Bi-level learning, such as meta-learning is increasingly used in deep learning to overcome these limitations. In this work, we demonstrate gradient-based meta-learning in SNNs using the surrogate gradient method that approximates the spiking threshold function for gradient estimations. Because surrogate gradients can be made twice differentiable, well-established, and effective second-order gradient meta-learning methods such as Model Agnostic Meta Learning (MAML) can be used. We show that SNNs meta-trained using MAML match or exceed the performance of conventional ANNs meta-trained with MAML on event-based meta-datasets. Furthermore, we demonstrate the specific advantages that accrue from meta-learning: fast learning without the requirement of high precision weights or gradients. Our results emphasize how meta-learning techniques can become instrumental for deploying neuromorphic learning technologies on real-world problems.
注释 Submitted to IOP Neuromorphic Computing and Engineering for peer review
邮件日期 2022年01月27日
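替代梯度的核心想法可以用一个极简示意说明(fast-sigmoid形式是常见选择之一,斜率参数为虚构示例值;论文强调替代函数可做到二阶可微,从而兼容MAML等二阶方法):

```python
def heaviside(x):
    """前向:不可微的阶跃放电函数。"""
    return 1.0 if x >= 0.0 else 0.0

def fast_sigmoid_grad(x, slope=25.0):
    """反向:用fast-sigmoid的导数 1/(1+slope*|x|)^2 替代阶跃函数的梯度。
    该函数处处有定义且本身仍可再求导(即二阶可微),
    因此可用于需要二阶梯度的元学习方法。"""
    return 1.0 / (1.0 + slope * abs(x)) ** 2
```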

334、BrainScaleS-2加速的混合可塑性神经形态系统

  • The BrainScaleS-2 accelerated neuromorphic system with hybrid plasticity 时间:2022年01月26日 第一作者:Christian Pehle 链接.

摘要:自从电子元件开始进行信息处理以来,神经系统就一直是计算原语组织的隐喻。当今的脑启发计算包括一类方法,从使用新型纳米设备进行计算到研究大规模神经形态架构,如TrueNorth、SpiNNaker、BrainScaleS、Tianjic和Loihi。虽然实现细节有所不同,但脉冲神经网络(有时被称为第三代神经网络)是用于对此类系统的计算建模的常见抽象。在这里,我们描述了BrainScaleS神经形态架构的第二代,强调了该架构支持的应用。它结合了一个定制的模拟加速器核心,支持仿生脉冲神经网络原语的加速物理仿真,以及一个紧密耦合的数字处理器和一个数字事件路由网络。

英文摘要 Since the beginning of information processing by electronic components, the nervous system has served as a metaphor for the organization of computational primitives. Brain-inspired computing today encompasses a class of approaches ranging from using novel nano-devices for computation to research into large-scale neuromorphic architectures, such as TrueNorth, SpiNNaker, BrainScaleS, Tianjic, and Loihi. While implementation details differ, spiking neural networks -- sometimes referred to as the third generation of neural networks -- are the common abstraction used to model computation with such systems. Here we describe the second generation of the BrainScaleS neuromorphic architecture, emphasizing applications enabled by this architecture. It combines a custom analog accelerator core supporting the accelerated physical emulation of bio-inspired spiking neural network primitives with a tightly coupled digital processor and a digital event-routing network.
注释 22 pages, 10 figures
邮件日期 2022年01月27日

333、S$^2$NN:用于训练节能单步神经网络的脉冲替代梯度的时间步长缩减

  • S$^2$NN: Time Step Reduction of Spiking Surrogate Gradients for Training Energy Efficient Single-Step Neural Networks 时间:2022年01月26日 第一作者:Kazuma Suetake 链接.

摘要:随着神经网络规模的增加,需要能够使其以较低计算成本和更高能效运行的技术。出于这些需求,人们提出了各种高效的神经网络范式,如脉冲神经网络(SNN)和二值神经网络(BNN)。然而,它们存在一些棘手的缺点,如推理精度下降和延迟增加。为了解决这些问题,我们提出了单步神经网络(S$^2$NN),一种计算成本低、精度高的节能神经网络。S$^2$NN像SNN一样以脉冲在隐藏层之间传递信息;但它没有时间维度,因此像BNN一样在训练和推理阶段都没有延迟。因此,与需要时序处理的SNN相比,S$^2$NN的计算成本更低。然而,由于脉冲的不可微性,S$^2$NN无法直接采用朴素的反向传播算法。我们通过将多时间步SNN的替代梯度约减到单个时间步,推导出了一个合适的神经元模型。实验表明,与现有的SNN和BNN神经元模型相比,所得神经元模型使S$^2$NN的训练更准确、更节能。我们还表明,S$^2$NN可以在保持高能效的同时达到与全精度网络相当的精度。

英文摘要 As the scales of neural networks increase, techniques that enable them to run with low computational cost and energy efficiency are required. From such demands, various efficient neural network paradigms, such as spiking neural networks (SNNs) or binary neural networks (BNNs), have been proposed. However, they have sticky drawbacks, such as degraded inference accuracy and latency. To solve these problems, we propose a single-step neural network (S$^2$NN), an energy-efficient neural network with low computational cost and high precision. The proposed S$^2$NN processes the information between hidden layers by spikes as SNNs. Nevertheless, it has no temporal dimension so that there is no latency within training and inference phases as BNNs. Thus, the proposed S$^2$NN has a lower computational cost than SNNs that require time-series processing. However, S$^2$NN cannot adopt na\"{i}ve backpropagation algorithms due to the non-differentiability nature of spikes. We deduce a suitable neuron model by reducing the surrogate gradient for multi-time step SNNs to a single-time step. We experimentally demonstrated that the obtained neuron model enables S$^2$NN to train more accurately and energy-efficiently than existing neuron models for SNNs and BNNs. We also showed that the proposed S$^2$NN could achieve comparable accuracy to full-precision networks while being highly energy-efficient.
注释 19 pages, 5 figures
邮件日期 2022年01月27日

332、基于事件的电位辅助脉冲神经网络视频重建

  • Event-based Video Reconstruction via Potential-assisted Spiking Neural Network 时间:2022年01月25日 第一作者:Lin Zhu 链接.

摘要:神经形态视觉传感器是一种新的仿生成像范式,它以高时间分辨率和高动态范围报告称为"事件"的异步、连续的逐像素亮度变化。迄今为止,基于事件的图像重建方法都基于人工神经网络(ANN)或手工设计的时空平滑技术。在本文中,我们首次通过全脉冲神经网络(SNN)结构实现图像重建。作为仿生神经网络,SNN以随时间分布的异步二值脉冲进行运算,有望在事件驱动硬件上带来更高的计算效率。我们提出了一种基于全脉冲神经网络的基于事件的视频重建框架(EVSNN),它利用了漏积分-激发(LIF)神经元和膜电位(MP)神经元。我们发现,脉冲神经元有潜力存储有用的时间信息(记忆),以完成此类时间相关任务。此外,为了更好地利用时间信息,我们提出了一种利用脉冲神经元膜电位的混合电位辅助框架(PA-EVSNN)。所提出的神经元称为自适应膜电位(AMP)神经元,它根据输入脉冲自适应地更新膜电位。实验结果表明,我们的模型在IJRR、MVSEC和HQF数据集上的性能与基于ANN的模型相当。在能耗方面,EVSNN和PA-EVSNN分别比其对应的ANN架构计算效率高19.36倍和7.75倍。

英文摘要 Neuromorphic vision sensor is a new bio-inspired imaging paradigm that reports asynchronous, continuously per-pixel brightness changes called `events' with high temporal resolution and high dynamic range. So far, the event-based image reconstruction methods are based on artificial neural networks (ANN) or hand-crafted spatiotemporal smoothing techniques. In this paper, we first implement the image reconstruction work via fully spiking neural network (SNN) architecture. As the bio-inspired neural networks, SNNs operating with asynchronous binary spikes distributed over time, can potentially lead to greater computational efficiency on event-driven hardware. We propose a novel Event-based Video reconstruction framework based on a fully Spiking Neural Network (EVSNN), which utilizes Leaky-Integrate-and-Fire (LIF) neuron and Membrane Potential (MP) neuron. We find that the spiking neurons have the potential to store useful temporal information (memory) to complete such time-dependent tasks. Furthermore, to better utilize the temporal information, we propose a hybrid potential-assisted framework (PA-EVSNN) using the membrane potential of spiking neuron. The proposed neuron is referred as Adaptive Membrane Potential (AMP) neuron, which adaptively updates the membrane potential according to the input spikes. The experimental results demonstrate that our models achieve comparable performance to ANN-based models on IJRR, MVSEC, and HQF datasets. The energy consumptions of EVSNN and PA-EVSNN are 19.36$\times$ and 7.75$\times$ more computationally efficient than their ANN architectures, respectively.
邮件日期 2022年01月27日
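"膜电位(MP)神经元"作为连续值读出的想法可以这样示意(一个不发放脉冲、只做泄漏积分的输出神经元;beta为虚构参数,并非论文实现):

```python
def mp_readout(input_spikes, beta=0.8):
    """膜电位读出示意:输出神经元不发放脉冲,
    只对输入脉冲序列做泄漏积分,最终膜电位即为连续输出
    (例如重建图像中的一个像素值)。"""
    v = 0.0
    for s_t in input_spikes:
        v = beta * v + s_t
    return v
```

时间维度上的脉冲信息由此被"存入"膜电位,从而支持视频重建这类时间相关任务。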

331、面向脉冲神经网络的神经体系结构搜索

  • Neural Architecture Search for Spiking Neural Networks 时间:2022年01月23日 第一作者:Youngeun Kim 链接.

摘要:脉冲神经网络(SNN)由于其固有的高稀疏性激活,作为传统人工神经网络(ANN)的一种潜在节能替代方案受到了广泛关注。然而,大多数先前的SNN方法使用类似ANN的体系结构(例如VGG-Net或ResNet),对于SNN中二值信息的时序处理而言,这可能只能提供次优性能。为了解决这个问题,本文介绍了一种新的神经结构搜索(NAS)方法来寻找更好的SNN结构。受最近从初始化时的激活模式中寻找最优体系结构的NAS方法的启发,我们选择无需训练即可在不同数据样本间表示多样脉冲激活模式的体系结构。此外,为了利用脉冲之间的时间相关性,我们在层与层之间同时搜索前馈连接和反向连接(即时间反馈连接)。有趣的是,我们的搜索算法找到的SNASNet在带有反向连接时实现了更高的性能,这表明了设计SNN体系结构以恰当利用时间信息的重要性。我们在三个图像识别基准上进行了大量实验,结果表明SNASNet以显著更少的时间步(5个时间步)实现了最先进的性能。

英文摘要 Spiking Neural Networks (SNNs) have gained huge attention as a potential energy-efficient alternative to conventional Artificial Neural Networks (ANNs) due to their inherent high-sparsity activation. However, most prior SNN methods use ANN-like architectures (e.g., VGG-Net or ResNet), which could provide sub-optimal performance for temporal sequence processing of binary information in SNNs. To address this, in this paper, we introduce a novel Neural Architecture Search (NAS) approach for finding better SNN architectures. Inspired by recent NAS approaches that find the optimal architecture from activation patterns at initialization, we select the architecture that can represent diverse spike activation patterns across different data samples without training. Furthermore, to leverage the temporal correlation among the spikes, we search for feed forward connections as well as backward connections (i.e., temporal feedback connections) between layers. Interestingly, SNASNet found by our search algorithm achieves higher performance with backward connections, demonstrating the importance of designing SNN architecture for suitably using temporal information. We conduct extensive experiments on three image recognition benchmarks where we show that SNASNet achieves state-of-the-art performance with significantly lower timesteps (5 timesteps).
邮件日期 2022年01月26日

330、用普通器件实现快1000倍的相机和机器视觉

  • 1000x Faster Camera and Machine Vision with Ordinary Devices 时间:2022年01月23日 第一作者:Tiejun Huang 链接.

摘要:在数码相机中,我们发现了一个主要局限:继承自胶片相机的图像和视频形式,阻碍了它捕捉快速变化的光子世界。在此,我们提出vidar,一种位序列阵列,其中每个位表示光子的累积是否已达到阈值,用以记录和重建任意时刻的场景辐射。仅使用消费级CMOS传感器和集成电路,我们开发出一款比传统相机快1000倍的vidar相机。通过将vidar视为生物视觉中的脉冲序列,我们进一步开发了一个基于脉冲神经网络的机器视觉系统,它将机器的速度与生物视觉的机制相结合,实现了比人类视觉快1000倍的高速目标检测与跟踪。我们展示了vidar相机和超级视觉系统在辅助裁判和目标指向系统中的应用。我们的研究有望从根本上革新图像和视频的概念及相关产业(包括摄影、电影和视觉媒体),并开启一个由脉冲神经网络驱动、不受速度限制的机器视觉新时代。

英文摘要 In digital cameras, we find a major limitation: the image and video form inherited from a film camera obstructs it from capturing the rapidly changing photonic world. Here, we present vidar, a bit sequence array where each bit represents whether the accumulation of photons has reached a threshold, to record and reconstruct the scene radiance at any moment. By employing only consumer-level CMOS sensors and integrated circuits, we have developed a vidar camera that is 1,000x faster than conventional cameras. By treating vidar as spike trains in biological vision, we have further developed a spiking neural network-based machine vision system that combines the speed of the machine and the mechanism of biological vision, achieving high-speed object detection and tracking 1,000x faster than human vision. We demonstrate the utility of the vidar camera and the super vision system in an assistant referee and target pointing system. Our study is expected to fundamentally revolutionize the image and video concepts and related industries, including photography, movies, and visual media, and to unseal a new spiking neural network-enabled speed-free machine vision era.
邮件日期 2022年01月25日
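vidar位序列的"累积-阈值"采样机制可以用几行代码示意(阈值与光子数均为虚构示例值):

```python
def vidar_pixel(photon_counts, threshold=8):
    """vidar像素示意:持续累积到达的光子数,
    累积量达到阈值即输出1并扣除阈值(保留余量),否则输出0。
    输出的位序列以极高时间分辨率记录场景辐射。"""
    acc, bits = 0, []
    for n in photon_counts:
        acc += n
        if acc >= threshold:
            bits.append(1)
            acc -= threshold
        else:
            bits.append(0)
    return bits
```

亮处光子流大,位序列中的1更密集;由1的密度即可在任意时刻重建辐射强度。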

329、深度强化学习与脉冲Q学习

  • Deep Reinforcement Learning with Spiking Q-learning 时间:2022年01月21日 第一作者:Ding Chen 链接.

摘要:在特殊的神经形态硬件的帮助下,脉冲神经网络(SNN)有望以更低的能耗实现人工智能。将SNN和深度强化学习(RL)相结合,为实际控制任务提供了一种有前途的节能方法。目前,基于SNN的RL方法很少。大多数算法要么缺乏泛化能力,要么在训练中使用人工神经网络(ANN)来估计值函数。前者需要为每个场景调整大量的超参数,而后者限制了不同类型的RL算法的应用,忽略了训练中的巨大能量消耗。为了开发一种鲁棒的基于脉冲的RL方法,我们从昆虫中发现的非脉冲中间神经元中汲取灵感,提出了深脉冲Q网络(DSQN),使用非脉冲神经元的膜电压作为Q值的表示,它可以直接使用端到端RL从高维感觉输入学习鲁棒策略。在17个Atari游戏上进行的实验表明,DSQN在大多数游戏中都优于基于人工神经网络的深度Q网络(DQN)。此外,实验结果表明,DSQN具有良好的学习稳定性和对抗性攻击的鲁棒性。

英文摘要 With the help of special neuromorphic hardware, spiking neural networks (SNNs) are expected to realize artificial intelligence with less energy consumption. It provides a promising energy-efficient way for realistic control tasks by combining SNNs and deep reinforcement learning (RL). There are only a few existing SNN-based RL methods at present. Most of them either lack generalization ability or employ Artificial Neural Networks (ANNs) to estimate value function in training. The former needs to tune numerous hyper-parameters for each scenario, and the latter limits the application of different types of RL algorithm and ignores the large energy consumption in training. To develop a robust spike-based RL method, we draw inspiration from non-spiking interneurons found in insects and propose the deep spiking Q-network (DSQN), using the membrane voltage of non-spiking neurons as the representation of Q-value, which can directly learn robust policies from high-dimensional sensory inputs using end-to-end RL. Experiments conducted on 17 Atari games demonstrate the effectiveness of DSQN by outperforming the ANN-based deep Q-network (DQN) in most games. Moreover, the experimental results show superior learning stability and robustness to adversarial attacks of DSQN.
邮件日期 2022年01月25日
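"用非脉冲神经元的膜电压表示Q值"的思路可以这样示意(假设性的简化读出层,并非论文实现):

```python
def dsqn_q_values(currents_over_time, beta=0.9):
    """读出层示意:每个动作对应一个不发放脉冲的神经元,
    各神经元对隐藏层输出电流做泄漏积分,
    T步之后的膜电压即作为该动作的Q值。"""
    num_actions = len(currents_over_time[0])
    v = [0.0] * num_actions
    for currents_t in currents_over_time:
        v = [beta * vi + ci for vi, ci in zip(v, currents_t)]
    return v

def greedy_action(q_values):
    """按Q值贪心选择动作。"""
    return max(range(len(q_values)), key=lambda a: q_values[a])
```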

328、神经形态混合脉冲运动检测器

  • NeuroHSMD: Neuromorphic Hybrid Spiking Motion Detector 时间:2022年01月19日 第一作者:Pedro Machado 链接.
邮件日期 2022年01月21日

327、POPPINS:一种基于群体的数字脉冲神经形态处理器,具有整数二次积分-激发神经元

  • POPPINS : A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons 时间:2022年01月19日 第一作者:Zuo-Wei Yeh 链接.

摘要:人脑作为生物处理系统,其内部运作在很大程度上仍是一个谜。受人脑功能的启发,并基于对果蝇等其他物种的简单神经网络系统的分析,神经形态计算系统引起了相当大的关注。在细胞级连接组学研究中,我们可以识别出被称为"群体"(population)的生物神经网络特征:它不仅包含网络中的循环全连接,还包含每个神经元的外部刺激连接和自连接。得益于网络中脉冲传输和输入数据的低数据带宽,脉冲神经网络具有低延迟、低功耗的设计。在本研究中,我们提出了一种可配置的、基于群体的数字脉冲神经形态处理器,采用180nm工艺技术,具有两个可配置的层级群体。此外,处理器中的神经元可以配置为一种新型模型,即整数二次积分-激发(integer quadratic integrate-and-fire)神经元模型,其膜电位为无符号8位数值。该处理器可以实时执行用于避障的智能决策。此外,所提出的方法也有助于开发仿生神经形态系统以及各种低功耗、低延迟的推理处理应用。

英文摘要 The inner operations of the human brain as a biological processing system remain largely a mystery. Inspired by the function of the human brain and based on the analysis of simple neural network systems in other species, such as Drosophila, neuromorphic computing systems have attracted considerable interest. In cellular-level connectomics research, we can identify the characteristics of biological neural network, called population, which constitute not only recurrent fully-connection in network, but also an external-stimulus and self-connection in each neuron. Relying on low data bandwidth of spike transmission in network and input data, Spiking Neural Networks exhibit low-latency and low-power design. In this study, we proposed a configurable population-based digital spiking neuromorphic processor in 180nm process technology with two configurable hierarchy populations. Also, these neurons in the processor can be configured as novel models, integer quadratic integrate-and-fire neuron models, which contain an unsigned 8-bit membrane potential value. The processor can implement intelligent decision making for avoidance in real-time. Moreover, the proposed approach enables the developments of biomimetic neuromorphic system and various low-power, and low-latency inference processing applications.
邮件日期 2022年01月20日

326、时态计算机组织

  • Temporal Computer Organization 时间:2022年01月19日 第一作者:James E. Smith 链接.

摘要:本文档重点介绍在使用时间瞬变进行通信和计算的技术中实现的计算系统。虽然用一般术语描述,但脉冲神经网络的实现是主要的兴趣。作为背景,本文总结了一个构造时态网络的代数。然后,描述了由同步段组成的系统组织。这些段在内部进行前馈,并在段之间进行反馈。同步时钟在每个计算步骤或周期结束时重置网络段。在其基本形式中,同步时钟仅执行重置功能。在神经网络的背景下,这满足了生物学的合理性。然而,功能完整性受到限制。通过允许将同步时钟用作作为时间参考值的附加功能输入,消除了该限制。

英文摘要 This document is focused on computing systems implemented in technologies that communicate and compute with temporal transients. Although described in general terms, implementations of spiking neural networks are of primary interest. As background, an algebra for constructing temporal networks is summarized. Then, a system organization consisting of synchronized segments is described. The segments are feedforward internally with feedback between segments. A synchronizing clock resets network segments at the end of each computation step or cycle. In its basic form, the synchronizing clock merely performs a reset function. In the context of neural networks, this satisfies biological plausibility. However, functional completeness is restricted. This restriction is removed by allowing use of the synchronizing clock as an additional function input that acts as a temporal reference value.
邮件日期 2022年01月20日

325、脉冲神经网络的FPGA优化硬件加速

  • FPGA-optimized Hardware acceleration for Spiking Neural Networks 时间:2022年01月18日 第一作者:Alessio Carpegna 链接.

摘要:人工智能(AI)在许多不同的任务中不断取得成功并日益重要。AI系统日益增长的普及度和复杂性促使研究人员开发专用硬件加速器。脉冲神经网络(SNN)在这个意义上是一种有前途的方案,因为其实现的模型更适合可靠的硬件设计;而且从神经科学的角度看,它们更好地模拟了人脑。本工作开发了一种面向SNN的硬件加速器,采用离线训练,应用于以MNIST为目标数据集的图像识别任务。设计中采用了多种技术来最小化面积并最大化性能,例如用简单的移位运算替换乘法,以及尽量减少花费在非活动脉冲上的时间(这些脉冲对更新神经元内部状态毫无用处)。该设计以Xilinx Artix-7 FPGA为目标,总共仅使用约40%的可用硬件资源;与全精度软件实现相比,分类时间减少了三个数量级,而精度仅下降4.5%。

英文摘要 Artificial intelligence (AI) is gaining success and importance in many different tasks. The growing pervasiveness and complexity of AI systems push researchers towards developing dedicated hardware accelerators. Spiking Neural Networks (SNN) represent a promising solution in this sense since they implement models that are more suitable for a reliable hardware design. Moreover, from a neuroscience perspective, they better emulate a human brain. This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task, using the MNIST as the target dataset. Many techniques are used to minimize the area and to maximize the performance, such as the replacement of the multiplication operation with simple bit shifts and the minimization of the time spent on inactive spikes, useless for the update of neurons' internal state. The design targets a Xilinx Artix-7 FPGA, using in total around the 40% of the available hardware resources and reducing the classification time by three orders of magnitude, with a small 4.5% impact on the accuracy, if compared to its software, full precision counterpart.
注释 6 pages, 5 figures
邮件日期 2022年01月19日
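"用移位替换乘法"的思路可以这样示意(假设权重被量化为2的负幂;示例数值为虚构):

```python
def shift_mul(v, k):
    """当权重 w = 2**(-k) 时,整数乘法 v*w 可用右移k位近似实现,
    在FPGA上无需乘法器,代价是截断(量化)误差。"""
    return v >> k
```

例如 233 * 0.25 ≈ 233 >> 2 = 58,截断掉的余数即量化代价。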

324、利用深度学习的经验教训训练脉冲神经网络

  • Training Spiking Neural Networks Using Lessons From Deep Learning 时间:2022年01月14日 第一作者:Jason K. Eshraghian 链接.
邮件日期 2022年01月19日

323、在多层循环脉冲神经网络的资格传播中引入STDP

  • Including STDP to eligibility propagation in multi-layer recurrent spiking neural networks 时间:2022年01月05日 第一作者:Werner van der Veen 链接.

摘要:与基于深度学习的方法相比,神经形态系统中的脉冲神经网络(SNN)能效更高,但目前还没有明确的、有竞争力的学习算法来训练此类SNN。资格传播(e-prop)提供了一种在低功耗神经形态硬件中训练有竞争力的循环SNN的高效且生物学上合理的方法。在本报告中,我们复现了e-prop先前在语音分类任务上的表现,并分析了引入类STDP行为的影响。在ALIF神经元模型中加入STDP可以提高分类性能,但对Izhikevich e-prop神经元则并非如此。最后,我们发现在单层循环SNN中实现的e-prop始终优于多层变体。

英文摘要 Spiking neural networks (SNNs) in neuromorphic systems are more energy efficient compared to deep learning-based methods, but there is no clear competitive learning algorithm for training such SNNs. Eligibility propagation (e-prop) offers an efficient and biologically plausible way to train competitive recurrent SNNs in low-power neuromorphic hardware. In this report, previous performance of e-prop on a speech classification task is reproduced, and the effects of including STDP-like behavior are analyzed. Including STDP to the ALIF neuron model improves the classification performance, but this is not the case for the Izhikevich e-prop neuron. Finally, it was found that e-prop implemented in a single-layer recurrent SNN consistently outperforms a multi-layer variant.
邮件日期 2022年01月20日
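文中的"类STDP行为"基于经典的配对STDP规则,可示意如下(时间常数与幅值为常见示例值,并非论文参数):

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """配对STDP:突触前脉冲先于突触后脉冲(dt>0)则权重增强(LTP),
    反之则减弱(LTD),幅值随脉冲时间差指数衰减。"""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    if dt < 0:
        return -a_minus * math.exp(dt / tau)
    return 0.0
```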

322、具有时间截断局部反向传播的脉冲神经网络的有效训练

  • Efficient Training of Spiking Neural Networks with Temporally-Truncated Local Backpropagation through Time 时间:2021年12月13日 第一作者:Wenzhe Guo 链接.

摘要:由于复杂的神经动力学以及放电函数固有的不可微性,直接训练脉冲神经网络(SNN)仍然具有挑战性。用于训练SNN的著名的时间反向传播(BPTT)算法内存占用大,且阻碍了反向解锁和更新解锁,因而无法发挥局部监督训练方法的潜力。本文提出了一种高效的SNN直接训练算法,将局部监督训练方法与时间截断的BPTT算法相结合。该算法同时利用BPTT中的时间局部性和空间局部性,显著降低了计算成本,包括GPU内存占用、主存访问和算术运算。我们深入探索了时间截断长度和局部训练块大小的设计空间,并评测了它们对运行不同类型任务的不同网络分类精度的影响。结果表明,时间截断对基于帧的数据集的分类精度有负面影响,但能提高动态视觉传感器(DVS)记录数据集上的精度。尽管会造成信息丢失,局部训练仍能缓解过拟合。时间截断与局部训练的联合作用可以减缓精度下降,甚至提高精度。此外,与标准端到端BPTT相比,训练诸如用AlexNet分类CIFAR10-DVS数据集这样的深层SNN模型时,精度提高了7.26%,GPU内存减少89.94%,内存访问减少10.79%,MAC运算减少99.64%。

英文摘要 Directly training spiking neural networks (SNNs) has remained challenging due to complex neural dynamics and intrinsic non-differentiability in firing functions. The well-known backpropagation through time (BPTT) algorithm proposed to train SNNs suffers from large memory footprint and prohibits backward and update unlocking, making it impossible to exploit the potential of locally-supervised training methods. This work proposes an efficient and direct training algorithm for SNNs that integrates a locally-supervised training method with a temporally-truncated BPTT algorithm. The proposed algorithm explores both temporal and spatial locality in BPTT and contributes to significant reduction in computational cost including GPU memory utilization, main memory access and arithmetic operations. We thoroughly explore the design space concerning temporal truncation length and local training block size and benchmark their impact on classification accuracy of different networks running different types of tasks. The results reveal that temporal truncation has a negative effect on the accuracy of classifying frame-based datasets, but leads to improvement in accuracy on dynamic-vision-sensor (DVS) recorded datasets. In spite of resulting information loss, local training is capable of alleviating overfitting. The combined effect of temporal truncation and local training can lead to the slowdown of accuracy drop and even improvement in accuracy. In addition, training deep SNNs models such as AlexNet classifying CIFAR10-DVS dataset leads to 7.26% increase in accuracy, 89.94% reduction in GPU memory, 10.79% reduction in memory access, and 99.64% reduction in MAC operations compared to the standard end-to-end BPTT.
注释 16
邮件日期 2022年01月20日

321、通过直接训练的深度脉冲Q网络实现人类水平控制

  • Human-Level Control through Directly-Trained Deep Spiking Q-Networks 时间:2021年12月13日 第一作者:Guisong Liu 链接.

摘要:作为第三代神经网络,脉冲神经网络(Spiking neural networks,SNNs)由于具有较高的能量效率,在神经形态硬件方面有着巨大的潜力。然而,由于二进制输出和脉冲函数的不可微性,深度脉冲强化学习(DSRL)即基于SNN的强化学习(RL)仍处于初级阶段。为了解决这些问题,本文提出了一种深度脉冲Q网络(DSQN)。具体来说,我们提出了一种基于泄漏集成与激发(LIF)神经元和深度Q网络(DQN)的直接训练深度脉冲强化学习体系结构。然后,我们对深度脉冲Q网络采用了直接脉冲学习算法。我们进一步从理论上证明了在DSQN中使用LIF神经元的优势。在17款表现最好的Atari游戏上进行了综合实验,将我们的方法与最先进的转换方法进行比较。实验结果证明了该方法在性能、稳定性、鲁棒性和能量效率方面的优越性。据我们所知,我们的工作是第一个通过直接训练的SNN在多个Atari游戏中实现最先进性能的工作。

英文摘要 As the third-generation neural networks, Spiking Neural Networks (SNNs) have great potential on neuromorphic hardware because of their high energy-efficiency. However, Deep Spiking Reinforcement Learning (DSRL), i.e., the Reinforcement Learning (RL) based on SNNs, is still in its preliminary stage due to the binary output and the non-differentiable property of the spiking function. To address these issues, we propose a Deep Spiking Q-Network (DSQN) in this paper. Specifically, we propose a directly-trained deep spiking reinforcement learning architecture based on the Leaky Integrate-and-Fire (LIF) neurons and Deep Q-Network (DQN). Then, we adapt a direct spiking learning algorithm for the Deep Spiking Q-Network. We further demonstrate the advantages of using LIF neurons in DSQN theoretically. Comprehensive experiments have been conducted on 17 top-performing Atari games to compare our method with the state-of-the-art conversion method. The experimental results demonstrate the superiority of our method in terms of performance, stability, robustness and energy-efficiency. To the best of our knowledge, our work is the first one to achieve state-of-the-art performance on multiple Atari games with the directly-trained SNN.
邮件日期 2022年01月20日

320、通过解决脉冲神经网络中退化问题的症结来推进深度残差学习

  • Advancing Deep Residual Learning by Solving the Crux of Degradation in Spiking Neural Networks 时间:2021年12月09日 第一作者:Yifan Hu 链接.

摘要:尽管神经形态计算发展迅速,但脉冲神经网络(SNN)深度不足以及由此导致的表征能力不足,严重限制了其在实践中的应用范围。残差学习和捷径连接已被证明是训练深度神经网络的重要方法,但先前的工作很少评估它们对基于脉冲的通信和时空动力学特性的适用性。这一疏忽导致信息流受阻,并伴随退化问题。在本文中,我们找出了问题的症结,进而为SNN提出了一种新的残差块,它能够显著扩展直接训练SNN的深度,例如在CIFAR-10上可达482层、在ImageNet上可达104层,且没有观察到丝毫退化问题。我们在基于帧的数据集和神经形态数据集上验证了方法的有效性,我们的SRM-ResNet104在ImageNet上取得了76.02%的优异精度,这在直接训练SNN领域尚属首次。估计结果显示其能效极高:所得网络平均每个神经元只需一个脉冲即可对输入样本进行分类。我们相信,这种强大且可扩展的建模将为SNN的进一步探索提供有力支撑。

英文摘要 Despite the rapid progress of neuromorphic computing, the inadequate depth and the resulting insufficient representation power of spiking neural networks (SNNs) severely restrict their application scope in practice. Residual learning and shortcuts have been evidenced as an important approach for training deep neural networks, but rarely did previous work assess their applicability to the characteristics of spike-based communication and spatiotemporal dynamics. This negligence leads to impeded information flow and the accompanying degradation problem. In this paper, we identify the crux and then propose a novel residual block for SNNs, which is able to significantly extend the depth of directly trained SNNs, e.g., up to 482 layers on CIFAR-10 and 104 layers on ImageNet, without observing any slight degradation problem. We validate the effectiveness of our methods on both frame-based and neuromorphic datasets, and our SRM-ResNet104 achieves a superior result of 76.02% accuracy on ImageNet, the first time in the domain of directly trained SNNs. The great energy efficiency is estimated and the resulting networks need on average only one spike per neuron for classifying an input sample. We believe our powerful and scalable modeling will provide a strong support for further exploration of SNNs.
注释 arXiv admin note: substantial text overlap with arXiv:2112.08954
邮件日期 2022年01月20日

319、稀疏脉冲梯度下降

  • Sparse Spiking Gradient Descent 时间:2022年01月13日 第一作者:Nicolas Perez-Nieves 链接.
邮件日期 2022年01月14日

318、利用基于时间的神经元提高脉冲神经网络的精度

  • Improving Spiking Neural Network Accuracy Using Time-based Neurons 时间:2022年01月05日 第一作者:Hanseok Kim 链接.

摘要:由于在von Neumann体系结构上运行深度学习模型在降低功耗方面存在根本性限制,基于模拟神经元的低功耗脉冲神经网络神经形态计算系统的研究备受关注。为了集成大量神经元,神经元需要设计得占据很小的面积;但随着工艺尺寸的缩小,模拟神经元难以按比例缩小,且会面临电压余量/动态范围减小和电路非线性加剧的问题。有鉴于此,本文首先对采用28nm工艺设计的现有基于电流镜的电压域神经元的非线性行为进行建模,并表明神经元的非线性效应会严重降低SNN的推理精度。然后,为缓解该问题,我们提出了一种新型神经元,它在时域中处理传入脉冲并大大改善线性度,从而相比现有电压域神经元提高了推理精度。在MNIST数据集上的测试表明,所提神经元的推理错误率与理想神经元相差不到0.1%。

英文摘要 Due to the fundamental limit to reducing power consumption of running deep learning models on von-Neumann architecture, research on neuromorphic computing systems based on low-power spiking neural networks using analog neurons is in the spotlight. In order to integrate a large number of neurons, neurons need to be designed to occupy a small area, but as technology scales down, analog neurons are difficult to scale, and they suffer from reduced voltage headroom/dynamic range and circuit nonlinearities. In light of this, this paper first models the nonlinear behavior of existing current-mirror-based voltage-domain neurons designed in a 28nm process, and show SNN inference accuracy can be severely degraded by the effect of neuron's nonlinearity. Then, to mitigate this problem, we propose a novel neuron, which processes incoming spikes in the time domain and greatly improves the linearity, thereby improving the inference accuracy compared to the existing voltage-domain neuron. Tested on the MNIST dataset, the inference error rate of the proposed neuron differs by less than 0.1% from that of the ideal neuron.
邮件日期 2022年01月06日

317、一种基于感受野的鲁棒视觉采样模型

  • A Robust Visual Sampling Model Inspired by Receptive Field 时间:2022年01月04日 第一作者:Liwen Hu 链接.

摘要:模拟视网膜中央凹的脉冲相机(Spike camera)可以通过发放脉冲来报告逐像素亮度强度的累积。作为一种具有高时间分辨率的仿生视觉传感器,它在计算机视觉领域有巨大潜力。然而,现有脉冲相机的采样模型极易受量化和噪声影响,无法有效捕捉物体的纹理细节。在这项工作中,我们提出了一种受感受野启发的鲁棒视觉采样模型(RVSM),使用由高斯差分(DoG)生成的小波滤波器和高斯滤波器来模拟感受野。利用类似小波逆变换的方法,可以将RVSM的脉冲数据转换成图像。为测试性能,我们还提出了一个包含多种运动场景的高速运动脉冲数据集(HMD)。通过比较HMD中的重建图像,我们发现RVSM可以大大提高脉冲相机的信息捕获能力。更重要的是,由于模仿感受野机制采集区域信息,RVSM能够有效滤除高强度噪声,大大改善了脉冲相机对噪声敏感的问题。此外,由于采样结构具有很强的泛化性,RVSM也适用于其他神经形态视觉传感器。上述实验均在脉冲相机模拟器上完成。

英文摘要 Spike camera mimicking the retina fovea can report per-pixel luminance intensity accumulation by firing spikes. As a bio-inspired vision sensor with high temporal resolution, it has a huge potential for computer vision. However, the sampling model in current Spike camera is so susceptible to quantization and noise that it cannot capture the texture details of objects effectively. In this work, a robust visual sampling model inspired by receptive field (RVSM) is proposed where wavelet filter generated by difference of Gaussian (DoG) and Gaussian filter are used to simulate receptive field. Using corresponding method similar to inverse wavelet transform, spike data from RVSM can be converted into images. To test the performance, we also propose a high-speed motion spike dataset (HMD) including a variety of motion scenes. By comparing reconstructed images in HMD, we find RVSM can improve the ability of capturing information of Spike camera greatly. More importantly, due to mimicking receptive field mechanism to collect regional information, RVSM can filter high intensity noise effectively and improves the problem that Spike camera is sensitive to noise largely. Besides, due to the strong generalization of sampling structure, RVSM is also suitable for other neuromorphic vision sensor. Above experiments are finished in a Spike camera simulator.
邮件日期 2022年01月05日
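感受野的"中心-周边拮抗"结构常用高斯差分(DoG)核建模,可示意如下(核大小与两个σ均为虚构示例值,并非论文参数):

```python
import numpy as np

def dog_kernel(size=9, sigma_c=1.0, sigma_s=2.0):
    """高斯差分(DoG)核:中心窄高斯减去周边宽高斯,
    得到中心为正、周边为负的感受野形状。"""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    def g(sigma):
        return np.exp(-r2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return g(sigma_c) - g(sigma_s)
```

将该核与脉冲累积图做卷积,即可在采集区域信息的同时抑制局部高强度噪声。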

316、通过正则化和归一化改进脉冲神经网络中的代理梯度学习

  • Improving Surrogate Gradient Learning in Spiking Neural Networks via Regularization and Normalization 时间:2021年12月13日 第一作者:N 链接.

摘要:脉冲神经网络(SNN)不同于深度学习中使用的经典网络:神经元使用称为脉冲的电脉冲进行通信,就像生物神经元一样。SNN对人工智能技术很有吸引力,因为它们可以在低功耗的神经形态芯片上实现。然而,SNN通常比其模拟对应物精度更低。在本报告中,我们研究了各种正则化和规范化技术,目的是改进SNN中的替代梯度学习。

英文摘要 Spiking neural networks (SNNs) are different from the classical networks used in deep learning: the neurons communicate using electrical impulses called spikes, just like biological neurons. SNNs are appealing for AI technology, because they could be implemented on low power neuromorphic chips. However, SNNs generally remain less accurate than their analog counterparts. In this report, we examine various regularization and normalization techniques with the goal of improving surrogate gradient learning in SNNs.
注释 Bachelor Thesis
邮件日期 2022年01月10日

315、可塑性函数对神经元集合的影响

  • Effects of Plasticity Functions on Neural Assemblies 时间:2021年12月29日 第一作者:Christodoulos Constantinides 链接.

摘要:我们探讨了各种可塑性函数对神经元集合(assemblies)的影响。为了弥合实验理论与计算理论之间的鸿沟,我们使用了一个概念框架——集合演算(Assembly Calculus),这是一个基于神经元集合描述大脑功能的形式系统。集合演算包括投射、关联和合并神经元集合的操作。我们的研究重点是用集合演算模拟不同的可塑性函数。我们的主要贡献是对投射操作的修改和评估。我们用Oja规则和脉冲时序依赖可塑性(STDP)规则进行实验,并测试各种超参数的影响。

英文摘要 We explore the effects of various plasticity functions on assemblies of neurons. To bridge the gap between experimental and computational theories we make use of a conceptual framework, the Assembly Calculus, which is a formal system for the description of brain function based on assemblies of neurons. The Assembly Calculus includes operations for projecting, associating, and merging assemblies of neurons. Our research is focused on simulating different plasticity functions with Assembly Calculus. Our main contribution is the modification and evaluation of the projection operation. We experiment with Oja's and Spike Time-Dependent Plasticity (STDP) rules and test the effect of various hyper-parameters.
邮件日期 2022年01月03日
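Oja规则是带权重归一化衰减项的Hebb学习,可示意如下(学习率为虚构示例值):

```python
def oja_update(w, x, eta=0.1):
    """Oja规则:y = w·x,Δw = eta * y * (x - y*w)。
    与纯Hebb规则(Δw = eta*y*x)相比,
    衰减项 -eta*y^2*w 使权重范数保持有界,避免无限增长。"""
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]
```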

314、硅神经元事件计时的可靠性

  • Reliability of Event Timing in Silicon Neurons 时间:2021年12月28日 第一作者:Tai Miyazaki Kirby 链接.

摘要:模拟低压电子器件在以前所未有的能效实现硅神经元(SiN)方面展现出巨大潜力。然而,它们对工艺、电压和温度(PVT)变化以及噪声的固有高敏感性,长期以来被认为是开发有效神经形态解决方案的主要瓶颈。受生物物理新皮层神经元脉冲传输研究的启发,我们证明,与生物神经元类似,模拟硅神经元中固有的噪声和可变性可以与可靠的脉冲传输共存。我们在最近的一个爆发(bursting)神经元神经形态模型上展示了三种不同类型的可靠事件传输,以说明这一特性:单脉冲传输、爆发传输,以及半中心振荡器(HCO)网络的开关控制。

英文摘要 Analog, low-voltage electronics show great promise in producing silicon neurons (SiNs) with unprecedented levels of energy efficiency. Yet, their inherently high susceptibility to process, voltage and temperature (PVT) variations, and noise has long been recognised as a major bottleneck in developing effective neuromorphic solutions. Inspired by spike transmission studies in biophysical, neocortical neurons, we demonstrate that the inherent noise and variability can coexist with reliable spike transmission in analog SiNs, similarly to biological neurons. We illustrate this property on a recent neuromorphic model of a bursting neuron by showcasing three different relevant types of reliable event transmission: single spike transmission, burst transmission, and the on-off control of a half-centre oscillator (HCO) network.
邮件日期 2021年12月30日
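The reliability phenomenon the paper builds on — noisy neurons still producing repeatable spike times when driven by a shared fluctuating input — can be illustrated with a plain LIF model. All parameters here are illustrative; this is not the paper's bursting-neuron circuit.

```python
import numpy as np

def lif_spike_times(drive, noise, dt=0.1, tau=10.0, v_th=1.0):
    """Leaky integrate-and-fire neuron; returns spike times in ms."""
    v, times = 0.0, []
    for i in range(len(drive)):
        v += dt / tau * (drive[i] - v) + noise[i]   # leaky integration + private noise
        if v >= v_th:
            times.append(i * dt)
            v = 0.0                                  # hard reset
    return times

rng = np.random.default_rng(1)
steps = 5000                                         # 500 ms at dt = 0.1 ms
frozen = 1.5 + 3.0 * rng.standard_normal(steps)      # fluctuating drive, shared by all trials
trials = [lif_spike_times(frozen, 0.005 * rng.standard_normal(steps))
          for _ in range(2)]                         # independent intrinsic noise per trial
# The shared fluctuations, not the private noise, determine when spikes occur,
# so spike counts and timings stay nearly identical across trials.
```

This mirrors the classic frozen-noise observation in cortical neurons: a strongly fluctuating drive pins spike times, letting intrinsic variability coexist with reliable event timing.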

313. N-Omniglot: a Large-scale Dataset for Spatio-Temporal Sparse Few-shot Learning

  • N-Omniglot: a Large-scale Dataset for Spatio-Temporal Sparse Few-shot Learning. Date: 2021-12-25. First author: Yang Li. Link.

Abstract: Few-shot learning (learning from a few samples) is one of the most important capacities of the human brain. However, current artificial intelligence systems struggle to achieve this ability, as do biologically plausible spiking neural networks (SNNs). Datasets for traditional few-shot learning domains provide little temporal information, and the absence of neuromorphic datasets has hindered the development of few-shot learning for SNNs. Here, we provide the first such neuromorphic dataset, N-Omniglot, recorded with a Dynamic Vision Sensor (DVS). It contains 1623 categories of handwritten characters, with only 20 samples per class. N-Omniglot fills the need for an SNN-oriented neuromorphic dataset with high sparseness and tremendous temporal coherence. Additionally, owing to the chronological information of the strokes, the dataset provides a powerful challenge and a suitable benchmark for developing SNN algorithms in the few-shot learning domain. For verification, we also provide spiking versions of improved nearest-neighbor, convolutional network, SiameseNet, and meta-learning algorithms.

Mail date: 2021-12-28
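As a toy illustration of the nearest-neighbor baseline used for verification, the sketch below runs a 5-way few-shot episode on synthetic feature vectors standing in for per-sample event counts. The data, feature size, and noise level are invented for illustration; real N-Omniglot samples are DVS event streams, not static vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_class_samples(proto, n, noise=0.05):
    """Synthetic per-sample features for one class (stand-in for event counts)."""
    return proto + noise * rng.standard_normal((n,) + proto.shape)

# 5-way 1-shot episode: 5 classes, 1 support sample each, 3 queries per class
protos = [rng.random(64) for _ in range(5)]
support = np.stack([make_class_samples(p, 1)[0] for p in protos])
queries, labels = [], []
for c, p in enumerate(protos):
    queries.append(make_class_samples(p, 3))
    labels += [c] * 3
queries = np.concatenate(queries)                     # shape (15, 64)

# Nearest neighbor: assign each query the label of the closest support sample
d = np.linalg.norm(queries[:, None, :] - support[None, :, :], axis=-1)
pred = d.argmin(axis=1)
acc = float((pred == np.array(labels)).mean())
```

With well-separated synthetic prototypes the episode is solved perfectly; the point of N-Omniglot is that real spatio-temporal sparse data makes even this simple baseline hard.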

312. Advancing Residual Learning towards Powerful Deep Spiking Neural Networks

  • Advancing Residual Learning towards Powerful Deep Spiking Neural Networks. Date: 2021-12-23. First author: Yifan Hu. Link.
Mail date: 2021-12-24

311. Can Deep Neural Networks be Converted to Ultra Low-Latency Spiking Neural Networks?

  • Can Deep Neural Networks be Converted to Ultra Low-Latency Spiking Neural Networks? Date: 2021-12-22. First author: Gourav Datta. Link.

Abstract: Spiking neural networks (SNNs), which operate via binary spikes distributed over time, have emerged as a promising energy-efficient ML paradigm for resource-constrained devices. However, current state-of-the-art (SOTA) SNNs require multiple time steps for acceptable inference accuracy, increasing spiking activity and, consequently, energy consumption. SOTA training strategies for SNNs involve conversion from a non-spiking deep neural network (DNN). In this paper, we determine that SOTA conversion strategies cannot yield ultra-low latency because they incorrectly assume that the DNN and SNN pre-activation values are uniformly distributed. We propose a new training algorithm that accurately captures these distributions, minimizing the error between the DNN and the converted SNN. The resulting SNNs have ultra-low latency and high activation sparsity, yielding significant improvements in compute efficiency. In particular, we evaluate our framework on image recognition tasks from the CIFAR-10 and CIFAR-100 datasets on several VGG and ResNet architectures. We obtain a top-1 accuracy of 64.19% with only 2 time steps on the CIFAR-100 dataset, with ~159.2x lower compute energy compared to an iso-architecture standard DNN. Compared to other SOTA SNN models, our models perform inference 2.5-8x faster (i.e., with fewer time steps).

Notes: Accepted to DATE 2022
Mail date: 2021-12-23
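A common ingredient in DNN-to-SNN conversion — and the kind of pre-activation statistic the paper argues must be modeled accurately — is setting each layer's firing threshold from the observed pre-activation distribution. The sketch below uses a simple percentile rule on synthetic ReLU outputs; the percentile, distribution, and integrate-and-fire model are illustrative, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def percentile_threshold(preacts, p=99.7):
    """Threshold balancing: a high percentile of observed pre-activations,
    which is robust to outliers compared with taking the maximum."""
    return float(np.percentile(preacts, p))

def if_rate(x, v_th, timesteps):
    """Integrate-and-fire rate for a constant input x (soft reset: subtract v_th)."""
    v, spikes = 0.0, 0
    for _ in range(timesteps):
        v += x
        if v >= v_th:
            spikes += 1
            v -= v_th
    return spikes / timesteps

# Calibration: ReLU pre-activations collected from a (synthetic) calibration set
preacts = np.maximum(rng.normal(1.0, 0.5, size=10_000), 0.0)
v_th = percentile_threshold(preacts)

x = 1.0
approx = if_rate(x, v_th, timesteps=100) * v_th   # rate * v_th approximates ReLU(x)
```

The approximation error shrinks with the number of time steps (bounded by roughly v_th / T), which is exactly why naive conversion needs many steps and why distribution-aware thresholds matter at ultra-low latency.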

310. Accurate online training of dynamical spiking neural networks through Forward Propagation Through Time

  • Accurate online training of dynamical spiking neural networks through Forward Propagation Through Time. Date: 2021-12-20. First author: Bojian Yin. Link.

Abstract: The event-driven and sparse nature of communication between spiking neurons in the brain holds great promise for flexible and energy-efficient AI. Recent advances in learning algorithms have demonstrated that recurrent networks of spiking neurons can be effectively trained to achieve performance competitive with standard recurrent neural networks. Still, because these learning algorithms use error backpropagation through time (BPTT), they suffer from high memory requirements, are slow to train, and are incompatible with online learning. This limits their application to relatively small networks and limited temporal sequence lengths. Online approximations to BPTT with lower computational and memory complexity have been proposed (e-prop, OSTL), but in practice they also suffer from memory limitations and, as approximations, do not outperform standard BPTT training. Here, we show how a recently developed alternative to BPTT, Forward Propagation Through Time (FPTT), can be applied to spiking neural networks. Unlike BPTT, FPTT attempts to minimize an ongoing, dynamically regularized risk on the loss. As a result, FPTT can be computed online and has fixed complexity with respect to the sequence length. When combined with a novel dynamic spiking neuron model, the Liquid-Time-Constant neuron, we show that SNNs trained with FPTT outperform online BPTT approximations and approach or exceed offline BPTT accuracy on temporal classification tasks. This approach thus makes it feasible to train SNNs online in a memory-friendly fashion on long sequences and to scale SNNs up to novel and complex neural architectures.

Notes: 12 pages, 4 figures
Mail date: 2021-12-22
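The core of FPTT is replacing the instantaneous loss with a dynamically regularized one: at each step the parameters are pulled toward a slowly moving target, which keeps purely online updates stable. A minimal sketch on a streaming linear-regression task, with a simplified target update (the full FPTT rule also adds a gradient-correction term to the running average, omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, lr = 0.5, 0.1
w_true = np.array([1.0, -2.0, 0.5])
w = np.zeros(3)        # online parameters
w_bar = np.zeros(3)    # slowly moving regularization target

for t in range(2000):
    x = rng.standard_normal(3)
    y = w_true @ x                         # streaming supervision
    err = w @ x - y
    # Gradient of the dynamically regularized risk:
    #   loss_t(w) + (alpha / 2) * ||w - w_bar||^2
    w = w - lr * (err * x + alpha * (w - w_bar))
    w_bar = 0.5 * (w_bar + w)              # simplified running-average target update
# w converges to w_true using only constant per-step (online) computation,
# independent of the sequence length.
```

The memory story is the point: unlike BPTT, nothing here grows with the number of time steps — each update touches only the current sample, `w`, and `w_bar`.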

309. Adversarial Attacks on Spiking Convolutional Networks for Event-based Vision

  • Adversarial Attacks on Spiking Convolutional Networks for Event-based Vision. Date: 2021-12-20. First author: Julian Büchel. Link.
Notes: 16 pages, preprint, submitted to ICLR 2022
Mail date: 2021-12-21

308. Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State

  • Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State. Date: 2021-12-17. First author: Mingqing Xiao. Link.
Notes: Accepted by NeurIPS 2021 (Spotlight)
Mail date: 2021-12-21

307. Optical Flow Estimation for Spiking Camera

  • Optical Flow Estimation for Spiking Camera. Date: 2021-12-17. First author: Liwen Hu. Link.
Notes: The first two authors contributed equally
Mail date: 2021-12-20

306. Advancing Residual Learning towards Powerful Deep Spiking Neural Networks

  • Advancing Residual Learning towards Powerful Deep Spiking Neural Networks. Date: 2021-12-15. First author: Yifan Hu. Link.

Abstract: Despite the rapid progress of neuromorphic computing, the inadequate capacity and insufficient representation power of spiking neural networks (SNNs) severely restrict their application scope in practice. Residual learning and shortcuts have been evidenced as an important approach for training deep neural networks, but previous work rarely assessed their applicability to the characteristics of spike-based communication and spatiotemporal dynamics. In this paper, we first identify that this negligence leads to impeded information flow and an accompanying degradation problem in previous residual SNNs. We then propose a novel SNN-oriented residual block, MS-ResNet, which is able to significantly extend the depth of directly trained SNNs, e.g. up to 482 layers on CIFAR-10 and 104 layers on ImageNet, without observing any degradation problem. We validate the effectiveness of MS-ResNet on both frame-based and neuromorphic datasets; MS-ResNet104 achieves a superior result of 76.02% accuracy on ImageNet, a first in the domain of directly trained SNNs. Great energy efficiency is also observed: on average, only one spike per neuron is needed to classify an input sample. We believe our powerful and scalable models will provide strong support for further exploration of SNNs.

Mail date: 2021-12-17
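The key design idea in MS-ResNet is where the shortcut connects: spikes are generated before the weight layers, and the residual branch is added back onto the membrane potential, so the identity path never passes through a spiking nonlinearity. A toy numpy sketch of one such block, with fully-connected layers standing in for convolutions; all sizes and constants are illustrative.

```python
import numpy as np

def spike(v, v_th=1.0):
    """Heaviside spiking nonlinearity (inference only; surrogate gradients omitted)."""
    return (v >= v_th).astype(float)

def ms_residual_block(v_in, W1, W2, decay=0.5):
    """Membrane-shortcut residual block: spikes are emitted *before* the weight
    layers, and the result is added to the (decayed) membrane potential, so the
    identity path carries real-valued state rather than binary spikes."""
    s1 = spike(v_in)
    s2 = spike(W1 @ s1)
    return decay * v_in + W2 @ s2

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.3, size=(8, 8))
W2 = rng.normal(scale=0.3, size=(8, 8))

v = rng.normal(size=8)
for _ in range(4):        # stacking blocks: membrane information flows unimpeded
    v = ms_residual_block(v, W1, W2)
```

Because the shortcut carries membrane potential rather than spikes, stacking many such blocks does not binarize the identity path — the property the paper credits for avoiding the degradation problem at depth.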

305. Planning with Biological Neurons and Synapses

  • Planning with Biological Neurons and Synapses. Date: 2021-12-15. First author: Francesco d'Amore. Link.

Abstract: We revisit the planning problem in the blocks world, and we implement a known heuristic for this task. Importantly, our implementation is biologically plausible, in the sense that it is carried out exclusively through the spiking of neurons. Even though much has been accomplished in the blocks world over the past five decades, we believe this is the first algorithm of its kind. The input is a sequence of symbols encoding an initial set of block stacks as well as a target set, and the output is a sequence of motion commands such as "put the top block in stack 1 on the table". The program is written in the Assembly Calculus, a recently proposed computational framework meant to model computation in the brain by bridging the gap between neural activity and cognitive function. Its elementary objects are assemblies of neurons (stable sets of neurons whose simultaneous firing signifies that the subject is thinking of an object, concept, word, etc.), its commands include project and merge, and its execution model is based on widely accepted tenets of neuroscience. A program in this framework essentially sets up a dynamical system of neurons and synapses that eventually, with high probability, accomplishes the task. The purpose of this work is to establish empirically that reasonably large programs in the Assembly Calculus can execute correctly and reliably, and that rather realistic -- if idealized -- higher cognitive functions, such as planning in the blocks world, can be implemented successfully by such programs.

Mail date: 2021-12-16
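The Assembly Calculus `project` operation underlying such programs is simple to simulate: a stimulus assembly fires repeatedly into a target population, only the k most-driven neurons fire each round (the cap), and Hebbian plasticity stabilizes the winner set into a new assembly. A minimal single-area sketch; the area size, cap, edge probability, and plasticity rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, p, beta, rounds = 1000, 50, 0.05, 0.1, 10

W = (rng.random((n, n)) < p).astype(float)      # W[i, j]: random synapse j -> i
stim = rng.choice(n, size=k, replace=False)     # stimulus assembly (fires every round)

active = np.zeros(n, dtype=bool)                # current winners in the target area
overlap = 0
for _ in range(rounds):
    # Input from the stimulus plus recurrent input from last round's winners
    drive = W[:, stim].sum(axis=1) + W[:, active].sum(axis=1)
    winners = np.argsort(drive)[-k:]            # cap-k winner-take-all
    W[np.ix_(winners, stim)] *= 1 + beta        # Hebbian strengthening: stim -> winners
    if active.any():
        W[np.ix_(winners, np.flatnonzero(active))] *= 1 + beta
    new_active = np.zeros(n, dtype=bool)
    new_active[winners] = True
    overlap = int((new_active & active).sum())  # stability of the winner set
    active = new_active
# After a few rounds of fire-and-strengthen, the winner set largely repeats
# itself from round to round: a stable assembly has formed.
```

Plasticity is what makes this converge: repeated winners get compounding synaptic boosts, so their drive soon dominates and the winner set stops churning.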

304. Fully Spiking Variational Autoencoder

  • [ ]

About

This repository records my arXiv subscriptions on spiking neural networks, and [this](https://github.com/shenhaibo123/SNN_summaries) contains my summaries.

License:GNU General Public License v3.0