
Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines

Frontiers in neuroscience, 2017-06, Vol.11, p.324-324 [Peer-reviewed journal]

2017. This work is licensed under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. Copyright © 2017 Neftci, Augustine, Paul and Detorakis.

Digital/electronic document

  • Title:
    Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines
  • Authors: Neftci, Emre O ; Augustine, Charles ; Paul, Somnath ; Detorakis, Georgios
  • Subjects: Artificial intelligence ; Back propagation ; backpropagation algorithm ; Brain research ; Computers ; Deep learning ; Distance learning ; embedded cognition ; Feedback ; feedback alignment ; Firing pattern ; International conferences ; Learning algorithms ; Machine learning ; Memory ; Neural networks ; Neurons ; Neuroscience ; Neurosciences ; Signal processing ; spiking neural networks ; stochastic processes ; Synaptic plasticity ; Synaptic strength
  • Is part of: Frontiers in neuroscience, 2017-06, Vol.11, p.324-324
  • Abstract: An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and on precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation-invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.
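The abstract's key mechanism can be sketched as follows. In a feedback-alignment-style rule such as eRBP, the output error reaches each hidden neuron through a *fixed random* feedback matrix (one addition per error event into a dendritic compartment), and the weight update is gated by a boxcar function of the neuron's membrane potential (two comparisons). This is a minimal illustrative sketch, not the paper's implementation; the variable names, dimensions, and boxcar bounds are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 784, 100, 10

# Learned feedforward weights, and fixed random feedback weights G
# (random feedback alignment: G is drawn once and never updated).
W1 = rng.normal(0.0, 0.1, (N_HID, N_IN))
G1 = rng.normal(0.0, 0.1, (N_HID, N_OUT))

V_MIN, V_MAX = -1.0, 1.0  # assumed boxcar bounds on the membrane potential
LR = 1e-3                 # assumed learning rate

def erbp_update(W, G, pre_spikes, error, v_mem, lr=LR):
    """One event-driven random-BP weight update (illustrative sketch).

    pre_spikes : binary presynaptic spike vector, shape (N_in,)
    error      : output error, e.g. prediction - target, shape (N_out,)
    v_mem      : postsynaptic membrane potentials, shape (N_hid,)
    """
    # Error projected through the fixed random matrix G into each
    # neuron's dendritic compartment (the "one addition" per event).
    dendrite = G @ error                      # shape (N_hid,)
    # Boxcar gate: plasticity is enabled only while the membrane
    # potential lies inside (V_MIN, V_MAX) -- the "two comparisons".
    gate = (v_mem > V_MIN) & (v_mem < V_MAX)  # shape (N_hid,)
    # Update only synapses whose presynaptic neuron spiked.
    dW = -lr * np.outer(dendrite * gate, pre_spikes)
    return W + dW
```

In hardware terms, the appeal is that nothing network-wide is needed at the synapse: each update touches only the local presynaptic spike, the locally stored dendritic error, and the local membrane potential.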
  • Publisher: Switzerland: Frontiers Research Foundation
  • Language: English
  • Identifiers: ISSN: 1662-4548
    ISSN: 1662-453X
    EISSN: 1662-453X
    DOI: 10.3389/fnins.2017.00324
    PMID: 28680387
  • Sources: TestCollectionTL3OpenAccess
    GFMER Free Medical Journals
    PubMed Central
    ProQuest Central
