Random feedback alignment algorithms to train neural networks: why do they align?

Machine learning: science and technology, 2024-06, Vol.5 (2), p.025023 [Peer Reviewed Journal]

© 2024 The Author(s). Published by IOP Publishing Ltd under the Creative Commons Attribution 4.0 licence (http://creativecommons.org/licenses/by/4.0). EISSN: 2632-2153; DOI: 10.1088/2632-2153/ad3ee5

  • Title:
    Random feedback alignment algorithms to train neural networks: why do they align?
  • Author: Chu, Dominique ; Bacho, Florian
  • Subjects: Algorithms ; Alignment ; Artificial neural networks ; Back propagation networks ; Feedback ; feedback alignment ; Fixed points (mathematics) ; Machine learning ; Neural networks ; Random walk ; Stability criteria
  • Is Part Of: Machine learning: science and technology, 2024-06, Vol.5 (2), p.025023
  • Description: Feedback alignment (FA) algorithms are an alternative to backpropagation for training neural networks, whereby some of the partial derivatives required to compute the gradient are replaced by random terms. This essentially turns the update rule into a random walk in weight space. Surprisingly, learning still works with these algorithms, including the training of deep neural networks. The performance of FA is generally attributed to an alignment of the random walker's updates with the true gradient (the eponymous gradient alignment), which drives an approximate gradient descent. The mechanism that produces this alignment, however, remains unclear. In this paper, we use mathematical reasoning and simulations to investigate gradient alignment. We observe that the FA update rule has fixed points that correspond to extrema of the loss function, and we show that gradient alignment is a stability criterion for those fixed points. It is, however, only a necessary condition for good algorithm performance, not a sufficient one: experimentally, we demonstrate that high levels of gradient alignment can lead to poor algorithm performance and that the alignment does not always drive the gradient descent. (A minimal code sketch of the FA update appears after this record.)
  • Publisher: Bristol: IOP Publishing
  • Language: English
  • Identifier: EISSN: 2632-2153
    DOI: 10.1088/2632-2153/ad3ee5
  • Source: ProQuest Central
    DOAJ Directory of Open Access Journals
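
The description above characterises FA as replacing the transported transposed weights in the backward pass with fixed random terms. The NumPy sketch below is a minimal illustration of that idea, not code from the paper: the toy task, network size, learning rate, and the fixed feedback matrix B are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (assumption for illustration): learn y = sin(x).
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
Y = np.sin(X)

n_in, n_hid, n_out = 1, 32, 1
W1 = rng.normal(0, 0.5, (n_in, n_hid))
W2 = rng.normal(0, 0.5, (n_hid, n_out))
# Fixed random feedback matrix: FA's stand-in for W2.T in the backward pass.
B = rng.normal(0, 0.5, (n_out, n_hid))

lr = 0.01
for step in range(5000):
    # Forward pass: tanh hidden layer, linear output.
    h = np.tanh(X @ W1)
    y_hat = h @ W2

    # Output error for a mean-squared-error loss.
    e = y_hat - Y

    # Backward pass: backprop would propagate e @ W2.T here; feedback
    # alignment uses the fixed random matrix B instead, so the hidden
    # update is initially unrelated to the true gradient and becomes
    # gradient-like only insofar as the forward weights align with B.
    delta_h = (e @ B) * (1 - h**2)

    # Parameter updates take the same form as SGD on the true gradient.
    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ delta_h / len(X)

# Gradient alignment: cosine similarity between the FA feedback
# direction (e @ B) and the true backprop direction (e @ W2.T).
fa_dir = (e @ B).ravel()
bp_dir = (e @ W2.T).ravel()
cos = fa_dir @ bp_dir / (np.linalg.norm(fa_dir) * np.linalg.norm(bp_dir) + 1e-12)
print(f"loss={np.mean(e**2):.4f}  alignment={cos:.3f}")
```

The final lines estimate the gradient alignment the abstract refers to: the cosine similarity between the random feedback direction e @ B and the direction backpropagation would use, e @ W2.T. In FA training this value typically rises from near zero as the forward weights adapt to the fixed feedback matrix.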
