
Adversarial Attacks Technology in Deep Learning Models

Journal of physics. Conference series, 2021-07, Vol.1966 (1), p.12007 [Peer Reviewed Journal]

Published under licence by IOP Publishing Ltd, 2021. This work is published under http://creativecommons.org/licenses/by/3.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. ISSN: 1742-6588; EISSN: 1742-6596; DOI: 10.1088/1742-6596/1966/1/012007

Full text available

  • Title:
    Adversarial Attacks Technology in Deep Learning Models
  • Author: Lai, Yucong ; Wang, Yifeng
  • Subjects: Computer vision ; Deep learning ; Machine learning ; Natural language processing ; Neural networks ; Perturbation ; Speech recognition
  • Is Part Of: Journal of physics. Conference series, 2021-07, Vol.1966 (1), p.12007
  • Description: Abstract: Deep learning related to computer vision, speech recognition, and natural language processing has developed rapidly in recent years. The application of these models, however, carries underlying risks. Recent studies have shown that small perturbations from adversarial examples can cause neural network models to misinterpret inputs and make incorrect judgments (see the illustrative sketch after this record). Therefore, understanding adversarial example technologies is essential for promoting the safety and robustness of neural network models. This paper summarizes current adversarial example technologies in different applications, discusses current prospects and challenges, and envisions potential future developments in related fields.
  • Publisher: Bristol: IOP Publishing
  • Language: English
  • Identifier: ISSN: 1742-6588
    EISSN: 1742-6596
    DOI: 10.1088/1742-6596/1966/1/012007
  • Source: IOP Publishing Free Content
    IOPscience (Open Access)
    GFMER Free Medical Journals
    ProQuest Central
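
As a loose illustration of the perturbation effect summarized in the abstract (this is not code from the paper itself), the sketch below applies a fast-gradient-sign-style step to a toy logistic-regression classifier. The weights, input, and epsilon value are all hypothetical and chosen only to show how a small input change can flip a prediction.

```python
# Minimal FGSM-style sketch on a hand-rolled logistic-regression "model",
# kept self-contained so it runs with only NumPy installed.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights for a 2-feature binary classifier.
w = np.array([2.0, -3.0])
b = 0.5

def predict(x):
    # Probability that x belongs to class 1.
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(x, y_true, epsilon=0.1):
    # For this model, the gradient of the cross-entropy loss w.r.t. the
    # input x is (p - y) * w; FGSM steps in the sign of that gradient.
    p = predict(x)
    grad_x = (p - y_true) * w
    return x + epsilon * np.sign(grad_x)

x = np.array([0.4, 0.1])                      # clean input, true label 1
x_adv = fgsm_perturb(x, y_true=1.0, epsilon=0.3)
print("clean prediction:", predict(x))        # ~0.73, correct side of 0.5
print("adversarial prediction:", predict(x_adv))  # ~0.38, pushed across the boundary
```

On this toy model the clean input is scored on the correct side of the decision boundary, while the perturbed input crosses it, which is the failure mode the abstract describes for larger vision, speech, and language models.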
