
MaskDGA: An Evasion Attack Against DGA Classifiers and Adversarial Defenses

IEEE Access, 2020, Vol. 8, pp. 161580-161592 [Peer Reviewed Journal]

Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020

  • Title:
    MaskDGA: An Evasion Attack Against DGA Classifiers and Adversarial Defenses
  • Author: Sidi, Lior ; Nadler, Asaf ; Shabtai, Asaf
  • Subjects: Adversarial learning ; Algorithms ; Botnet ; botnets ; Classifiers ; Command and control ; Computational modeling ; Computer architecture ; deep learning ; DGA ; Distillation ; Domain names ; Evaluation ; Machine learning ; Neural networks ; Real-time systems ; Retraining ; Robustness ; State-of-the-art reviews ; Training
  • Is Part Of: IEEE Access, 2020, Vol. 8, pp. 161580-161592
  • Description: Domain generation algorithms (DGAs) are commonly used by botnets to generate domain names that bots can use to establish communication channels with their command and control servers. Recent publications presented deep learning classifiers that detect algorithmically generated domain (AGD) names in real time with high accuracy and thus significantly reduce the effectiveness of DGAs for botnet communication. In this paper, we present MaskDGA, an evasion technique that uses adversarial learning to modify AGD names in order to evade inline DGA classifiers, without the need for the attacker to possess any knowledge about the DGA classifier's architecture or parameters. MaskDGA was evaluated on four state-of-the-art DGA classifiers and outperformed the recently proposed CharBot and DeepDGA evasion techniques. We also evaluated MaskDGA on enhanced versions of the same classifiers equipped with common adversarial defenses (distillation and adversarial retraining). While the results show that adversarial retraining has some limited effectiveness against the evasion technique, it is clear that a more resilient detection mechanism is required. We also propose an extension to MaskDGA that allows an attacker to omit a subset of the modified AGD names based on the classification results of the attacker's trained model, in order to achieve a desired evasion rate.
  • Publisher: Piscataway: IEEE
  • Language: English
  • Identifier: ISSN: 2169-3536
    EISSN: 2169-3536
    DOI: 10.1109/ACCESS.2020.3020964
    CODEN: IAECCG
  • Source: IEEE Open Access Journals
    DOAJ Directory of Open Access Journals
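
The abstract above describes DGAs as algorithms that bots use to derive rendezvous domain names deterministically, so that bot and operator can compute the same names independently. The following is a minimal illustrative sketch of that idea; the hash construction, seed, and parameters are invented for illustration and are not the algorithm studied in the paper.

```python
import hashlib

def toy_dga(seed: str, date: str, n: int = 5, length: int = 12) -> list[str]:
    """Illustrative seed-and-date DGA: derive pseudo-random domain
    names by repeatedly hashing a shared seed plus the current date.
    Any party knowing the seed can regenerate the same list."""
    domains = []
    state = f"{seed}:{date}".encode()
    for _ in range(n):
        state = hashlib.sha256(state).digest()
        # Map the first `length` hash bytes to lowercase letters.
        label = "".join(chr(ord("a") + b % 26) for b in state[:length])
        domains.append(label + ".com")
    return domains

print(toy_dga("botnet-seed", "2020-09-01"))
```

Because generation is deterministic given the seed and date, the defender who recovers the seed can precompute and block the same names; classifiers like those evaluated in the paper instead try to detect the statistically unusual character patterns such generators produce, which is what an evasion technique like MaskDGA perturbs.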
