
Compositionality Decomposed: How do Neural Networks Generalise?

The Journal of artificial intelligence research, 2020-04, Vol.67, p.757-795 [Peer Reviewed Journal]


  • Title:
    Compositionality Decomposed: How do Neural Networks Generalise?
  • Author: Hupkes, Dieuwke ; Dankers, Verna ; Mul, Mathijs ; Bruni, Elia
  • Subjects: Artificial intelligence ; Convolution ; Empirical analysis ; Neural networks ; Training
  • Is Part Of: The Journal of artificial intelligence research, 2020-04, Vol.67, p.757-795
  • Description: Despite a multitude of empirical studies, little consensus exists on whether neural networks are able to generalise compositionally, a controversy that, in part, stems from a lack of agreement about what it means for a neural model to be compositional. As a response to this controversy, we present a set of tests that provide a bridge between, on the one hand, the vast amount of linguistic and philosophical theory about compositionality of language and, on the other, the successful neural models of language. We collect different interpretations of compositionality and translate them into five theoretically grounded tests for models that are formulated on a task-independent level. In particular, we provide tests to investigate (i) if models systematically recombine known parts and rules, (ii) if models can extend their predictions beyond the length they have seen in the training data, (iii) if models' composition operations are local or global, (iv) if models' predictions are robust to synonym substitutions, and (v) if models favour rules or exceptions during training. To demonstrate the usefulness of this evaluation paradigm, we instantiate these five tests on a highly compositional data set, which we dub PCFG SET, and apply the resulting tests to three popular sequence-to-sequence models: a recurrent, a convolution-based, and a transformer model. We provide an in-depth analysis of the results, which uncovers the strengths and weaknesses of these three architectures and points to potential areas of improvement.
  • Publisher: San Francisco: AI Access Foundation
  • Language: English
  • Identifier: ISSN: 1076-9757
    EISSN: 1943-5037
    DOI: 10.1613/jair.1.11674
  • Source: DOAJ : Directory of Open Access Journals
    Open Access: Freely Accessible Journals by multiple vendors
    Alma/SFX Local Collection
    ProQuest Central
    American Association for Artificial Intelligence publications
