Westphal-Fitch, G., Giustolisi, B., Cecchetto, C., Scott Martin, J., Fitch, W. (2018). Artificial grammar learning capabilities in an abstract visual task match requirements for linguistic syntax. FRONTIERS IN PSYCHOLOGY, 9(JUL) [10.3389/fpsyg.2018.01210].

Artificial grammar learning capabilities in an abstract visual task match requirements for linguistic syntax

Westphal-Fitch, G.; Giustolisi, B.; Cecchetto, C.; Scott Martin, J.; Fitch, W.
2018

Abstract

Whether pattern-parsing mechanisms are specific to language or apply across multiple cognitive domains remains unresolved. Formal language theory provides a mathematical framework for classifying pattern-generating rule sets (or "grammars") according to complexity. This framework applies to patterns at any level of complexity, stretching from simple sequences, to highly complex tree-like or net-like structures, to any Turing-computable set of strings. Here, we explored human pattern-processing capabilities in the visual domain by generating abstract visual sequences made up of abstract tiles differing in form and color. We constructed different sets of sequences, using artificial "grammars" (rule sets) at three key complexity levels. Because human linguistic syntax is classed as "mildly context-sensitive," we specifically included a visual grammar at this complexity level. Acquisition of these three grammars was tested in an artificial grammar-learning paradigm: after exposure to a set of well-formed strings, participants were asked to discriminate novel grammatical patterns from non-grammatical patterns. Participants successfully acquired all three grammars after only minutes of exposure, correctly generalizing to novel stimuli and to novel stimulus lengths. A Bayesian analysis excluded multiple alternative hypotheses and showed that the success in rule acquisition applies both at the group level and for most participants analyzed individually. These experimental results demonstrate rapid pattern learning for abstract visual patterns, extending to the mildly context-sensitive level characterizing language. We suggest that a formal equivalence of processing at the mildly context-sensitive level in the visual and linguistic domains implies that cognitive mechanisms with the computational power to process linguistic syntax are not specific to the domain of language, but extend to abstract visual patterns with no meaning.
Journal article - Scientific article
Artificial grammar learning; Formal language theory; Long-distance dependencies; Mildly context sensitive grammars; Working memory; Psychology (all)
Language: English
Year: 2018
Volume: 9
Issue: JUL
Article number: 1210
Access: open
Files in this product:
File: 10281-209124.pdf
Access: open access
Attachment type: Publisher's Version (Version of Record, VoR)
Size: 1.67 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/209124
Citations
  • Scopus: 9
  • Web of Science (ISI): 8