Can detection of extraneous visual signals reveal the syntactic structure of sign language?

Trettenbrein, P., Maran, M., Pendzich, N.-K., Pohl, J., Finkbeiner, T. A., Friederici, A. D., et al. (2022). Can detection of extraneous visual signals reveal the syntactic structure of sign language? Talk presented at Workshop on “Visual Communication: New Theoretical and Empirical Developments”, Annual Conference of the German Linguistic Society (DGfS). Tübingen, Germany. 2022-02-23 - 2022-02-25.

Creators:
Trettenbrein, Patrick (1, 2), Author
Maran, Matteo (1, 2), Author
Pendzich, Nina-Kristin (3), Author
Pohl, Jan (1, 4), Author
Finkbeiner, Thomas A. (3), Author
Friederici, Angela D. (1), Author
Zaccarella, Emiliano (1), Author
Affiliations:
(1) Department Neuropsychology, MPI for Human Cognitive and Brain Sciences, Max Planck Society, Leipzig, DE
(2) International Max Planck Research School on Neuroscience of Communication: Function, Structure, and Plasticity, MPI for Human Cognitive and Brain Sciences, Max Planck Society, Leipzig, DE
(3) SignLab, Georg-August University Göttingen, Göttingen, DE
(4) University of Potsdam, Potsdam, DE

Content

Free keywords: Sign language; Syntax; Constituent structure; Psycholinguistics
Abstract:

Background

The ability to combine individual lexical items into phrases and sentences is at the core of the human capacity for language (Friederici et al., 2017). Linguistic research indicates that the world’s sign languages exhibit complex hierarchical organisation of utterances just like spoken languages (e.g., Cecchetto, 2017), but the role of hierarchical organisation during online sign language processing is poorly understood. The present study constitutes the first adaptation of the classical psycholinguistic “click” paradigm (e.g., Holmes & Forster, 1970) from the auditory-oral to the visuo-spatial modality. Using short flashes inserted into videos of signed sentences as analogues to auditory clicks, we seek to determine whether deaf signers, like hearing speakers, automatically impose constituent structure onto sequences of signs during language comprehension.

Methods

The paradigm is implemented as an automated reaction-time experiment which can comfortably be run by deaf participants from home via their web browser. Instructions are given in German Sign Language (DGS) in the form of pre-recorded videos. In the experiment, participants watch different types of complex DGS sentences such as (1).

(1) IF POSS1 SISTER WITH POSS3 CHILD++ TOMORROW MORNING 3VISIT1 / IX1 HAVE-TO HOUSE CLEAN

During the presentation of sentences, a white flash (duration: 80 ms) may occur as an overlay to the stimulus clip at different positions in the sentence, and participants have to respond to this cue as fast as possible via button press. After every trial, participants have to answer a binary comprehension question (Figure 1). The flash can occur in the first or second half of the sentence. Importantly, the exact point in time at which the flash occurs differs with regard to the syntactic structure of a sentence: a flash may occur after a major break in the constituent structure separating two clauses, as indicated by “/” in (1), after a minor break, or not at a break. This yields a 2x3 within-subject design with the factors Position (first vs. second half) and Structure (major vs. minor vs. no break). In addition to the six experimental conditions, filler trials (22%) in which no flash occurs were also included.

The stimuli were designed and recorded with a deaf native signer. All clips were annotated using ELAN (Lausberg & Sloetjes, 2009) and flashes were inserted using an automated video-editing procedure (see the sketch below). In addition, we performed automated motion-tracking on the stimuli using OpenPose (Cao et al., 2019) and extracted motion information using OpenPoseR (Trettenbrein & Zaccarella, 2021) to control our stimuli for a possible correlation between articulatory pauses and the probed constituent structure.
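The automated video-editing procedure is not described in detail in this abstract; the following is a minimal sketch of how an 80 ms white-flash overlay could be inserted at a condition-specific onset, assuming frame-wise processing with OpenCV. The function and parameter names are hypothetical, not the authors' actual pipeline.

```python
# Hypothetical sketch: overlay a white full-frame flash of ~80 ms
# starting at a given onset time in a stimulus clip.
import cv2
import numpy as np

def insert_flash(in_path, out_path, onset_s, duration_s=0.08):
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (w, h))
    start = int(round(onset_s * fps))                    # first flash frame
    end = start + max(1, int(round(duration_s * fps)))   # ~80 ms of frames
    white = np.full((h, w, 3), 255, dtype=np.uint8)      # pure-white overlay
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if start <= i < end:
            frame = white.copy()  # replace frame content with the flash
        writer.write(frame)
        i += 1
    cap.release()
    writer.release()
```

Because onset is specified in seconds and converted to frames, the same routine can place the flash at a major break, a minor break, or within a constituent simply by passing a different onset per condition.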

Discussion

At the time of writing, data collection is still ongoing, so we limit our discussion here to the effects we expect to observe. Assuming that the placement of flashes at different positions in the constituent structure of sentences will impact the time that participants take to respond, we expect to observe a main effect of Structure. In particular, faster RTs are expected for detecting a flash at major (no constituent interrupted) and minor (small number of constituents interrupted) boundaries, compared to the no-boundary condition (large number of constituents interrupted). This would provide the first psycholinguistic evidence for the relevance of constituent structure during sign language comprehension, extending previous findings from spoken language. We do not expect to observe a main effect of Position: the inclusion of filler trials without any flashes should counteract the increased probability of requiring a response towards the second half of the sentence, which was inherent to the design of earlier auditory studies (Holmes & Forster, 1970). In sum, the expected effect of Structure would provide evidence for the modality-independence of the cognitive mechanisms underlying syntactic processing.
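The abstract does not specify the statistical model. As a minimal sketch, the predicted main effect of Structure in a 2x3 within-subject design could be tested with a linear mixed-effects model over long-format trial data; the file name and column names below are assumptions for illustration only.

```python
# Hypothetical sketch of an RT analysis for the 2x3 design:
# fixed effects Position (first/second half) and Structure
# (major/minor/no break), random intercepts by participant.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long format: one row per flash trial.
df = pd.read_csv("flash_rts.csv")  # columns: subject, rt, position, structure

model = smf.mixedlm("rt ~ C(position) * C(structure)",
                    data=df, groups=df["subject"])
result = model.fit()
print(result.summary())  # a Structure effect without a Position effect
                         # would match the predictions above
```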

Details

Language(s): eng - English
Dates: 2022-02-24
Publication Status: Not specified

Event

Title: Workshop on “Visual Communication: New Theoretical and Empirical Developments”, Annual Conference of the German Linguistic Society (DGfS)
Place of Event: Tübingen, Germany
Start-/End Date: 2022-02-23 - 2022-02-25
