
Journal Article

Automated extraction of speech and turn-taking parameters in autism allows for diagnostic classification using a multivariable prediction model

MPS-Authors
Koutsouleris, N.
Max Planck Fellow Group Precision Psychiatry, Max Planck Institute of Psychiatry, Max Planck Society

Citation

Plank, I. S., Koehler, J. C., Nelson, A. M., Koutsouleris, N., & Falter-Wagner, C. M. (2023). Automated extraction of speech and turn-taking parameters in autism allows for diagnostic classification using a multivariable prediction model. Frontiers in Psychiatry, 14: 1257569. doi:10.3389/fpsyt.2023.1257569.


Cite as: https://hdl.handle.net/21.11116/0000-000E-076D-1
Abstract
Autism spectrum disorder (ASD) is diagnosed on the basis of speech and communication differences, amongst other symptoms. Since conversations are essential for building connections with others, it is important to understand the exact nature of the differences between autistic and non-autistic verbal behaviour and to evaluate the diagnostic potential of these differences. In this study, we recorded dyadic conversations and automatically extracted speech and interactional turn-taking features from 54 non-autistic and 26 autistic participants. The extracted speech and turn-taking parameters showed high potential as a diagnostic marker: a linear support vector machine predicted the dyad type with 76.2% balanced accuracy (sensitivity: 73.8%, specificity: 78.6%), suggesting that digitally assisted diagnostics could significantly enhance the current clinical diagnostic process owing to their objectivity and scalability. In group comparisons at the individual and dyadic levels, we found that autistic interaction partners spoke more slowly and more monotonously than non-autistic interaction partners, and that mixed dyads, consisting of one autistic and one non-autistic participant, had increased periods of silence while the intensity (i.e. loudness) of their speech was more synchronous.
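
Note on the reported figures: balanced accuracy is the mean of sensitivity and specificity, and indeed (73.8% + 78.6%) / 2 = 76.2%. As an illustration only, the following minimal Python sketch shows how a linear support vector machine could be evaluated with balanced accuracy in a scikit-learn workflow; it is not the authors' pipeline, and the feature matrix, labels, and cross-validation scheme are hypothetical placeholders.

# Minimal sketch, assuming a scikit-learn workflow; not the authors' code.
# Rows of X are dyads; columns stand in for hypothetical features such as
# speech rate, pitch variability, silence duration, and intensity synchrony.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 4))      # placeholder feature matrix (80 dyads)
y = rng.integers(0, 2, size=80)   # 0 = non-autistic dyad, 1 = mixed dyad

# Linear SVM with feature standardisation, scored by balanced accuracy,
# i.e. the mean of sensitivity and specificity.
clf = make_pipeline(StandardScaler(), LinearSVC())
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy")
print(f"mean balanced accuracy: {scores.mean():.3f}")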