Do proficiency, context, event type, and language typology shape speech–gesture integration?
This project examines how bilingual speakers integrate speech and gesture across languages, focusing on Persian (L1)–English (L2) bilinguals at varying proficiency levels compared with English monolinguals.
Our findings show that gesture remains sensitive to first-language influence even at high proficiency, that cognitively demanding contexts trigger reversion to L1 gesture patterns, and that bilingual–monolingual differences emerge primarily in spontaneous motion events.
Ongoing work explores patterns of silent gesture across languages and contexts.
See the articles below:
Ghobadi, A., & Özçalışkan, Ş. (2025). Patterns of speech and gesture production in the communications of bilinguals and monolinguals: Do speaker proficiency and discourse context matter? Language and Cognition. In press.
Ghobadi, A., & Özçalışkan, Ş. (2025). Navigating through space in speech and gesture: Effects of speaker proficiency, language type, and event type. Brain and Language (Special Issue). In press.
Do proficiency and task complexity shape the influence of writing direction on gesture production?
This project examines how writing system directionality influences gesture production in a second language.
Comparing a left-to-right (English) and a right-to-left (Persian) writing system, we show that writing direction shapes L2 gesture patterns, with effects modulated by speaker proficiency and task complexity. These findings highlight the interplay among literacy, cognition, and multimodal communication.
Do diagnosis and language type shape multimodal communication across languages?
This project examines cross-linguistic patterns of speech and gesture production in autistic and neurotypical children across English, French, Spanish, and German. Using a standardized narrative task, we show that group differences emerge in the complexity of speech and gesture, but not in their amount or diversity, and that these patterns hold consistently across languages.
Ongoing work explores how bilingual experience further shapes multimodal communication in autism.
How can users’ linguistic and cognitive backgrounds inform personalized multimodal AI experiences?
This project applies insights from speech–gesture research and linguistic typology to the design of adaptive multimodal AI systems.
We examine how users’ linguistic backgrounds shape conceptual representations and explore how these differences can inform more personalized human–AI interactions.