Project 1: Multimodal Communication in Bilinguals Across Proficiency Levels
Is bilingualism best viewed as a category or as a continuum of proficiency and experience? Do bilinguals’ production patterns reflect the immediate communicative context, the type of event being described, or the structure of their first language? And as proficiency and immersion increase, do bilinguals converge toward monolingual-like patterns—not only in speech, but also in gesture?
To address these questions, we examine Persian–English, Chinese–English, and Spanish–English bilinguals across tasks, contexts, event types, proficiency levels, and modalities, focusing on how speech and gesture are coordinated across the proficiency spectrum.
Ghobadi, A., & Özçalışkan, Ş. (2026). Patterns of speech and gesture production in the communications of bilinguals and monolinguals: Do speakers’ proficiency and discourse context matter? Language and Cognition, 18, e19. doi:10.1017/langcog.2026.10071
Project 2: Cross-Linguistic Patterns of Multimodal Communication in Autism
Do autistic children differ from their neurotypical peers in how they produce and coordinate speech and gesture? If differences emerge, are they quantitative, qualitative, or both? Are these patterns driven primarily by diagnosis, or do they also reflect language structure and cross-linguistic variation? We also ask whether Theory of Mind contributes to gesture production, and whether gesture serves a compensatory role when speech is less explicit.
To address these questions, we examine autistic and neurotypical children speaking English, French, Spanish, and German. By comparing multimodal communication across languages, we aim to disentangle the roles of diagnosis, linguistic structure, and cognitive abilities in shaping communicative expression.
Project 3: Is AI Context-Sensitive?
Can large language models adapt their responses to subtle shifts in communicative context, task demands, and user intent? Or do they rely primarily on surface-level statistical patterns? This project investigates whether AI systems demonstrate genuine context sensitivity—adjusting meaning, tone, and structure appropriately across conversational, academic, and instructional settings.
We systematically manipulate context (e.g., audience, task framing, discourse history, and pragmatic constraints) to examine how AI models interpret and generate responses. By comparing AI outputs to human performance, we aim to identify where models succeed, where they fail, and what this reveals about computational versus human pragmatic competence.
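The factorial manipulation described above can be sketched in a few lines. This is a minimal illustration, not the project's actual pipeline: the condition names (`AUDIENCES`, `FRAMINGS`), the prompt template, and the stubbed model function are all assumptions chosen to show how the same underlying question can be crossed with every communicative context before comparing outputs.

```python
from itertools import product

# Hypothetical condition sets; the real study manipulates audience, task
# framing, discourse history, and pragmatic constraints.
AUDIENCES = ["expert", "layperson"]
FRAMINGS = ["conversational", "academic", "instructional"]

def build_prompt(audience: str, framing: str, question: str) -> str:
    """Wrap the same underlying question in a specific communicative context."""
    return (f"Context: You are answering for a {audience} audience "
            f"in a {framing} setting.\nQuestion: {question}")

def run_condition_grid(question: str, model=lambda p: p.upper()):
    """Query a model (stubbed here with a placeholder function) under every
    audience x framing condition, keeping the question constant."""
    results = {}
    for audience, framing in product(AUDIENCES, FRAMINGS):
        prompt = build_prompt(audience, framing, question)
        results[(audience, framing)] = model(prompt)
    return results

grid = run_condition_grid("Why does ice float on water?")
print(len(grid))  # 2 audiences x 3 framings = 6 conditions
```

Holding the question fixed while crossing the context factors is what lets differences in the outputs be attributed to context sensitivity rather than content, and the same grid can then be scored against human responses collected under matched conditions.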