Research

I am a computational psycholinguist and experimental linguist, with research interests centered on the prosody-syntax interface.

My primary research focuses on how acoustic and prosodic cues help listeners disambiguate syntactic structures and semantic interpretations. I investigate how these cues influence the acceptability and processing of ambiguous or complex syntactic constructions.

More specifically, I’m interested in:

  • Punctuation
  • Korean ditransitive structures
  • Attachment ambiguity in English
  • Restrictive vs. non-restrictive clauses

Extending this research to AI, I examine how modern systems, particularly text-to-speech (TTS) models, often fail to produce reliable prosodic cues. I study how naïve human listeners perceive such synthetic speech: what they find acceptable or unnatural, and where and why those perceptions shift. To address these issues, I also work on developing pipelines and datasets aimed at improving prosody in TTS systems.

My broader aim is to understand how AI models diverge from or converge with human performance when processing ambiguous linguistic input, and how humans in turn perceive AI-generated language compared to naturally produced speech.

By applying linguistic insights, I seek to deepen our understanding of both human language processing and the limitations of current language technologies.