Haoran Zhao
I am a master's student in Computational Linguistics at the University of Washington.
Previously, I was an undergraduate at Drexel University, where I was fortunate to be advised by Jake Williams. Before Drexel, I spent my first two wonderful years at Lanzhou University in China.
I also spent time at the Computation & Cognition Lab and the Causality in Cognition Lab at Stanford, where I worked with Noah Goodman and Tobias Gerstenberg and met many great people who shaped my current research focus.
I am interested in social communication, pragmatics, and AI. I view language primarily as a tool for communication.
On the human cognition side, pragmatic language use depends not only on the conversational context but also on the social dynamics between interlocutors. Moreover, we communicate not merely to convey meaning but also to express goals and intentions, enabling flexible coordination and cooperation in social interaction. My research focuses on how these factors shape social language use and on the cognitive mechanisms that support it.
On the machine side, my research focuses on human-AI communication: how to achieve pragmatic alignment between models and humans for effective communication, and how their language use adapts and co-evolves over time.
I study these problems through both computational and behavioral methods, and I am currently exploring these questions with Max Kleiman-Weiner and Robert Hawkins.
Email /
CV /
Google Scholar /
Twitter (X) /
Github
News
- September 2025: Attending COSMOS @ Tokyo, Japan!
- August 2025: One paper accepted to EMNLP 2025!
- August 2025: Attending Brains, Minds, and Machines summer course @ Woods Hole, MA!
- July 2025: Attending my first CogSci @ San Francisco!
- July 2025: Ran my first full marathon @ San Francisco!
- April 2025: Two papers accepted to CogSci 2025!
Research
Comparing Human and Machine Communication Patterns through a Tangram Game
Haoran Zhao, Colin Conwell
NeurIPS Data on the Brain & Mind Workshop, 2025
Comparing human and LLM politeness strategies in free production
Haoran Zhao, Robert Hawkins
EMNLP, 2025   (Oral Presentation)
Follow-up to our CogSci 2025 paper on polite speech generation
Polite Speech Generation in Humans and Language Models
Haoran Zhao, Robert Hawkins
CogSci, 2025
Non-literal Understanding of Number Words by Language Models
Polina Tsvilodub*, Kanishk Gandhi*, Haoran Zhao*, Jan-Philipp Fränken, Michael Franke, Noah Goodman
CogSci, 2025 (* denotes equal contribution)
arXiv
Large Language Models Are Not Inverse Thinkers Quite Yet
Haoran Zhao
ICML Workshop on LLMs and Cognition, 2024
Paper Link
Bit Cipher -- A Simple yet Powerful Word Representation System
Haoran Zhao, Jake Ryland Williams
arXiv, 2023
arXiv
Explicit Foundation Model Optimization with Self-Attentive Feed-Forward Neural Units
Jake Ryland Williams, Haoran Zhao
arXiv, 2023
arXiv
Reducing the Need for Backpropagation and Discovering Better Optima With Explicit Optimizations of Neural Networks
Jake Ryland Williams, Haoran Zhao
arXiv, 2023
arXiv
Optimizing Named Entity Recognition for Improving Logical Formulae Abstraction from Technical Requirements Documents
Alexander Perko, Haoran Zhao, Franz Wotawa
The 10th International Conference on Dependable Systems and Their Applications (DSA-2023), 2023
Prompt Design and Answer Processing for Knowledge Base Construction from Pre-trained Language Models (KBC-LM)
Xiao Fang, Alex Kalinowski, Haoran Zhao, Ziao You, Yuhao Zhang, Yuan An
Challenge @ 21st International Semantic Web Conference (ISWC 2022) CEUR Workshop Proceedings, 2022
Last Updated: April 28th 2025
Adapted from: GitHub