Haoran Zhao
I am a master's student in Computational Linguistics at the University of Washington.
Previously, I was an undergraduate at Drexel University, where I was fortunate to be advised by Jake Williams. Before Drexel, I spent two wonderful years at Lanzhou University in China, where my college life began.
I also spent some wonderful time at the Computation & Cognition Lab and the Causality in Cognition Lab at Stanford, where I worked with Noah Goodman and Tobias Gerstenberg and got to know many amazing people who shaped my current research focus.
I am interested in social language use, communication, and AI (social cognition, broadly). I view language primarily as a tool for communication. Some questions I am currently thinking about: How do humans interpret and produce language in social contexts (pragmatics)? How does language interact with our social cognitive abilities (for example, how does language help us reason)? I study these questions with both computational and behavioral methods, currently in collaboration with Max Kleiman-Weiner and Robert Hawkins.
Email /
CV /
Google Scholar /
Twitter (X) /
Github
News
- August 2025: Attending the Brains, Minds, and Machines summer course @ Woods Hole, MA!
- July 2025: Attending my first CogSci in San Francisco!
- July 2025: Ran my first ever full marathon in San Francisco!
- May 2025: Got accepted to the COSMOS summer school in Tokyo this September!
- April 2025: Two papers were accepted to CogSci 2025!
- April 2025: Got accepted to the MIT CBMM summer school this August!
Research
Comparing human and LLM politeness strategies in free production
Haoran Zhao, Robert Hawkins
arXiv, 2025 (follow-up to our CogSci 2025 paper on polite speech generation)
Polite Speech Generation in Humans and Language Models
Haoran Zhao, Robert Hawkins
CogSci, 2025
Non-literal Understanding of Number Words by Language Models
Polina Tsvilodub*, Kanishk Gandhi*, Haoran Zhao*, Jan-Philipp Fränken, Michael Franke, Noah Goodman
CogSci, 2025 (* denotes equal contribution)
arXiv
Large Language Models are Not Inverse Thinkers Quite Yet
Haoran Zhao
ICML Workshop on LLMs and Cognition, 2024
Paper Link
Bit Cipher -- A Simple yet Powerful Word Representation System
Haoran Zhao, Jake Ryland Williams
arXiv, 2023
arXiv
Explicit Foundation Model Optimization with Self-Attentive Feed-Forward Neural Units
Jake Ryland Williams, Haoran Zhao
arXiv, 2023
arXiv
Reducing the Need for Backpropagation and Discovering Better Optima With Explicit Optimizations of Neural Networks
Jake Ryland Williams, Haoran Zhao
arXiv, 2023
arXiv
Optimizing Named Entity Recognition for Improving Logical Formulae Abstraction from Technical Requirements Documents
Alexander Perko, Haoran Zhao, Franz Wotawa
The 10th International Conference on Dependable Systems and Their Applications (DSA-2023), 2023
Prompt Design and Answer Processing for Knowledge Base Construction from Pre-trained Language Models (KBC-LM)
Xiao Fang, Alex Kalinowski, Haoran Zhao, Ziao You, Yuhao Zhang, Yuan An
Challenge @ 21st International Semantic Web Conference (ISWC 2022) CEUR Workshop Proceedings, 2022
Last Updated: April 28, 2025
Adapted from: GitHub