Designing Language Models to Think Like Humans

02 July 2024, 14:00
Zoom & Room 206

Dr. Chen Shani



While language models (LMs) show impressive text-manipulation capabilities, they lack commonsense and reasoning abilities and are known to be brittle. In this talk, I will suggest a different LM design paradigm, inspired by how humans understand language. I will present two papers, both shedding light on human-inspired NLP architectures aimed at capturing the meaning beyond words. The first paper addresses the lack of commonsense and reasoning abilities by proposing a paradigm shift in language understanding, drawing inspiration from embodied cognitive linguistics (ECL). In this position paper, we propose a new architecture that treats language as inherently executable, grounded in embodied interaction, and driven by metaphoric reasoning. The second paper shows that LMs are brittle and far from human performance in their concept-understanding and abstraction capabilities. We argue this is due to their token-based training objectives, and we implement a concept-aware post-processing manipulation, showing that it matches human intuition better. We then pave the way for more concept-aware training paradigms.
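As a concrete illustration of what a concept-aware post-processing step could look like, the sketch below pools a model's next-token probabilities over clusters of tokens that express the same concept, so candidates are ranked at the concept level rather than the surface-token level. The cluster mapping, names, and probability values here are illustrative assumptions, not the exact method from the paper.

# A minimal sketch of concept-aware post-processing: aggregate an LM's
# next-token probabilities over hand-specified "concept" clusters.
# The clusters and probabilities below are hypothetical, for illustration only.

from collections import defaultdict

def concept_scores(token_probs: dict[str, float],
                   concept_of: dict[str, str]) -> dict[str, float]:
    """Sum token probabilities over the concept each token maps to."""
    scores: dict[str, float] = defaultdict(float)
    for token, p in token_probs.items():
        # Tokens outside any cluster count as their own singleton concept.
        scores[concept_of.get(token, token)] += p
    return dict(scores)

# Toy next-token distribution from some LM (illustrative values only).
token_probs = {"dog": 0.30, "puppy": 0.25, "hound": 0.10, "cat": 0.35}

# Hypothetical mapping of surface tokens to a shared concept.
concept_of = {"dog": "DOG", "puppy": "DOG", "hound": "DOG", "cat": "CAT"}

print(concept_scores(token_probs, concept_of))
# DOG ≈ 0.65 beats CAT = 0.35, even though "cat" is the single most
# probable surface token.

In this toy distribution, "cat" is the most probable individual token, yet the DOG concept wins once its synonyms are pooled, which is the kind of gap between token-level and concept-level behavior the talk addresses.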


Dr. Chen Shani is a postdoctoral researcher in Stanford's NLP Group, working with Prof. Dan Jurafsky. She completed her Ph.D. at the Hebrew University under the guidance of Prof. Dafna Shahaf and previously worked at Amazon Research. Her research lies at the intersection of human cognition and NLP, applying insights from how humans think to improve NLP systems.

