Ludwig Maximilian University of Munich, Germany
Date/Time: December 15, 2022, 18:30
Task descriptions are ubiquitous in human learning. They are usually accompanied by a few examples, but little human learning is based on examples alone. In contrast, the typical learning setup for NLP tasks lacks task descriptions and is supervised with hundreds of examples, and often many more.
I will first give an update on our work on Pattern-Exploiting Training (PET). PET mimics human learning in that it leverages task descriptions in few-shot settings by exploiting the natural language understanding (NLU) capabilities of pre-trained language models (PLMs). I will show that PET is particularly promising in real-world few-shot settings.
The second part of the talk examines to what extent current PLMs exhibit true NLU. I will introduce CoDA21, a new benchmark that we argue tests for true NLU. Finally, I will review our recent work on neurosymbolic models and their potential for human-level NLU.
Bio: Hinrich Schütze is the Chair of Computational Linguistics and co-director of the Center for Language and Information Processing at LMU Munich. Ever since starting his PhD in the early 1990s, Hinrich’s research interests have been at the interface of linguistics, cognitive science, neural networks and computer science. Recent examples include learning with natural language instructions, multilingual representation learning for low-resource languages, computational morphology and neurosymbolic approaches.