Description
Today's large language models excel at generating eloquent text, but fall short when it comes to human-like logic and pragmatic reasoning. This is largely attributable to the fundamental difference between how large language models are constructed and how human languages are acquired. Humans acquire their native languages by actively taking part in communicative interactions grounded in their environment, whereas large language models take purely textual data as input. I will show that models learnt under these conditions build up knowledge of a fundamentally different kind, and argue that simulating the conditions under which human languages emerge and evolve will be essential to overcoming the limitations of current NLP technologies.

Period | 9 Nov 2023 |
---|---|
Event title | BNAIC/Benelearn 2023: JOINT INTERNATIONAL SCIENTIFIC CONFERENCES ON AI AND MACHINE LEARNING |
Event type | Conference |
Location | Delft, Netherlands |
Degree of recognition | International |
Related content

Projects
- Meaning and understanding in human-centric AI (Project: Research)