Variability in communication contexts determines the convexity of semantic category systems emerging in neural networks

Jul 1, 2024
Vlad Nedelcu, Daniel Lassiter, Kenny Smith
Abstract
Artificial neural networks trained with deep-learning methods to solve a simple reference game by optimizing a task-specific utility develop efficient semantic categorization systems that trade off complexity against informativeness, much as the category systems of human languages do. But what kinds of structure in the semantic space give rise to efficient categories, and how are these structures shaped by the contexts of communication? We propose a neural network model that moves beyond the minimal dyadic setup and show that the emergence of convexity, a property of semantic systems that facilitates this efficiency, depends on the amount of variability in communication contexts across partners. We use an input-representation method based on compositional vector embeddings that achieves higher communication success than standard non-compositional representations and strikes a better balance between preserving the structure of the semantic space and optimizing utility.
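A minimal sketch of the contrast the abstract draws between compositional and non-compositional input representations, assuming a simple attribute-bundle stimulus space (the feature names, dimensions, and embedding scheme here are illustrative assumptions, not the paper's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature inventory: each object is a bundle of attribute
# values, e.g. (color, shape). Purely illustrative, not the paper's stimuli.
features = ["red", "blue", "green", "circle", "square", "triangle"]
dim = 8
feature_vecs = {f: rng.normal(size=dim) for f in features}

def compositional_embedding(obj):
    """Embed an object as the sum of its attribute-value vectors.

    Objects sharing attributes get nearby embeddings, so the input
    representation preserves the similarity structure of the semantic space.
    """
    return sum(feature_vecs[f] for f in obj)

def one_hot_embedding(obj, inventory):
    """Non-compositional baseline: one arbitrary index per whole object,
    discarding all similarity structure between objects."""
    vec = np.zeros(len(inventory))
    vec[inventory.index(obj)] = 1.0
    return vec

objects = [(c, s) for c in ["red", "blue", "green"]
                  for s in ["circle", "square", "triangle"]]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Objects sharing an attribute are more similar under the compositional
# scheme; under the one-hot scheme all distinct objects look equally alien.
a, b = ("red", "circle"), ("red", "square")
print(cosine(compositional_embedding(a), compositional_embedding(b)))      # high
print(cosine(one_hot_embedding(a, objects), one_hot_embedding(b, objects)))  # 0.0
```

Because the compositional inputs already encode which referents resemble which, a sender/receiver pair trained on them can exploit that structure when carving the space into categories, which is one intuition for why such representations help balance structure preservation against utility.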
Type: Publication
Publication: Proceedings of the Annual Meeting of the Cognitive Science Society, 46