
Record
  Next state prediction gives rise to entangled, yet compositional representations of objects

Saanum, T., Schulze Buschoff, L., Dayan, P., & Schulz, E. (submitted). Next state prediction gives rise to entangled, yet compositional representations of objects.


External References

External reference:
https://arxiv.org/pdf/2410.04940 (any fulltext)
Description:
-
OA status:
Not specified

Urheber

einblenden:
ausblenden:
 Urheber:
Saanum, T1, Autor           
Schulze Buschoff, LM, Autor
Dayan, P1, Autor                 
Schulz, E, Autor                 
Affiliations:
1Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_3017468              

Content

Keywords: -
Abstract: Compositional representations are thought to enable humans to generalize across combinatorially vast state spaces. Models with learnable object slots, which encode information about objects in separate latent codes, have shown promise for this type of generalization but rely on strong architectural priors. Models with distributed representations, on the other hand, use overlapping, potentially entangled neural codes, and their ability to support compositional generalization remains underexplored. In this paper we examine whether distributed models can develop linearly separable representations of objects, like slotted models, through unsupervised training on videos of object interactions. We show that, surprisingly, models with distributed representations often match or outperform models with object slots in downstream prediction tasks. Furthermore, we find that linearly separable object representations can emerge without object-centric priors, with auxiliary objectives like next-state prediction playing a key role. Finally, we observe that distributed models' object representations are never fully disentangled, even if they are linearly separable: multiple objects can be encoded through partially overlapping neural populations while still being highly separable with a linear classifier. We hypothesize that maintaining partially shared codes enables distributed models to better compress object dynamics, potentially enhancing generalization.
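The abstract's central measurement, decoding object identity from a distributed code with a linear classifier (a linear probe), can be illustrated in miniature. The sketch below is not the paper's method; it uses hypothetical synthetic codes in which object identities are mixed through a dense random projection (overlapping, "entangled" populations) yet remain linearly decodable by a least-squares readout:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 object identities embedded in a 32-d distributed code.
# A dense mixing matrix makes the per-object codes overlap (entangled),
# yet identity stays linearly decodable -- the property the abstract describes.
n_objects, dim, n_samples = 4, 32, 500
mixing = rng.normal(size=(dim, n_objects))           # overlapping neural populations
labels = rng.integers(0, n_objects, size=n_samples)  # which object is present
onehot = np.eye(n_objects)[labels]
codes = onehot @ mixing.T + 0.1 * rng.normal(size=(n_samples, dim))

# Linear probe: least-squares readout from codes back to one-hot identities.
readout, *_ = np.linalg.lstsq(codes, onehot, rcond=None)
pred = (codes @ readout).argmax(axis=1)
accuracy = (pred == labels).mean()
print(f"linear probe accuracy: {accuracy:.2f}")
```

High probe accuracy despite the dense (non-slot-wise) mixing mirrors the paper's observation that separability does not require disentangled, object-centric codes.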

Details

Language(s):
Date: 2024-10
Publication status: Submitted
Pages: -
Place, Publisher, Edition: -
Table of Contents: -
Review type: -
Identifiers: DOI: 10.48550/arXiv.2410.04940
Degree type: -
