Free keywords:
-
Abstract:
Whether listening to a piece of music, learning a new language, or solving a mathematical equation, people often acquire abstract notions such as motifs and variables — manifested in musical themes, grammatical categories, or mathematical symbols. How do we create abstract representations of sequences? Are these abstract representations useful for memory recall? We propose that, in addition to learning transition probabilities, chunking, and tracking ordinal positions, humans also use abstractions to arrive at efficient sequence representations. We propose and study two abstraction categories: projectional motifs and variable motifs. Projectional motifs capture a common theme underlying distinct sequence instances. Variable motifs define symbols that manifest in varying instances. We show that both motif categories help a model reduce sequence representation complexity by encoding sequences in an abstract space, enabling the model to learn more efficiently and to transfer to novel sequences. In two sequence recall experiments, we train subjects to remember sequences containing projectional and variable motifs, respectively, and examine whether motif training benefits the recall of unseen novel sequences sharing the same motif. Our results suggest that training on variable and projectional motifs improves recall accuracy, specifically on transfer lists but not on randomly created control lists, relative to independent control groups. Our study suggests that humans construct efficient sequential memory representations according to the two types of abstraction we propose, and it shows that creating these abstractions benefits learning and out-of-distribution transfer. Our study paves the way for a deeper understanding of human abstraction learning and generalization.