
Released

Conference Paper

Modular Growth of Hierarchical Networks: Efficient, General, and Robust Curriculum Learning

MPS-Authors

Hamidi, M (/persons/resource/persons289017)
Institutional Guests, Max Planck Institute for Biological Cybernetics, Max Planck Society

Giannakakis, E (/persons/resource/persons263811)
Institutional Guests, Max Planck Institute for Biological Cybernetics, Max Planck Society

Schäfer, T (/persons/resource/persons276877)
Institutional Guests, Max Planck Institute for Biological Cybernetics, Max Planck Society

Levina, A (/persons/resource/persons173580)
Institutional Guests, Max Planck Institute for Biological Cybernetics, Max Planck Society

Wu, CM (/persons/resource/persons192578)
Institutional Guests, Max Planck Institute for Biological Cybernetics, Max Planck Society

Citation

Hamidi, M., Khajehabdollahi, S., Giannakakis, E., Schäfer, T., Levina, A., & Wu, C. (2024). Modular Growth of Hierarchical Networks: Efficient, General, and Robust Curriculum Learning. In ALIFE 2024: Proceedings of the 2024 Artificial Life Conference.


Cite as: https://hdl.handle.net/21.11116/0000-000F-C09B-9
Abstract
Structural modularity is a pervasive feature of biological neural networks and has been linked to several functional and computational advantages. Yet the use of modular architectures in artificial neural networks has remained relatively limited despite early successes. Here, we explore the performance and functional dynamics of a modular network trained on a memory task via an iterative growth curriculum. We find that, for a given classical, non-modular recurrent neural network (RNN), an equivalent modular network performs better across multiple metrics, including training time, generalizability, and robustness to some perturbations. We further examine how different aspects of a modular network's connectivity contribute to its computational capability. We then demonstrate that the inductive bias introduced by the modular topology is strong enough for the network to perform well even when the connectivity within modules is fixed and only the connections between modules are trained. Our findings suggest that gradual modular growth of RNNs could provide advantages for learning increasingly complex tasks on evolutionary timescales, and could help build more scalable and compressible artificial networks.
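The setting the abstract describes — dense within-module connectivity that is held fixed, with only sparse between-module connections trained — can be pictured as a block-structured recurrent weight matrix plus a trainability mask. The sketch below is purely illustrative and not the authors' code; the module count, module size, inter-module density, and function name are all hypothetical choices made for the example.

```python
import numpy as np

def modular_weights(n_modules=4, module_size=16, inter_density=0.05, seed=0):
    """Illustrative sketch (hypothetical parameters, not the paper's code):
    build a recurrent weight matrix with dense within-module blocks and
    sparse between-module entries, plus a mask marking only the
    between-module entries as trainable (within-module weights frozen)."""
    rng = np.random.default_rng(seed)
    n = n_modules * module_size
    W = rng.standard_normal((n, n)) / np.sqrt(n)

    # Module membership of each unit, used to separate the two regimes.
    module_id = np.repeat(np.arange(n_modules), module_size)
    between = module_id[:, None] != module_id[None, :]

    # Keep only a sparse random subset of between-module connections.
    keep = rng.random((n, n)) < inter_density
    W = np.where(between & ~keep, 0.0, W)

    # True exactly where a between-module connection exists and may be trained.
    trainable_mask = between & keep
    return W, trainable_mask

W, mask = modular_weights()
```

During training, a gradient step would be applied only where `trainable_mask` is `True`, leaving the within-module blocks at their initial values, which mirrors the fixed-intra-module condition tested in the paper.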