
Released

Conference Paper

An Evaluation of Progressive Neural Networks for Transfer Learning in Natural Language Processing

MPS-Authors

Ghosh, Mainak
MPI for Innovation and Competition, Max Planck Society

Citation

Hagerer, G., Moeed, A., Dugar, S., Gupta, S., Ghosh, M., Danner, H., et al. (2020). An Evaluation of Progressive Neural Networks for Transfer Learning in Natural Language Processing. In Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020) (pp. 1376-1381). Marseille.


Cite as: http://hdl.handle.net/21.11116/0000-0007-D5B4-D
Abstract
A major challenge in modern neural networks is using previously acquired knowledge effectively for new tasks, a problem known as transfer learning. Fine-tuning, the most widely used method for achieving this, suffers from catastrophic forgetting, and the problem is often exacerbated in natural language processing (NLP). In this work, we assess progressive neural networks (PNNs) as an alternative to fine-tuning. The evaluation is based on common NLP tasks such as sequence labeling and text classification. By evaluating PNNs across a range of architectures, datasets, and tasks, we observe improvements over the baselines in all experiments.
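The core idea of a progressive network, as opposed to fine-tuning, can be sketched as follows: each task gets its own column of weights, earlier columns are frozen, and later columns receive the frozen columns' hidden activations through lateral connections. The sketch below is a minimal NumPy illustration, not the authors' implementation; the layer sizes, single-hidden-layer structure, and ReLU activation are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

class Column:
    """One column of a progressive network: a one-hidden-layer MLP.

    Frozen earlier columns feed their hidden activations into later
    columns via lateral connections, so knowledge from old tasks is
    reused without being overwritten (avoiding catastrophic forgetting).
    """
    def __init__(self, in_dim, hid_dim, out_dim, lateral_dims=()):
        self.W_in = rng.normal(0.0, 0.1, (hid_dim, in_dim))
        # One lateral weight matrix per previously trained column.
        self.W_lat = [rng.normal(0.0, 0.1, (hid_dim, d)) for d in lateral_dims]
        self.W_out = rng.normal(0.0, 0.1, (out_dim, hid_dim))

    def hidden(self, x, prev_hiddens=()):
        h = self.W_in @ x
        for W, h_prev in zip(self.W_lat, prev_hiddens):
            h = h + W @ h_prev  # lateral input from a frozen column
        return relu(h)

    def forward(self, x, prev_hiddens=()):
        return self.W_out @ self.hidden(x, prev_hiddens)

# Task 1: train column 1 (training loop omitted), then freeze it.
col1 = Column(in_dim=4, hid_dim=8, out_dim=3)
# Task 2: a new column receives lateral activations from column 1;
# only col2's weights would be updated during training on task 2.
col2 = Column(in_dim=4, hid_dim=8, out_dim=5, lateral_dims=(8,))

x = rng.normal(size=4)
h1 = col1.hidden(x)          # column 1 stays frozen
y2 = col2.forward(x, (h1,))  # column 2 reuses column 1's features
print(y2.shape)              # (5,)
```

In the paper's fine-tuning baseline, the same weights are updated for each new task; in the progressive setup sketched here, the old column's parameters are never touched, which is what prevents forgetting.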