
Released

Conference Paper

An Evaluation of Progressive Neural Networks for Transfer Learning in Natural Language Processing

MPS-Authors

Gosh,  Mainak
MPI for Innovation and Competition, Max Planck Society;

Citation

Hagerer, G., Moeed, A., Dugar, S., Gupta, S., Mitevski, O., Gosh, M., et al. (2020). An Evaluation of Progressive Neural Networks for Transfer Learning in Natural Language Processing. In Proceedings of the 12th Language Resources and Evaluation Conference (pp. 1376-1381). Marseille: European Language Resources Association.


Cite as: https://hdl.handle.net/21.11116/0000-0006-AC87-0
Abstract
A major challenge in modern neural networks is the utilization of previous knowledge for new tasks in an effective manner, otherwise known as transfer learning. Fine-tuning, the most widely used method for achieving this, suffers from catastrophic forgetting. The problem is often exacerbated in natural language processing (NLP). In this work, we assess progressive neural networks (PNNs) as an alternative to fine-tuning. The evaluation is based on common NLP tasks such as sequence labeling and text classification. By gauging PNNs across a range of architectures, datasets, and tasks, we observe improvements over the baselines throughout all experiments.
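The abstract describes progressive neural networks as an alternative to fine-tuning: parameters trained on an earlier task are frozen, and a new "column" of parameters for the next task receives lateral connections from the frozen column's hidden activations. A minimal NumPy sketch of this forward pass (all layer sizes, weight names, and the single-hidden-layer setup are illustrative assumptions, not the paper's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Column 1: parameters trained on task A, then frozen.
W1_in = rng.normal(size=(4, 8))    # input -> hidden (task A)
W1_out = rng.normal(size=(8, 3))   # hidden -> output (task A)

# Column 2: fresh parameters for task B, plus a lateral
# connection U_lateral that reads column 1's frozen hidden layer.
W2_in = rng.normal(size=(4, 8))
U_lateral = rng.normal(size=(8, 8))
W2_out = rng.normal(size=(8, 3))

def forward_task_b(x):
    # Column 1 runs in inference mode only; its weights never change,
    # which is how the architecture avoids catastrophic forgetting.
    h1 = relu(x @ W1_in)
    # Column 2 combines its own features with the lateral input.
    h2 = relu(x @ W2_in + h1 @ U_lateral)
    return h2 @ W2_out

y = forward_task_b(rng.normal(size=(2, 4)))
print(y.shape)  # (2, 3)
```

During task-B training, gradients would update only `W2_in`, `U_lateral`, and `W2_out`; the frozen task-A column keeps its learned representation intact.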