
Released

Talk

General cross-architecture distillation of pretrained language models into matrix embedding

MPS-Authors

Galke, Lukas
Language and Genetics Department, MPI for Psycholinguistics, Max Planck Society;
Language Evolution and Adaptation in Diverse Situations (LEADS), MPI for Psycholinguistics, Max Planck Society;

Citation

Galke, L., Cuber, I., Meyer, C., Nölscher, H. F., Sonderecker, A., & Scherp, A. (2022). General cross-architecture distillation of pretrained language models into matrix embedding. Talk presented at the IEEE International Joint Conference on Neural Networks (IJCNN 2022), part of the IEEE World Congress on Computational Intelligence (WCCI 2022). Padua, Italy. 2022-07-18 - 2022-07-23.


Cite as: https://hdl.handle.net/21.11116/0000-000A-F01C-8
Abstract
No abstract is available for this item.