  Bootstrapping meaning through listening: Unsupervised learning of spoken sentence embeddings

Zhu, J., Tian, Z., Liu, Y., Zhang, C., & Lo, C. (2022). Bootstrapping meaning through listening: Unsupervised learning of spoken sentence embeddings. In Findings of the Association for Computational Linguistics: EMNLP 2022.


Basic data

Genre: Conference paper


Creators

Creators:
Zhu, Jian 1, Author
Tian, Zuoyu 2, Author
Liu, Yadong 3, Author
Zhang, Cong 4, Author
Lo, Chiawen 5, Author
Affiliations:
1 University of Michigan
2 Indiana University Bloomington
3 University of British Columbia
4 Newcastle University
5 Max Planck Research Group Language Cycles, MPI for Human Cognitive and Brain Sciences, Max Planck Society

Content

Keywords: -
Abstract: Inducing semantic representations directly from speech signals is challenging, but the task has many useful applications for speech mining and spoken language understanding. This study tackles the unsupervised learning of semantic representations for spoken utterances. Through converting speech signals into hidden units generated from acoustic unit discovery, we propose WavEmbed, a multimodal sequential autoencoder that predicts hidden units from a dense representation of speech. Secondly, we also propose S-HuBERT to induce meaning through knowledge distillation, in which a sentence embedding model is first trained on hidden units and passes its knowledge to a speech encoder through contrastive learning. The best performing model achieves a moderate correlation (0.5∼0.6) with human judgments, without relying on any labels or transcriptions. Furthermore, these models can also be easily extended to leverage textual transcriptions of speech to learn much better speech embeddings that are strongly correlated with human annotations. Our proposed methods are applicable to the development of purely data-driven systems for speech mining, indexing and search.
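The abstract describes the S-HuBERT distillation setup only at a high level. The sketch below illustrates that idea under stated assumptions: a frozen sentence encoder over discrete acoustic units serves as teacher, and a speech encoder is trained with an in-batch contrastive (InfoNCE-style) loss to match the teacher's embeddings. All class names, dimensions, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of contrastive distillation from a hidden-unit sentence
# encoder (teacher) into a speech encoder (student). Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class UnitSentenceEncoder(nn.Module):
    """Teacher: embeds sequences of discrete acoustic units into a sentence vector."""
    def __init__(self, num_units=100, dim=256):
        super().__init__()
        self.embed = nn.Embedding(num_units, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)

    def forward(self, units):                  # units: (batch, seq_len) int64
        hidden, _ = self.encoder(self.embed(units))
        return hidden.mean(dim=1)              # mean-pool to a sentence embedding


class SpeechEncoder(nn.Module):
    """Student: maps frame-level speech features to a sentence vector."""
    def __init__(self, feat_dim=80, dim=256):
        super().__init__()
        self.proj = nn.Linear(feat_dim, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)

    def forward(self, feats):                  # feats: (batch, frames, feat_dim)
        hidden, _ = self.encoder(self.proj(feats))
        return hidden.mean(dim=1)


def contrastive_distillation_loss(student_emb, teacher_emb, temperature=0.05):
    """InfoNCE-style loss: each speech embedding should match the teacher embedding
    of its own hidden-unit sequence, against the other utterances in the batch."""
    student_emb = F.normalize(student_emb, dim=-1)
    teacher_emb = F.normalize(teacher_emb, dim=-1)
    logits = student_emb @ teacher_emb.t() / temperature   # (batch, batch)
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)


# Toy training step; random tensors stand in for paired (speech, units) batches.
teacher = UnitSentenceEncoder().eval()          # assumed pre-trained, kept frozen
student = SpeechEncoder()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

speech = torch.randn(8, 200, 80)                # 8 utterances of 200 frames
units = torch.randint(0, 100, (8, 50))          # their discovered acoustic units

optimizer.zero_grad()
with torch.no_grad():
    t_emb = teacher(units)
loss = contrastive_distillation_loss(student(speech), t_emb)
loss.backward()
optimizer.step()
```

The in-batch negatives and temperature scaling are a common choice for this kind of embedding distillation; the actual architecture and training details should be taken from the paper itself.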

Details

Language(s):
 Date: 2022-10-08, 2022-12
 Publication status: Published online
 Pages: -
 Place, publisher, edition: -
 Table of contents: -
 Type of review: -
 Identifiers: -
 Degree type: -

Event

Title: The 2022 Conference on Empirical Methods in Natural Language Processing
Venue: Abu Dhabi
Start/end date: 2022-12-07 - 2022-12-11


Source 1

Title: Findings of the Association for Computational Linguistics: EMNLP 2022
Source genre: Conference proceedings
Creators:
Affiliations:
Place, publisher, edition: -
Pages: - Volume / issue: - Article number: - Start / end page: - Identifier: -