Item Details


Released

Conference Paper

Bootstrapping meaning through listening: Unsupervised learning of spoken sentence embeddings

MPS-Authors
/persons/resource/persons275094

Lo, Chiawen
Max Planck Research Group Language Cycles, MPI for Human Cognitive and Brain Sciences, Max Planck Society;

External Resource
There are no locators available
Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
There are no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Zhu, J., Tian, Z., Liu, Y., Zhang, C., & Lo, C. (2022). Bootstrapping meaning through listening: Unsupervised learning of spoken sentence embeddings. In Findings of the Association for Computational Linguistics: EMNLP 2022.


Cite as: https://hdl.handle.net/21.11116/0000-000B-44E7-4
Abstract
Inducing semantic representations directly from speech signals is challenging, but the task has many useful applications in speech mining and spoken language understanding. This study tackles the unsupervised learning of semantic representations for spoken utterances. By converting speech signals into hidden units generated through acoustic unit discovery, we propose WavEmbed, a multimodal sequential autoencoder that predicts hidden units from a dense representation of speech. Second, we propose S-HuBERT to induce meaning through knowledge distillation, in which a sentence embedding model is first trained on hidden units and then passes its knowledge to a speech encoder through contrastive learning. The best-performing model achieves a moderate correlation (0.5∼0.6) with human judgments without relying on any labels or transcriptions. Furthermore, these models can easily be extended to leverage textual transcriptions of speech to learn much better speech embeddings that are strongly correlated with human annotations. Our proposed methods are applicable to the development of purely data-driven systems for speech mining, indexing, and search.
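
The contrastive distillation step described in the abstract can be illustrated with a minimal sketch. The PyTorch code below is an illustrative assumption of how S-HuBERT's second stage might be set up, not the authors' released implementation: each speech embedding is pulled toward the frozen teacher's embedding of the same utterance, with the rest of the batch serving as negatives. The function name, temperature value, and usage variables are all hypothetical.

import torch
import torch.nn.functional as F

def contrastive_distillation_loss(speech_emb, teacher_emb, temperature=0.05):
    # speech_emb:  (batch, dim) embeddings from the trainable speech encoder
    # teacher_emb: (batch, dim) embeddings from the frozen sentence embedding
    #              model trained on discovered hidden units
    speech_emb = F.normalize(speech_emb, dim=-1)
    teacher_emb = F.normalize(teacher_emb, dim=-1)
    # Pairwise cosine similarities; diagonal entries are the matching pairs.
    logits = speech_emb @ teacher_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)

# Usage (hypothetical): the teacher stays frozen, so only the speech encoder
# receives gradients.
# with torch.no_grad():
#     teacher_emb = teacher(hidden_units)
# loss = contrastive_distillation_loss(speech_encoder(waveforms), teacher_emb)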