Publication Detail
Unsupervised Language Model Adaptation for Speech Recognition with no Extra Resources
BENEŠ, K.; IRIE, K.; BECK, E.; SCHLÜTER, R.; NEY, H.
Original title
Unsupervised Language Model Adaptation for Speech Recognition with no Extra Resources
Type
conference paper not indexed in WoS or Scopus
Language
English
Original abstract
Classically, automatic speech recognition (ASR) models are decomposed into acoustic models and language models (LMs). LMs usually exploit linguistic structure on a purely textual level and typically contribute strongly to an ASR system's performance. LMs are estimated on large amounts of textual data covering the target domain. However, most utterances cover more specific topics, e.g. influencing the vocabulary used. It is therefore desirable to have the LM adjusted to an utterance's topic. Previous work achieves this by crawling extra data from the web or by using significant amounts of previous speech data to train topic-specific LMs on. We propose a way of adapting the LM directly using the target utterance to be recognized. The corresponding adaptation needs to be done in an unsupervised or automatically supervised way based on the speech input. To deal with the corresponding errors robustly, we employ topic encodings from the recently proposed Subspace Multinomial Model. This model also avoids any need for explicit topic labelling during training or recognition, making the proposed method straightforward to use. We demonstrate the performance of the method on the Librispeech corpus, which consists of read fiction books, and we discuss its behaviour qualitatively.
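The abstract's core idea, adapting an LM from the target utterance's own (possibly erroneous) first-pass transcript, can be illustrated with a deliberately simplified sketch. Note that the paper's actual method uses topic encodings from the Subspace Multinomial Model; the cache-style unigram interpolation below (function name, vocabulary, and interpolation weight are all invented for illustration) is only meant to show the shape of such an unsupervised adaptation step:

```python
from collections import Counter

def adapt_lm(base_unigram, first_pass_hyp, interp_weight=0.3):
    """Illustrative stand-in for unsupervised LM adaptation:
    interpolate a base unigram LM with a cache distribution
    estimated from the first-pass recognition hypothesis.
    (The paper itself adapts via SMM topic encodings, not a cache.)
    """
    hyp_counts = Counter(first_pass_hyp)
    total = sum(hyp_counts.values())
    adapted = {}
    for word, p_base in base_unigram.items():
        # Probability of the word under the first-pass hypothesis.
        p_hyp = hyp_counts.get(word, 0) / total
        # Linear interpolation: boost words seen in the utterance.
        adapted[word] = (1 - interp_weight) * p_base + interp_weight * p_hyp
    return adapted

# Toy example: after a first pass that recognized "a a c",
# the adapted LM shifts probability mass toward "a" and "c".
base = {"a": 0.5, "b": 0.3, "c": 0.2}
adapted = adapt_lm(base, ["a", "a", "c"])
```

A second decoding pass would then rescore the utterance with the adapted LM; robustness to first-pass errors is exactly why the paper prefers SMM topic encodings over raw hypothesis counts.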
Keywords
speech recognition
Authors
BENEŠ, K.; IRIE, K.; BECK, E.; SCHLÜTER, R.; NEY, H.
Published
18 March 2019
Publisher
DEGA Head office, Deutsche Gesellschaft für Akustik
Place
Rostock
ISBN
978-3-939296-14-0
Book
Proceedings of DAGA 2019
Pages from
954
Pages to
957
Page count
4
URL
https://www.dega-akustik.de/publikationen/online-proceedings/
BibTex
@inproceedings{BUT160005,
author="BENEŠ, K. and IRIE, K. and BECK, E. and SCHLÜTER, R. and NEY, H.",
title="Unsupervised Language Model Adaptation for Speech Recognition with no Extra Resources",
booktitle="Proceedings of DAGA 2019",
year="2019",
pages="954--957",
publisher="DEGA Head office, Deutsche Gesellschaft für Akustik",
address="Rostock",
isbn="978-3-939296-14-0",
url="https://www.dega-akustik.de/publikationen/online-proceedings/"
}
Documents