
Files in This Item:
j.csl.2012.02.003.pdf (517.86 kB, Adobe PDF)
Title: A monotonic statistical machine translation approach to speaking style transformation
Authors: Neubig, Graham
Akita, Yuya
Mori, Shinsuke
Kawahara, Tatsuya
Keywords: Rich transcription
Speaking style transformation
Disfluency detection
Weighted finite state transducers
Monotonic machine translation
Issue Date: Oct-2012
Publisher: Elsevier Ltd.
Journal title: Computer Speech & Language
Volume: 26
Issue: 5
Start page: 349
End page: 370
Abstract: This paper presents a method for automatically transforming faithful transcripts or automatic speech recognition (ASR) results into clean transcripts for human consumption, using a framework we label speaking style transformation (SST). We perform a detailed analysis of the types of corrections made by human stenographers when creating clean transcripts, and propose a model that handles the majority of the most common corrections. In particular, the proposed model uses a framework of monotonic statistical machine translation to perform not only the deletion of disfluencies and insertion of punctuation, but also the correction of colloquial expressions, insertion of omitted words, and other transformations. We provide a detailed description of the model's implementation in the weighted finite state transducer (WFST) framework. An evaluation on both faithful transcripts and ASR results of parliamentary and lecture speech demonstrates the effectiveness of the proposed model in performing the wide variety of corrections necessary for creating clean transcripts.
Rights: © 2012 Elsevier Ltd.
This is not the published version. Please cite only the published version.
DOI(Published Version): 10.1016/j.csl.2012.02.003
Appears in Collections: Journal Articles
