Wednesday, 28 March 2018

Leveraging translations for speech transcription in low-resource settings. (arXiv:1803.08991v1 [cs.CL])

Recently proposed data collection frameworks for endangered language documentation aim not only to collect speech in the language of interest, but also to collect translations into a high-resource language that will render the collected resource interpretable. We focus on this scenario and explore whether we can improve transcription quality under these extremely low-resource settings with the assistance of text translations. We present a neural multi-source model and evaluate several variations of it on three low-resource datasets. We find that our multi-source model with shared attention outperforms the baselines, reducing transcription character error rate by up to 12.3%.
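The core idea of the multi-source model with shared attention is that one set of attention parameters scores both the speech encoder states and the translation encoder states, and the resulting contexts are combined for decoding. The paper does not give implementation details here, so the following is a minimal NumPy sketch under assumed shapes and names (`shared_attention`, the projection `W`, and the dimensions are all hypothetical, not from the paper):

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def shared_attention(dec_state, speech_enc, text_enc, W):
    """Attend over two source encoders with ONE shared projection W.

    dec_state:  (d,)   current decoder hidden state
    speech_enc: (T, d) speech encoder states
    text_enc:   (S, d) translation encoder states
    Returns the concatenated context vectors, shape (2*d,).
    """
    # the same bilinear scoring parameters W are reused for both sources
    speech_scores = speech_enc @ W @ dec_state   # (T,)
    text_scores = text_enc @ W @ dec_state       # (S,)
    speech_ctx = softmax(speech_scores) @ speech_enc  # (d,)
    text_ctx = softmax(text_scores) @ text_enc        # (d,)
    return np.concatenate([speech_ctx, text_ctx])

# toy example with random states
rng = np.random.default_rng(0)
d = 4
speech = rng.standard_normal((6, d))   # 6 speech frames
text = rng.standard_normal((5, d))     # 5 translation tokens
W = rng.standard_normal((d, d))
dec = rng.standard_normal(d)
ctx = shared_attention(dec, speech, text, W)
print(ctx.shape)  # (8,)
```

Sharing `W` across sources, rather than learning a separate attention per encoder, is the design choice the abstract highlights; with little transcribed data, tying parameters this way reduces what must be learned from the low-resource side.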



from cs updates on arXiv.org https://ift.tt/2E0IgfM
