Thursday, 20 September 2018

Multi-Task Learning for Machine Reading Comprehension. (arXiv:1809.06963v1 [cs.CL])

We propose a multi-task learning framework to jointly train a Machine Reading Comprehension (MRC) model on multiple datasets across different domains. The key to the proposed method is learning robust and general contextual representations with the help of out-of-domain data in a multi-task framework. An empirical study shows that the proposed approach is orthogonal to existing pre-trained representation models, such as word embeddings and language models. Experiments on the Stanford Question Answering Dataset (SQuAD), the Microsoft MAchine Reading COmprehension Dataset (MS MARCO), NewsQA and other datasets show that our multi-task learning approach achieves significant improvement over state-of-the-art models in most MRC tasks.
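The abstract only sketches the idea, but the usual shape of such a framework is a shared encoder with one output head per dataset, trained by interleaving mini-batches drawn from the different corpora. A minimal, purely illustrative sketch of that sampling loop follows; all names, dataset sizes, and the proportional-sampling scheme here are assumptions for illustration, not details taken from the paper:

```python
import random

# Hypothetical multi-task sampling sketch: several MRC datasets feed one
# shared encoder, and each training step draws a batch from one dataset.
# Toy (context, question, answer) triples stand in for real examples.
datasets = {
    "SQuAD":   [("ctx", "q", "a")] * 8,   # illustrative relative sizes
    "MSMARCO": [("ctx", "q", "a")] * 4,
    "NewsQA":  [("ctx", "q", "a")] * 2,
}

def sample_task(rng):
    # Sample a dataset proportionally to its size, so larger corpora
    # contribute more updates per epoch (one common mixing strategy).
    names = list(datasets)
    weights = [len(datasets[n]) for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

def train(steps, seed=0):
    rng = random.Random(seed)
    counts = {name: 0 for name in datasets}
    for _ in range(steps):
        task = sample_task(rng)
        example = rng.choice(datasets[task])
        # In a real system: encode `example` with the shared encoder
        # (updated every step) and backprop through the head for `task`.
        counts[task] += 1
    return counts

counts = train(1400)
```

With proportional sampling, the step counts roughly track dataset sizes, so the shared encoder sees all domains while each head trains mostly on its own data.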



from cs updates on arXiv.org https://ift.tt/2PQm2U7
