MRQA: Machine Reading for Question Answering

Workshop at EMNLP-IJCNLP 2019
November 4th, 2019
Room MR 201B-C
Contact: mrforqa@gmail.com

Accepted Papers

The following papers were accepted to MRQA 2019. Regular research track papers appear in the workshop proceedings, while non-archival papers are not included in the proceedings (but are given the opportunity to present at the workshop). Accepted shared task papers are also listed below.

Research Track

Inspecting Unification of Encoding and Matching with Transformer: A Case Study of Machine Reading Comprehension
Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Lei Cui, Songhao Piao and Ming Zhou

CALOR-QUEST : generating a training corpus for Machine Reading Comprehension models from shallow semantic annotations
Frederic Bechet, Cindy Aloui, Delphine Charlet, Geraldine Damnati, Johannes Heinecke, Alexis Nasr and Frederic Herledan

Improving Question Answering with External Knowledge
Xiaoman Pan, Kai Sun, Dian Yu, Jianshu Chen, Heng Ji, Claire Cardie and Dong Yu

Answer-Supervised Question Reformulation for Enhancing Conversational Machine Comprehension
Qian Li, Hui Su, Cheng Niu, Daling Wang, Zekang Li, Shi Feng and Yifei Zhang

Simple yet Effective Bridge Reasoning for Open-Domain Multi-Hop Question Answering
Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Hong Wang, Shiyu Chang, Murray Campbell and William Yang Wang

Improving the Robustness of Deep Reading Comprehension Models by Leveraging Syntax Prior
Bowen Wu, Haoyang Huang, Zongsheng Wang, Qihang Feng, Jingsong Yu and Baoxun Wang

Reasoning Over Paragraph Effects in Situations
Kevin Lin, Oyvind Tafjord, Peter Clark and Matt Gardner

Towards Answer-unaware Conversational Question Generation
Mao Nakanishi, Tetsunori Kobayashi and Yoshihiko Hayashi

Cross-Task Knowledge Transfer for Query-Based Text Summarization
Elozino Egonmwan, Vittorio Castelli and Md Arafat Sultan

BookQA: Stories of Challenges and Opportunities
Stefanos Angelidis, Lea Frermann, Diego Marcheggiani, Roi Blanco and Lluís Marquez

FlowDelta: Modeling Flow Information Gain in Reasoning for Conversational Machine Comprehension
Yi-Ting Yeh and Yun-Nung Chen

Do Multi-hop Readers Dream of Reasoning Chains?
Haoyu Wang, Mo Yu, Xiaoxiao Guo, Rajarshi Das, Wenhan Xiong and Tian Gao

Machine Comprehension Improves Domain-Specific Japanese Predicate-Argument Structure Analysis
Norio Takahashi, Tomohide Shibata, Daisuke Kawahara and Sadao Kurohashi

On Making Reading Comprehension More Comprehensive
Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Alon Talmor and Sewon Min

Multi-step Entity-centric Information Retrieval for Multi-Hop Question Answering
Rajarshi Das, Ameya Godbole, Dilip Kavarthapu, Zhiyu Gong, Abhishek Singhal, Mo Yu, Xiaoxiao Guo, Tian Gao, Hamed Zamani, Manzil Zaheer and Andrew McCallum

Evaluating Question Answering Evaluation
Anthony Chen, Gabriel Stanovsky, Sameer Singh and Matt Gardner

Bend but Don’t Break? Multi-Challenge Stress Test for QA Models
Hemant Pugaliya, James Route, Kaixin Ma, Yixuan Geng and Eric Nyberg

ReQA: An Evaluation for End-to-End Answer Retrieval Models
Amin Ahmad, Noah Constant, Yinfei Yang and Daniel Cer

Comprehensive Multi-Dataset Evaluation of Reading Comprehension
Dheeru Dua, Ananth Gottumukkala, Alon Talmor, Matt Gardner and Sameer Singh

A Recurrent BERT-based Model for Question Generation
Ying-Hong Chan and Yao-Chung Fan

Let Me Know What to Ask: Interrogative-Word-Aware Question Generation
Junmo Kang, Haritz Puerto San Roman and Sung-Hyon Myaeng

Extractive NarrativeQA with Heuristic Pre-Training
Lea Frermann

Cross Submissions (Non-Archival)

Errudite: Scalable, Reproducible, and Testable Error Analysis
Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer and Daniel Weld
Published at ACL 2019

Are Red Roses Red? Evaluating Consistency of Question-Answering Models
Marco Tulio Ribeiro, Carlos Guestrin and Sameer Singh
Published at ACL 2019

Revealing the Importance of Semantic Retrieval for Machine Reading at Scale
Yixin Nie, Songhe Wang and Mohit Bansal
Published at EMNLP-IJCNLP 2019

Discourse-Aware Semantic Self-Attention for Narrative Reading Comprehension
Todor Mihaylov and Anette Frank
Published at EMNLP-IJCNLP 2019

Shared Task Track

D-NET: A Pre-Training and Fine-Tuning Framework for Improving the Generalization of Machine Reading Comprehension
Hongyu Li, Xiyuan Zhang, Yibing Liu, Yiming Zhang, Quan Wang, Xiangyang Zhou, Jing Liu, Hua Wu and Haifeng Wang

An Exploration of Data Augmentation and Sampling Techniques for Domain-Agnostic Question Answering
Shayne Longpre, Yi Lu, Zhucheng Tu and Chris DuBois

Generalizing Question Answering System with Pre-trained Language Model Fine-tuning
Dan Su, Yan Xu, Genta Indra Winata, Peng Xu, Hyeondey Kim, Zihan Liu and Pascale Fung

CLER: Cross-task Learning with Expert Representation to Generalize Reading and Understanding
Takumi Takahashi, Motoki Taniguchi, Tomoki Taniguchi and Tomoko Ohkuma

Domain-agnostic Question-Answering with Adversarial Training
Seanie Lee, Donggyu Kim and Jangwon Park

Question Answering Using Hierarchical Attention on Top of BERT Features
Reham Osama, Nagwa El-Makky and Marwan Torki