QuAC

Question Answering in Context

What is QuAC?

Question Answering in Context is a dataset for modeling, understanding, and participating in information-seeking dialog. Data instances consist of an interactive dialog between two crowd workers: (1) a student who poses a sequence of free-form questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts (spans) from the text. QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often more open-ended, unanswerable, or only meaningful within the dialog context.
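Schematically, a dialog looks like the sketch below. This is an invented example for illustration only, not an entry from the released data, and the field names are simplified relative to the actual JSON files:

    # Invented illustration of the student/teacher interaction; not real dataset content.
    example_dialog = {
        # The section text that only the teacher can see; shortened for illustration.
        "background": ("Daffy Duck is an animated cartoon character who first "
                       "appeared in Porky's Duck Hunt (1937)."),
        "turns": [
            # The student asks free-form questions; the teacher answers with a span.
            {"question": "When did he first appear?",
             "answer": "Porky's Duck Hunt (1937)"},
            # A follow-up that only makes sense in context and that the passage
            # cannot answer; the released files mark such turns with a dedicated string.
            {"question": "Was it a commercial success?",
             "answer": "CANNOTANSWER"},
        ],
    }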


QuAC paper

QuAC poster


QuAC is meant to be an academic resource and has significant limitations. Please read our detailed datasheet before considering it for any practical application.


Datasheet

Is QuAC exactly like SQuAD 2.0?

No. QuAC shares many principles with SQuAD 2.0, such as span-based evaluation and unanswerable questions (including the website design; big thanks for sharing the code!), but it adds a new dialog component. We expect models can be easily evaluated on both resources, and we have tried to make our evaluation protocol as similar as possible to theirs.

Getting Started

Download a copy of the dataset (distributed under the CC BY-SA 4.0 license):

To evaluate your models, we have also made available the evaluation script we will use for official evaluation, along with a sample prediction file that the script will take as input. To run the evaluation, use:

    python scorer.py --val_file <path_to_val> --model_output <path_to_predictions> --o eval.json
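Once downloaded, the dev file can be inspected with a few lines of Python. The sketch below assumes a SQuAD-style JSON layout (top-level "data", then "paragraphs", then "qas") and the dev file name current at the time of writing; verify both against your download:

    import json

    # Minimal sketch: count dialogs and questions in the dev set.
    # Field names assume a SQuAD-style layout; check them against the actual file.
    with open("val_v0.2.json") as f:
        dataset = json.load(f)["data"]

    num_dialogs, num_questions = 0, 0
    for article in dataset:                      # one entry per Wikipedia section
        for paragraph in article["paragraphs"]:  # in QuAC, one paragraph is one dialog
            num_dialogs += 1
            num_questions += len(paragraph["qas"])  # each qa is one student question
    print(num_dialogs, "dialogs,", num_questions, "questions")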

Once you have built a model that works to your expectations on the dev set, you can submit it to get official scores on the dev set and a hidden test set. To preserve the integrity of test results, we do not release the test set to the public. Instead, we require you to submit your model so that we can run it on the test set for you. The submission process is very similar to SQuAD 2.0's:

Submission Tutorial

Baseline Models

All baseline models are available through AllenNLP. Specifically, the model is here and the configuration is here.

AllenNLP Model
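As a rough sketch of retraining and evaluating the baseline with the AllenNLP command-line interface (the configuration file name and paths below are placeholders; use the configuration linked above and pin the AllenNLP version it targets):

    # Placeholder file names; substitute the linked configuration and your own paths.
    pip install allennlp
    allennlp train quac_baseline_config.json -s /tmp/quac_baseline
    allennlp evaluate /tmp/quac_baseline/model.tar.gz val_v0.2.json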

How do I get the duck in my paper?

First, download the duck (The Duck). Then, put this macro in your LaTeX source:

    \newcommand{\daffy}[0]{\includegraphics[width=.04\textwidth]{path_to_daffy/daffyhand.pdf}}
Finally, enjoy the command \daffy in your paper!
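For example, a minimal document using the macro might look like this (graphicx is required for \includegraphics, and path_to_daffy is a placeholder for wherever you saved daffyhand.pdf):

    \documentclass{article}
    \usepackage{graphicx}  % needed for \includegraphics
    % path_to_daffy is a placeholder for the directory containing daffyhand.pdf
    \newcommand{\daffy}[0]{\includegraphics[width=.04\textwidth]{path_to_daffy/daffyhand.pdf}}
    \begin{document}
    We evaluate on QuAC \daffy{} and report F1.
    \end{document}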

Have Questions?

Ask us questions at our Google group, or email eunsol@cs.washington.edu, hehe@stanford.edu, miyyer@cs.umass.edu, or marky@allenai.org.

Leaderboard

There can be only one duck.
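The leaderboard reports F1 along with the two human-equivalence metrics from the QuAC paper: HEQQ is the percentage of questions on which a system's F1 matches or exceeds the human reference, and HEQD is the percentage of dialogs where that holds for every question in the dialog. A minimal sketch of how the two aggregates relate to per-question scores (variable names are ours; official numbers come from scorer.py):

    def heq(dialogs):
        """dialogs: list of dialogs, each a list of (system_f1, human_f1) pairs,
        one pair per question. Returns (HEQQ, HEQD) as percentages."""
        q_hits, q_total, d_hits = 0, 0, 0
        for dialog in dialogs:
            per_question = [sys_f1 >= human_f1 for sys_f1, human_f1 in dialog]
            q_hits += sum(per_question)
            q_total += len(per_question)
            d_hits += all(per_question)  # dialog counts only if every question reaches human F1
        return 100.0 * q_hits / q_total, 100.0 * d_hits / len(dialogs)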

Rank | Date | Model | Organization | F1 | HEQQ | HEQD
- | - | Human Performance (Choi et al. EMNLP '18) | - | 81.1 | 100 | 100
1 | Jun 23, 2019 | TransBERT (single model) | Anonymous | 69.4 | 65.4 | 9.3
2 | Apr 24, 2019 | Bert-FlowDelta (single model) | Anonymous | 67.8 | 63.6 | 12.1
3 | Jun 13, 2019 | Context-Aware-BERT (single model) | Anonymous | 69.6 | 65.7 | 8.1
4 | Mar 14, 2019 | ConvBERT (single model) | Joint Laboratory of HIT and iFLYTEK Research | 68.0 | 63.5 | 9.1
5 | May 21, 2019 | HAM (single model) | Anonymous | 65.4 | 61.8 | 6.7
6 | Mar 7, 2019 | BERT w/ 2-context (single model) | NTT Media Intelligence Labs | 64.9 | 60.2 | 6.1
7 | Feb 21, 2019 | GraphFlow (single model) | Anonymous | 64.9 | 60.3 | 5.1
8 | Sep 26, 2018 | FlowQA (single model, https://arxiv.org/abs/1810.06683) | Allen Institute of AI | 64.1 | 59.6 | 5.8
9 | Aug 20, 2018 | BERT + History Answer Embedding (single model, https://arxiv.org/abs/1905.05412) | UMass Amherst, Alibaba PAI, Rutgers University | 62.4 | 57.8 | 5.1
10 | Aug 20, 2018 | BiDAF++ w/ 2-Context (single model) | baseline | 60.1 | 54.8 | 4.0
11 | Aug 20, 2018 | BiDAF++ (single model) | baseline | 50.2 | 43.3 | 2.2