QuAC

Question Answering in Context

What is QuAC?

Question Answering in Context is a dataset for modeling, understanding, and participating in information-seeking dialog. Data instances consist of an interactive dialog between two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts (spans) from the text. QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often more open-ended, unanswerable, or only meaningful within the dialog context.


QuAC paper


QuAC is meant to be an academic resource and has significant limitations. Please read our detailed datasheet before considering it for any practical application.


Datasheet

Is QuAC exactly like SQuAD 2.0?

No. QuAC shares many principles with SQuAD 2.0, such as span-based evaluation and unanswerable questions (including website design principles; big thanks for sharing the code!), but it adds a new dialog component. We expect models can be easily evaluated on both resources, and we have tried to make our evaluation protocol as similar as possible to theirs.
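
For concreteness, span-based evaluation compares a predicted answer span to a reference span by word overlap. Below is a minimal Python sketch of the SQuAD-style word-level F1 that this protocol builds on; the official scorer additionally normalizes text and aggregates over multiple reference answers, so treat this as illustrative rather than the official metric.

    from collections import Counter

    def word_f1(prediction: str, gold: str) -> float:
        """Word-overlap F1 between a predicted span and a gold span.

        A minimal sketch of SQuAD-style span scoring; the official
        scorer also normalizes text (punctuation, articles, etc.) and
        takes the best match over multiple references.
        """
        pred_tokens = prediction.lower().split()
        gold_tokens = gold.lower().split()
        common = Counter(pred_tokens) & Counter(gold_tokens)
        num_same = sum(common.values())
        if num_same == 0:
            return 0.0
        precision = num_same / len(pred_tokens)
        recall = num_same / len(gold_tokens)
        return 2 * precision * recall / (precision + recall)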

Getting Started

Download a copy of the dataset (distributed under the CC BY-SA 4.0 license).
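
Once downloaded, the file can be iterated with standard JSON tooling. A minimal sketch, assuming the SQuAD-style layout of the public release ("data" → "paragraphs" → "qas"); the local filename is a hypothetical placeholder, and field names should be double-checked against your copy.

    import json

    # Load the QuAC dev file (hypothetical local filename).
    with open("quac_val.json") as f:
        data = json.load(f)["data"]

    for section in data:
        for paragraph in section["paragraphs"]:
            context = paragraph["context"]
            for qa in paragraph["qas"]:  # questions in dialog order
                print(qa["id"], qa["question"])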

To evaluate your models, we have also made available the evaluation script we will use for official evaluation, along with a sample prediction file that the script will take as input. To run the evaluation, use:

    python scorer.py --val_file <path_to_val> --model_output <path_to_predictions> --o eval.json
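
The sample prediction file defines the exact schema the scorer expects. As a rough, hypothetical illustration, the model output is JSON-lines, with each line covering one dialog and mapping question ids to predicted spans (QuAC uses the special string CANNOTANSWER for unanswerable questions). The field names below are assumptions; defer to the sample file.

    {"qid": ["C_example_q#0", "C_example_q#1"],
     "best_span_str": ["in the 1960s", "CANNOTANSWER"]}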

Once you have built a model that works to your expectations on the dev set, you can submit it to get official scores on the dev set and a hidden test set. To preserve the integrity of test results, we do not release the test set to the public. Instead, we require you to submit your model so that we can run it on the test set for you. The submission process is very similar to SQuAD 2.0's (coming soon):

Submission Tutorial

Baseline Models

All baseline models are available through AllenNLP.

AllenNLP Model
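
If you use the AllenNLP baseline, a rough sketch of generating dev-set predictions with AllenNLP's standard predict command (the archive path and input file below are hypothetical placeholders) is:

    allennlp predict /path/to/quac_baseline.tar.gz <path_to_val_questions> --output-file predictions.jsonl

Consult the AllenNLP documentation for the exact input format its predictor expects.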

How do I get the duck in my paper?

First, download the duck.

The Duck

Then, put this macro in your LaTeX preamble (it requires the graphicx package for \includegraphics; path_to_daffy is a placeholder for wherever you saved the file):

    \usepackage{graphicx}  % for \includegraphics
    \newcommand{\daffy}[0]{\includegraphics[width=.04\textwidth]{path_to_daffy/daffyhand.pdf}}

Finally, enjoy the command \daffy in your paper!

Have Questions?

Ask us questions at our Google group or by email: eunsol@cs.washington.edu, hehe@stanford.edu, miyyer@cs.umass.edu, marky@allenai.org.

Leaderboard

There can be only one duck.

Rank | Date         | Model                                         | F1   | HEQ-Q | HEQ-D
-    | -            | Human Performance (Choi et al. EMNLP '18)     | 81.1 | 100   | 100
1    | Aug 20, 2018 | BiDAF++ w/ 2-Context (baseline), single model | 60.1 | 54.8  | 4.0
2    | Aug 20, 2018 | BiDAF++ (baseline), single model              | 50.2 | 43.3  | 2.2
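
For reference, F1 is word-level overlap with the reference answers, while HEQ-Q and HEQ-D are the human-equivalence metrics from the QuAC paper: HEQ-Q is the percentage of questions on which a model's F1 meets or exceeds human F1, and HEQ-D is the percentage of dialogs in which that holds for every question. A minimal sketch, assuming you already have paired per-question model and human F1 scores grouped by dialog:

    def heq(dialogs):
        """Compute (HEQ-Q, HEQ-D) as percentages.

        `dialogs` is a list of dialogs, each a list of
        (model_f1, human_f1) pairs, one per question. A minimal sketch
        of the paper's definitions, not the official scorer.
        """
        questions = [pair for dialog in dialogs for pair in dialog]
        heq_q = 100.0 * sum(m >= h for m, h in questions) / len(questions)
        heq_d = 100.0 * sum(all(m >= h for m, h in dialog)
                            for dialog in dialogs) / len(dialogs)
        return heq_q, heq_d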