This is a system demonstration of the CoQEx methodology introduced by Ghosh et al. Given a full-fledged or a telegraphic query on entity counts, CoQEx
uses a span-based QA model to separately extract candidate count contexts and instances from the top-50 search-engine snippets. The user is shown the following components.
The answer is predicted by distribution-aware inference over the count contexts.
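The mechanics of distribution-aware inference are not spelled out here; one plausible reading is that each candidate count carries a model confidence and the answer is chosen from the resulting weighted distribution. The sketch below uses a confidence-weighted median for illustration; the function name and weighting scheme are assumptions, not CoQEx's actual method.

```python
# Hypothetical sketch: infer a single answer from a distribution of
# candidate count contexts, each paired with a model confidence.
# The confidence-weighted median resists outlier counts from noisy snippets.

def infer_answer(candidates):
    """candidates: list of (count, confidence) pairs."""
    ordered = sorted(candidates)                  # sort by count value
    total = sum(conf for _, conf in ordered)
    cum = 0.0
    for count, conf in ordered:
        cum += conf
        if cum >= total / 2:                      # weighted median reached
            return count
    return ordered[-1][0]

print(infer_answer([(700, 0.9), (300, 0.4), (710, 0.8)]))  # → 700
```

With the example candidates above, the scattered count 300 is outweighed by the cluster around 700, so the inferred answer is 700.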
The count contexts are further classified into semantic groups with respect to the inferred answer, forming the explanation by contexts: each context is grouped based on whether it is consistent with the inferred answer, represents a subset of it, or is incomparable to it.
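The grouping criterion can be illustrated with a simple numeric comparison against the inferred answer; the tolerance threshold and group labels below are assumptions for the sketch, not CoQEx's actual classifier.

```python
# Illustrative grouping of a count context relative to the inferred answer:
# "consistent" if within a relative tolerance, "subset" if clearly smaller,
# "incomparable" otherwise. The 10% tolerance is an arbitrary choice.

def group_context(count, answer, tol=0.1):
    if abs(count - answer) <= tol * answer:
        return "consistent"
    if count < answer:
        return "subset"
    return "incomparable"

print(group_context(710, 700))   # → consistent
print(group_context(200, 700))   # → subset
print(group_context(2000, 700))  # → incomparable
```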
CoQEx extracts the answer type from the query and ranks the instances by their compatibility with it. These form the explanation by instances, since they ground the counts in their constituent entities.
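How compatibility with the answer type is scored is not detailed here; a real system would likely use a learned type-compatibility model. The sketch below stands in with token overlap between each instance's snippet context and the answer type ("languages spoken" in the running example); all names and data are hypothetical.

```python
# Hypothetical instance ranking: score each candidate instance by lexical
# overlap between its snippet context and the answer type from the query.
# This token-overlap score is only a stand-in for a type classifier.

def rank_instances(instances, answer_type):
    """instances: list of (name, context_tokens) pairs."""
    type_tokens = set(answer_type.lower().split())
    def score(item):
        _, ctx = item
        return len(type_tokens & {t.lower() for t in ctx})
    return [name for name, _ in sorted(instances, key=score, reverse=True)]

ranked = rank_instances(
    [("Jakarta", ["capital", "city"]),
     ("Javanese", ["language", "spoken", "in", "Java"])],
    "languages spoken")
print(ranked)  # Javanese ranks above Jakarta
```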
The snippets are annotated with the count context and instance candidates to form the explanation by provenance.
You can use this demonstration to ask any query on entity counts (see suggested questions)*.
For instance, "how many languages are spoken in Indonesia?" is a query about the entity Indonesia and the set of languages spoken in the country. CoQEx can answer this query through count contexts, such as "estimated 700 languages", and grounding instances, such as Javanese and Sundanese.
For immediate results, try questions from the CoQuAD dataset (in the dropdown) or browse through (recent questions) to see how CoQEx performed on previously unseen user queries.
*We have a daily query limit of 100 live questions.
Explanation by Contexts
Explanation by Instances
Explanation by Provenance (Snippets)
Model confidence thresholds for candidate selection: