Free keywords:
Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL); Learning (cs.LG)
Abstract:
Despite significant success in Visual Question Answering (VQA), VQA models
have been shown to be notoriously brittle to linguistic variations in the
questions. Due to deficiencies in models and datasets, today's models often
rely on correlations rather than predictions that are causal w.r.t. data. In
this paper, we propose a novel way to analyze and measure the robustness of
state-of-the-art models w.r.t. semantic visual variations, and we propose
ways to make models more robust against spurious correlations. Our method
performs automated semantic image manipulations and tests for consistency in
model predictions, both to quantify model robustness and to generate
synthetic data that counters these problems. We perform our analysis on three
diverse, state-of-the-art VQA models and a variety of question types, with a
particular focus on challenging counting questions. In addition, we show that
models can be made significantly more robust against inconsistent predictions
using our edited data. Finally, we show that our findings also translate to
real-world error cases of state-of-the-art models, resulting in improved
overall performance.
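
As a rough illustration of the consistency test described above (not the authors' actual pipeline), the following hypothetical Python sketch checks a counting question before and after a semantic image edit that removes one instance of the queried object; the `model.predict` interface and the pre-edited image are assumed stand-ins rather than part of any released codebase.

```python
# Hypothetical sketch of a consistency check for counting questions:
# if one instance of the queried object is removed from the image,
# a robust model's count should drop by exactly one.

def answer_count(model, image, question):
    """Query a VQA model and parse its answer as an integer count.
    Assumes `model.predict(image, question)` returns a string like "3"."""
    return int(model.predict(image, question))

def counting_consistency(model, question, original_image, edited_image):
    """Return True if the model's count decreases by exactly one on
    `edited_image`, which is assumed to be `original_image` with one
    instance of the queried object semantically removed (e.g. inpainted)."""
    original = answer_count(model, original_image, question)
    edited = answer_count(model, edited_image, question)
    return edited == original - 1
```

Answers that violate this relation flag inconsistent predictions, and, as the abstract describes, the edited images themselves can double as synthetic data for making models more robust.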