Look And Answer The Question: On The Role Of Vision In Embodied Question Answering
Nikolai Ilinykh, Yasmeen Emampoor, Simon Dobnik
Demo/Poster Session 2 - Wednesday 07/20 15:00 EST
Abstract:
We focus on the Embodied Question Answering (EQA) task, its dataset, and the models (Das et al., 2018). In particular, we examine the effects of vision perturbation at different levels by providing the model with incongruent, black, or random-noise images. We observe that the model is still able to learn from general visual patterns, suggesting that it captures some common-sense reasoning about the visual world. We argue that better data and models are required to achieve higher performance in predicting (generating) correct answers. The code is available here: https://github.com/GU-CLASP/embodied-qa.
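The three perturbation conditions named in the abstract are straightforward to reproduce. Below is a minimal sketch in PyTorch; the function name `perturb_frame`, the tensor layout, and the interface are illustrative assumptions on our part, not the paper's released implementation (see the linked repository for that).

```python
from typing import Optional

import torch


def perturb_frame(frame: torch.Tensor, mode: str,
                  other_frame: Optional[torch.Tensor] = None) -> torch.Tensor:
    """Return a perturbed copy of an RGB frame of shape (C, H, W) in [0, 1].

    Modes mirror the three conditions in the abstract; names and shapes
    are assumptions for illustration.
    """
    if mode == "black":
        # Black image: all visual information removed.
        return torch.zeros_like(frame)
    if mode == "noise":
        # Random-noise image: pixel values drawn uniformly from [0, 1).
        return torch.rand_like(frame)
    if mode == "incongruent":
        # Incongruent image: a frame from a different, unrelated episode,
        # supplied by the caller.
        if other_frame is None:
            raise ValueError("incongruent mode needs a substitute frame")
        return other_frame
    raise ValueError(f"unknown perturbation mode: {mode}")
```

In this sketch, the "black" and "noise" conditions test whether the model relies on visual content at all, while the "incongruent" condition tests whether it relies on the *correct* visual content rather than generic scene statistics.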