Generating Coherent And Informative Descriptions For Groups Of Visual Objects And Categories: A Simple Decoding Approach

Nazia Attari, David Schlangen, Martin Heckmann, Heiko Wersing, Sina Zarrieß

Oral Session 4 - Thursday 07/21 16:40 EST
Abstract: State-of-the-art image captioning models achieve strong performance in generating descriptions for instances of visual categories and reasoning about them, e.g., enforcing distinctiveness of the description in the context of distractors. In this work, we propose an inference mechanism that extends an instance-level captioning model to generate coherent and informative descriptions for groups of visual objects from the same or different categories. We test our model in the domain of bird descriptions. We show that group-level descriptions generated by our method are (i) coherent, pulling together properties that hold for all or a majority of the group's instances, and (ii) informative, as they allow an external BERT-based text classifier to identify the target category more accurately than single-instance captions do, and are preferred by human evaluators.
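To give a rough sense of what such group-level decoding might look like, the sketch below greedily decodes a single caption while averaging the per-instance next-token log-probabilities of an instance-level captioner. This is only a minimal illustration under assumptions, not the authors' exact inference mechanism; `step_fn`, `bos_id`, `eos_id`, and the averaging pooling rule are hypothetical placeholders.

```python
# Minimal sketch: group-level greedy decoding by pooling per-instance
# next-token distributions. NOT the paper's exact mechanism; `step_fn`,
# `bos_id`, and `eos_id` stand in for whatever instance-level captioner
# is actually used.
from typing import Callable, List, Sequence

import torch


def decode_group_caption(
    step_fn: Callable[[torch.Tensor, List[int]], torch.Tensor],
    instance_feats: Sequence[torch.Tensor],
    bos_id: int,
    eos_id: int,
    max_len: int = 30,
) -> List[int]:
    """Greedily decode ONE caption for a group of instances.

    At each step, the instance-level model scores the next token for every
    instance separately; the per-instance log-probabilities are averaged so
    the chosen token tends to be plausible for the group as a whole.
    """
    prefix = [bos_id]
    for _ in range(max_len):
        # Next-token log-probabilities, one row per instance.
        per_instance = torch.stack(
            [torch.log_softmax(step_fn(feats, prefix), dim=-1)
             for feats in instance_feats]
        )
        # Average in log-space: one simple way to pool evidence across the group.
        pooled = per_instance.mean(dim=0)
        next_id = int(pooled.argmax())
        prefix.append(next_id)
        if next_id == eos_id:
            break
    return prefix
```

Other pooling rules are conceivable, e.g., taking the minimum over instances instead of the mean, which would bias the decoder toward properties shared by every member of the group rather than by most of them.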