FlexCap: Describe Anything in Images in Controllable Detail

Debidatta Dwibedi1, Vidhi Jain1,2, Jonathan Tompson1, Andrew Zisserman1, Yusuf Aytar1
   1Google DeepMind      2Carnegie Mellon University   

Accepted at NeurIPS 2024

Abstract

We introduce a versatile flexible-captioning vision-language model (VLM) capable of generating region-specific descriptions of varying lengths. The model, FlexCap, is trained to produce length-conditioned captions for input bounding boxes, which allows control over the information density of its output, with descriptions ranging from concise object labels to detailed captions. To achieve this, we create large-scale training datasets of image region descriptions of varying length, starting from captioned images.
This flexible-captioning capability has several valuable applications. First, FlexCap demonstrates superior performance in dense captioning tasks on the Visual Genome dataset. Second, a visual question answering (VQA) system can be built by employing FlexCap to generate localized descriptions as inputs to a large language model. The resulting system achieves state-of-the-art zero-shot performance on a number of VQA datasets. We also demonstrate that a localize-then-describe approach with FlexCap can be better at open-ended object detection than a describe-then-localize approach with other VLMs. We highlight a novel characteristic of FlexCap: its ability to extract diverse visual information through prefix conditioning. Finally, we qualitatively demonstrate FlexCap's broad applicability in tasks such as image labeling, object attribute recognition, and visual dialog.

Describing the Same Region with Different Lengths


FlexCap generates controllably rich localized descriptions for any region in an image. Because caption length is an explicit input, the full spectrum of valid descriptions can be explored, from short object category names to fully detailed captions.
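To make the length control concrete, here is a minimal sketch of querying one region at several target lengths. FlexCap's interface is not public, so `caption_fn` below is a hypothetical stand-in for a model call conditioned on a bounding box and a desired word count.

from typing import Callable, Dict, Sequence, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2), normalized coordinates

def describe_at_lengths(
    caption_fn: Callable[[Box, int], str],  # hypothetical length-conditioned model call
    box: Box,
    lengths: Sequence[int] = (1, 2, 4, 8, 16),
) -> Dict[int, str]:
    """Caption the same region at several target lengths, from label to detailed sentence."""
    return {n: caption_fn(box, n) for n in lengths}

# Stub model so the sketch runs end to end; a real model returns roughly n words.
def stub_caption_fn(box: Box, n: int) -> str:
    return " ".join(["word"] * n)

print(describe_at_lengths(stub_caption_fn, box=(0.10, 0.20, 0.55, 0.80)))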





Describing Different Regions of the Same Image



FlexCap can help in open-world detection by describing salient regions. Unlike prior dense-captioning work, FlexCap generates more diverse sentences that describe visual content in controllable detail.
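As a rough sketch, the localize-then-describe pipeline can be read as: propose class-agnostic boxes first, then caption each box. Both `propose_fn` and `caption_fn` below are hypothetical placeholders; the paper's actual proposal and captioning components may differ.

from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]

def localize_then_describe(
    propose_fn: Callable[[object], List[Box]],      # class-agnostic region proposals
    caption_fn: Callable[[object, Box, int], str],  # hypothetical FlexCap-style call
    image: object,
    caption_length: int = 8,
) -> List[Tuple[Box, str]]:
    """Open-ended detection: localize candidate regions, then describe each one."""
    boxes = propose_fn(image)
    return [(box, caption_fn(image, box, caption_length)) for box in boxes]

# Example with stubs so the sketch runs:
print(localize_then_describe(
    propose_fn=lambda img: [(0.0, 0.0, 0.5, 0.5)],
    caption_fn=lambda img, box, n: "a cat sleeping on a window sill",
    image=None,
))

Because the vocabulary is open, each box gets a free-form description rather than a label from a fixed class list.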

Below, we present a showcase of region-captioning results.




Extracting Object Attributes with Prefixes

Training FlexCap on a large dataset leads to an emergent capability: the model can extract targeted information about a specific image region via input text prefixes. Below are some examples of attributes that FlexCap can extract; a minimal sketch of issuing such prefix queries follows the list.



Human Action: The person is _____
Object Use: This is used for _____
Text (OCR): The sign says _____
Book Title (from cover): This book is called _____
Author (from cover): Written by _____
Photo Location: The photo was taken _____
Noteworthy Aspects: Notice _____
Object Material: It is made of _____
Object Color: The color is _____
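Here is one way such prefix queries could be issued, assuming a hypothetical `generate_fn` that forces the caption to start with the given prefix and returns the completion; the prefix strings are the ones listed above.

from typing import Callable, Tuple

Box = Tuple[float, float, float, float]

PREFIXES = {
    "human_action":   "The person is",
    "object_use":     "This is used for",
    "ocr_text":       "The sign says",
    "book_title":     "This book is called",
    "author":         "Written by",
    "photo_location": "The photo was taken",
    "noteworthy":     "Notice",
    "material":       "It is made of",
    "color":          "The color is",
}

def extract_attribute(
    generate_fn: Callable[[object, Box, str], str],  # hypothetical: completes a forced prefix
    image: object,
    box: Box,
    attribute: str,
) -> str:
    """Steer generation toward one attribute by fixing the start of the caption."""
    prefix = PREFIXES[attribute]
    completion = generate_fn(image, box, prefix)
    return f"{prefix} {completion}"

# Example with a stub model so the sketch runs:
print(extract_attribute(
    generate_fn=lambda img, box, prefix: "holding a surfboard",
    image=None,
    box=(0.2, 0.1, 0.8, 0.9),
    attribute="human_action",
))  # -> "The person is holding a surfboard"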




FlexCapLLM

Rich localized captions generated by FlexCap can be passed directly to large language models (LLMs) to enable zero-shot visual question answering.
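A minimal sketch of the idea: flatten the localized captions into a plain-text prompt that any off-the-shelf LLM can answer. The box notation and prompt wording here are illustrative assumptions, not the exact templates from the paper.

from typing import List, Tuple

Box = Tuple[float, float, float, float]

def build_vqa_prompt(region_captions: List[Tuple[Box, str]], question: str) -> str:
    """Turn (box, caption) pairs into a text-only zero-shot VQA prompt."""
    lines = ["Image regions and their descriptions:"]
    for (x1, y1, x2, y2), caption in region_captions:
        lines.append(f"- box ({x1:.2f}, {y1:.2f}, {x2:.2f}, {y2:.2f}): {caption}")
    lines.append(f"Question: {question}")
    lines.append("Answer:")
    return "\n".join(lines)

print(build_vqa_prompt(
    [((0.1, 0.1, 0.4, 0.9), "a person in a red jacket holding a blue umbrella")],
    "What color is the umbrella?",
))

Since the LLM sees only text, neither model needs VQA-specific fine-tuning.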

Here we present some results of FlexCapLLM. Note: in the images below, "FlexCap" refers to the full FlexCapLLM system.




BibTeX
@inproceedings{dwibedi2024flexcap,
  title={FlexCap: Describe Anything in Images in Controllable Detail},
  author={Debidatta Dwibedi and Vidhi Jain and Jonathan Tompson and Andrew Zisserman and Yusuf Aytar},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024},
  url={https://openreview.net/forum?id=P5dEZeECGu}
}