Auto Cherry-Picker: Learning from High-quality Generative Data Driven by Language

1Shanghai AI Laboratory 2Fudan University 3S-Lab, Nanyang Technological University
4Peking University 5Shanghai Jiao Tong University

TL;DR: Auto Cherry-Picker is an innovative training data generator for cross-modality perception and reasoning tasks. It produces scalable synthetic data that aligns with real-world distributions while maintaining high quality.

Abstract

Diffusion models can generate realistic and diverse images, potentially improving data availability for data-intensive perception tasks. However, leveraging these models to boost performance on downstream tasks with synthetic data poses several challenges, including aligning with the real data distribution, scaling synthetic sample volumes, and ensuring their quality. To bridge these gaps, we present Auto Cherry-Picker (ACP), a novel framework that generates high-quality cross-modality training samples at scale to augment perception and multi-modal training. ACP first uses LLMs to sample descriptions and layouts based on object combinations drawn from real data priors, eliminating the need for ground-truth image captions or annotations. Next, we use an off-the-shelf controllable diffusion model to generate multiple candidate images. The generated data are then refined using a comprehensively designed metric, the Composite Layout and Image Score (CLIS), to ensure quality. Our customized synthetic high-quality samples boost performance in various scenarios, especially in addressing challenges associated with long-tailed distributions and imbalanced datasets. Experimental results on downstream tasks demonstrate that ACP can significantly improve the performance of existing models. In addition, we find a positive correlation between CLIS and performance gains on downstream tasks. This finding highlights the potential of evaluation metrics to serve as quality indicators for a wide range of visual perception and MLLM tasks.

ACP Framework

Illustration of the Auto Cherry-Picker pipeline. It contains (a) a raw data generator and (b) a data filter using CLIS. Conditioned on an input object combination sampled from data priors, the Scene Graph Generator produces detailed attributes, relations, captions, and corresponding layouts. The Image Generator then produces images based on the scene graph. These raw layouts and images are refined through filters using CLIS-L and CLIS-I, respectively, to produce high-quality training data.
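The two-stage filtering step can be summarized as: score each candidate layout with CLIS-L and each rendered image with CLIS-I, drop candidates below a threshold at each stage, and keep the best survivors. The sketch below illustrates this selection logic only; the `Sample` container, the threshold values, and the combined ranking score are illustrative assumptions, not the paper's actual scoring implementation.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Sample:
    """One synthetic candidate with its two quality scores.

    layout_score stands in for CLIS-L (layout quality) and
    image_score for CLIS-I (image quality); both hypothetical here.
    """
    layout_score: float
    image_score: float


def cherry_pick(samples: List[Sample],
                layout_thresh: float,
                image_thresh: float,
                top_k: int) -> List[Sample]:
    # Stage 1: discard candidates whose layout falls below the CLIS-L cutoff.
    kept = [s for s in samples if s.layout_score >= layout_thresh]
    # Stage 2: discard surviving candidates with low CLIS-I image scores.
    kept = [s for s in kept if s.image_score >= image_thresh]
    # Rank remaining candidates by a simple combined score (an assumption)
    # and keep the top_k as training data.
    kept.sort(key=lambda s: s.layout_score + s.image_score, reverse=True)
    return kept[:top_k]


if __name__ == "__main__":
    candidates = [Sample(0.9, 0.8), Sample(0.3, 0.9),
                  Sample(0.85, 0.2), Sample(0.7, 0.7)]
    picked = cherry_pick(candidates, layout_thresh=0.5,
                         image_thresh=0.5, top_k=2)
    print(picked)  # the two candidates passing both thresholds
```

The key design point is that layout filtering happens before image filtering, so poor layouts are rejected cheaply before image quality is assessed.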

Efficacy of CLIS

Each pair of generation results is based on the same input object combinations and synthetic descriptions.

Comparison of generation results with and without CLIS.

Consistent with human judgement.

Correlation between CLIS and performance gains on downstream tasks.

Each pair of synthetic samples is generated from the same input object list; our CLIS metric favors the right-hand sample in each pair.

Comparison of CLIS-I with other prevalent metrics.

Comparison of CLIS-L and the HRS metric.

ACP Performance

Data scaling on LVIS.

Visualization of synthetic samples under different scenarios.

BibTeX

@article{chen2024autocherrypicker,
  title={Auto Cherry-Picker: Learning from High-quality Generative Data Driven by Language},
  author={Chen, Yicheng and Li, Xiangtai and Li, Yining and Zeng, Yanhong and Wu, Jianzong and Zhao, Xiangyu and Chen, Kai},
  journal={arXiv preprint arXiv:2406.20085},
  year={2024},
}