Non-confusing Generation of Customized Concepts in Diffusion Models

1 Zhejiang University     2 Huawei Cloud Computing     3 Nanyang Technological University     4 Tsinghua University     5 Harbin Institute of Technology     6 Skywork AI, Singapore

Abstract

We tackle the common challenge of inter-concept visual confusion in compositional concept generation using text-guided diffusion models (TGDMs). The problem becomes even more pronounced in the generation of customized concepts, due to the scarcity of user-provided visual examples. By revisiting the two major stages behind the success of TGDMs, namely 1) contrastive image-language pre-training (CLIP) for a text encoder that encodes visual semantics, and 2) training a TGDM that decodes the textual embeddings into pixels, we point out that existing customized generation methods only fine-tune the second stage while overlooking the first. To this end, we propose a simple yet effective solution called CLIF: contrastive image-language fine-tuning. Specifically, given a few samples of customized concepts, we obtain non-confusing textual embeddings by fine-tuning CLIP to contrast each concept against the over-segmented visual regions of other concepts. Experimental results demonstrate the effectiveness of CLIF in preventing confusion in multi-customized concept generation.
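To make the fine-tuning objective concrete, below is a minimal sketch of CLIF-style contrastive fine-tuning, assuming the Hugging Face `transformers` CLIP API. The batching scheme and names such as `clif_step` and `region_images` are illustrative assumptions, not the released implementation: each prompt names one concept, its paired image is a segmented region of that concept, and the regions of the other concepts in the batch serve as negatives.

```python
# Sketch of CLIF: fine-tune the CLIP text encoder so a customized concept's
# embedding is pulled toward its own image region and pushed away from
# over-segmented regions of other concepts. Names are illustrative.
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Freeze the vision tower; only the text side is fine-tuned.
for p in model.vision_model.parameters():
    p.requires_grad = False

optimizer = torch.optim.AdamW(model.text_model.parameters(), lr=1e-6)

def clif_step(prompts, region_images, temperature=0.07):
    """One contrastive step: prompts[i] names concept i, region_images[i] is a
    segmented region of concept i, so off-diagonal pairs act as negatives."""
    inputs = processor(text=prompts, images=region_images,
                       return_tensors="pt", padding=True)
    out = model(**inputs)
    text_emb = F.normalize(out.text_embeds, dim=-1)
    image_emb = F.normalize(out.image_embeds, dim=-1)
    logits = text_emb @ image_emb.t() / temperature
    labels = torch.arange(len(prompts))
    # Symmetric InfoNCE, as in CLIP pre-training.
    loss = (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```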

Training Data

Concept Bank. We curate a dataset consisting of $18$ representative characters, including 9 real-world, 4 3D-animated, and 5 2D-animated ones. Each possesses a unique visual appearance that must be preserved in the customized generation.
Pipeline of training data curation. We mix the customized concepts and common concepts at the instance level and the segmentation level, which helps decouple multi-concept token embeddings and eliminates confusion. A rough illustration of the segmentation-level mixing follows below.
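The hypothetical helper below composites pre-segmented concept cut-outs onto one canvas and joins their captions, producing a multi-concept training pair; all paths and names are placeholders rather than the actual curation code.

```python
# Hypothetical segmentation-level mixing: paste RGBA cut-outs of several
# concepts onto one canvas so a single training image (and its composite
# caption) contains multiple concepts side by side.
import random
from PIL import Image

def composite_concepts(concept_rgba_paths, captions, canvas_size=(512, 512)):
    """Paste pre-segmented RGBA cut-outs into horizontal slots on a blank
    canvas and join their captions into one multi-concept training pair."""
    canvas = Image.new("RGB", canvas_size, (255, 255, 255))
    slot_w = canvas_size[0] // len(concept_rgba_paths)
    for i, path in enumerate(concept_rgba_paths):
        cutout = Image.open(path).convert("RGBA")
        scale = slot_w / cutout.width
        cutout = cutout.resize((slot_w, int(cutout.height * scale)))
        y = random.randint(0, max(0, canvas_size[1] - cutout.height))
        canvas.paste(cutout, (i * slot_w, y), mask=cutout)  # alpha-composite
    return canvas, " and ".join(captions)

# Usage (placeholder files and tokens):
# image, caption = composite_concepts(
#     ["cat_seg.png", "robot_seg.png"], ["a <cat> cat", "a <robot> robot"])
```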

Method

Our two-stage framework for multi-concept learning. We first fine-tune the text encoder to obtain contrastive concept embeddings, and then fine-tune the text-to-image decoder to synthesize non-confusing images.
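Under common `diffusers` conventions, the second stage could look like the sketch below: the stage-1 (CLIF) text encoder is kept frozen, and only the UNet decoder is fine-tuned on the curated multi-concept data with the standard denoising objective. The base model ID and function names are assumptions for illustration, not the authors' training script.

```python
# Sketch of stage 2: fine-tune the text-to-image decoder (UNet) with the
# CLIF-tuned text encoder frozen, using the standard epsilon-prediction loss.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, UNet2DConditionModel, DDPMScheduler
from transformers import CLIPTextModel, CLIPTokenizer

base = "runwayml/stable-diffusion-v1-5"  # assumed base model
vae = AutoencoderKL.from_pretrained(base, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(base, subfolder="unet")
scheduler = DDPMScheduler.from_pretrained(base, subfolder="scheduler")
tokenizer = CLIPTokenizer.from_pretrained(base, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(base, subfolder="text_encoder")
# In practice, load the stage-1 (CLIF) fine-tuned text-encoder weights here.

vae.requires_grad_(False)
text_encoder.requires_grad_(False)  # frozen after stage 1
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)

def decoder_step(pixel_values, captions):
    """One denoising step on a batch of curated multi-concept images."""
    tokens = tokenizer(captions, padding="max_length",
                       max_length=tokenizer.model_max_length,
                       truncation=True, return_tensors="pt")
    cond = text_encoder(tokens.input_ids)[0]  # frozen concept embeddings
    latents = (vae.encode(pixel_values).latent_dist.sample()
               * vae.config.scaling_factor)
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (latents.shape[0],), device=latents.device)
    noisy = scheduler.add_noise(latents, noise, t)
    pred = unet(noisy, t, encoder_hidden_states=cond).sample
    loss = F.mse_loss(pred, noise)  # standard denoising objective
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```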

Qualitative Results

Single Concept
Multiple Concepts

More Results

Figure 1. The experimental results demonstrate that our approach generalizes well to DreamBench and successfully mitigates confusion between general objects, such as dog1 and dog2.
Figure 2. Additional results for the customized generation of 3-5 concepts; our method remains clearly superior to the other methods.