We tackle the common challenge of inter-concept visual confusion in compositional concept generation with text-guided diffusion models (TGDMs). The challenge becomes even more pronounced when generating customized concepts, due to the scarcity of user-provided visual examples of each concept. By revisiting the two major stages behind the success of TGDMs: 1) contrastive image-language pre-training (CLIP), which yields a text encoder that encodes visual semantics, and 2) training the TGDM, which decodes the textual embeddings into pixels, we point out that existing customized generation methods focus only on fine-tuning the second stage while overlooking the first. To this end, we propose a simple yet effective solution called CLIF: contrastive image-language fine-tuning. Specifically, given a few samples of customized concepts, we obtain non-confusing textual embeddings of a concept by fine-tuning CLIP, contrasting each concept against the over-segmented visual regions of the other concepts. Experimental results demonstrate the effectiveness of CLIF in preventing confusion in multi-concept customized generation.
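To make the fine-tuning objective concrete, the following is a minimal sketch of a CLIP-style contrastive loss in which each concept prompt is pulled toward its own visual region and pushed away from the over-segmented regions of other concepts in the batch. The `text_encoder` and `image_encoder` arguments stand in for the two towers of a CLIP model and are assumptions for illustration, not the authors' released code.

```python
# Hypothetical sketch of the CLIF contrastive objective (InfoNCE over a batch
# of concepts), assuming pre-tokenized prompts and pre-cropped region images.
import torch
import torch.nn.functional as F

def clif_loss(text_encoder, image_encoder, concept_prompts, region_crops,
              temperature=0.07):
    """concept_prompts: tokenized prompts, one per concept          [N, L]
       region_crops:    the matching segmented region per concept   [N, 3, H, W]
       Regions belonging to other concepts in the batch act as negatives."""
    t = F.normalize(text_encoder(concept_prompts), dim=-1)   # [N, D]
    v = F.normalize(image_encoder(region_crops), dim=-1)     # [N, D]
    logits = t @ v.t() / temperature                          # [N, N] similarity
    targets = torch.arange(len(t), device=logits.device)      # diagonal = positives
    # Symmetric text-to-image and image-to-text terms, as in CLIP pre-training.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```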
Concept Bank. We curate a dataset of $18$ representative characters: $9$ real-world, $4$ 3D-animated, and $5$ 2D-animated. Each character possesses a unique visual appearance that must be preserved in customized generation.
Pipeline of training data curation. We mix customized concepts and common concepts at both the instance level and the segmentation level, which helps decouple the multi-concept token embeddings and thereby eliminates confusion.
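As a rough illustration of the instance-level mixing step, the sketch below composites segmented instances of two concepts onto one canvas and builds a combined prompt, so the two concept tokens are seen together during training. The `mix_instances` helper and its arguments are hypothetical; it assumes PIL images with single-channel ("L") masks and is not the exact curation pipeline.

```python
# Hypothetical instance-level mixing: paste two masked concept instances onto
# a shared canvas and pair the result with a combined prompt.
from PIL import Image

def mix_instances(instance_a, mask_a, name_a,
                  instance_b, mask_b, name_b,
                  canvas_size=(512, 512)):
    canvas = Image.new("RGB", canvas_size, (255, 255, 255))
    half_w, full_h = canvas_size[0] // 2, canvas_size[1]
    # Place each masked instance in its own half of the canvas.
    for img, mask, x_off in ((instance_a, mask_a, 0),
                             (instance_b, mask_b, half_w)):
        patch = img.resize((half_w, full_h))
        patch_mask = mask.resize((half_w, full_h))   # "L"-mode mask assumed
        canvas.paste(patch, (x_off, 0), patch_mask)
    prompt = f"a photo of {name_a} and {name_b}"
    return canvas, prompt
```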
Figure panels: Single Concept and Multi Concepts.
Figure 1. The experimental results demonstrate that our approach generalizes well to DreamBench and successfully mitigates the confusion between general objects, such as dog1 and dog2.
Figure 2. We provide additional results for customized generation with 3-5 concepts; our method remains clearly superior to the other methods.