Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models

Tel Aviv University
SIGGRAPH 2023
* Denotes equal contribution

Given a pre-trained text-to-image diffusion model, our method modifies the cross-attention values during image synthesis, guiding the generative model to attend to all subjects in the text. Stable Diffusion alone (top row) struggles to generate multiple objects (e.g., a horse and a dog). However, by incorporating Attend-and-Excite (bottom row) to strengthen the subject tokens (marked in blue), we achieve images that are more semantically faithful with respect to the input text prompts.

Abstract

Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt. While revolutionary, current state-of-the-art diffusion models may still fail in generating images that fully convey the semantics in the given text prompt. We analyze the publicly available Stable Diffusion model and assess the existence of catastrophic neglect, where the model fails to generate one or more of the subjects from the input prompt. Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their corresponding subjects. To help mitigate these failure cases, we introduce the concept of Generative Semantic Nursing (GSN), where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. Using an attention-based formulation of GSN, dubbed Attend-and-Excite, we guide the model to refine the cross-attention units to attend to all subject tokens in the text prompt and strengthen — or excite — their activations, encouraging the model to generate all subjects described in the text prompt. We compare our approach to alternative approaches and demonstrate that it conveys the desired concepts more faithfully across a range of text prompts.

Video

Examples of Text-to-Image Generation with Attend-and-Excite

How does it work?


We observe two semantic issues in existing text-to-image models. First, catastrophic neglect, where one or more of the subjects are not generated by the model, such as the cat in the example on the left. Second, incorrect attribute binding, where an attribute such as a color is bound to the wrong subject. In the example on the right, the bench is incorrectly colored yellow instead of brown.


Text conditioning in Stable Diffusion is performed via the cross-attention mechanism. As illustrated in the second row, the attention matrix can be reshaped to obtain a spatial map per text token. Intuitively, for a token to be manifested in the generated image, at least one patch in its map should have a high activation value. Attend-and-Excite embodies this intuition by shifting the latent code so that the cross-attention maps are encouraged to attend to all subject tokens in the text.
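The per-token maps come directly from the cross-attention probabilities of a 16x16-resolution layer of the UNet. The sketch below illustrates this reshaping step; the tensor shape and function name are illustrative assumptions, not the exact API of the released implementation.

```python
import torch

def per_token_attention_maps(attn_probs: torch.Tensor, res: int = 16) -> torch.Tensor:
    """Turn cross-attention probabilities into one spatial map per text token.

    attn_probs: (num_heads, res*res, num_tokens), softmax-normalized over the text
                tokens, taken from a 16x16-resolution cross-attention layer
                (assumed shape, for illustration).
    Returns:    (num_tokens, res, res) head-averaged spatial maps.
    """
    attn = attn_probs.mean(dim=0)          # average over attention heads
    attn = attn.reshape(res, res, -1)      # recover the spatial layout of the patches
    return attn.permute(2, 0, 1)           # one (res, res) map per token
```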
Given a prompt (e.g., "A lion with a crown"), we extract the subject tokens (lion, crown) and their corresponding attention maps (second row). We apply a Gaussian kernel to each attention map to obtain a smoothed map that takes neighboring patches into account. Our optimization enhances the maximal activation of the most neglected subject token at timestep t and updates the latent code accordingly (row 3). In other words, we shift the latent at the current timestep such that, for the updated latent, the attention map of the most neglected token receives a higher maximal activation.
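A compact sketch of this loss and latent update is given below, building on the per-token maps from the previous sketch. The function names, smoothing kernel size, and step size are illustrative assumptions rather than the paper's exact hyperparameters.

```python
import torch
from torchvision.transforms.functional import gaussian_blur

def attend_and_excite_loss(token_maps: torch.Tensor, subject_indices: list[int]) -> torch.Tensor:
    """Loss sketch: 1 - (max activation of the most neglected subject token),
    computed on Gaussian-smoothed attention maps.

    token_maps: (num_tokens, res, res) spatial cross-attention maps.
    """
    losses = []
    for idx in subject_indices:
        # Smooth the map so the maximum reflects a neighborhood, not a single patch.
        smoothed = gaussian_blur(token_maps[idx][None, None], kernel_size=3, sigma=0.5)
        losses.append(1.0 - smoothed.max())
    # Focus the update on the most neglected token (lowest maximal activation).
    return torch.stack(losses).max()

def update_latent(latent: torch.Tensor, loss: torch.Tensor, step_size: float = 20.0) -> torch.Tensor:
    """Shift the current latent z_t along the negative gradient of the loss."""
    grad = torch.autograd.grad(loss, [latent])[0]
    return latent - step_size * grad
```

For the gradient step to be meaningful, the latent must require gradients and the attention maps must be computed from it within the same autograd graph. The paper additionally applies iterative refinement at selected timesteps, repeating the update until each subject token reaches a minimum attention value.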

Using Cross-Attention as Explanation

Uncurated Results


Uncurated results with the same 8 random seeds comparing Stable Diffusion and Attend-and-Excite.