Third Time's the Charm?

Image and Video Editing with StyleGAN3



1Tel-Aviv University

2Hebrew University of Jerusalem

3Adobe Research


* Denotes Equal Contribution



Advances in Image Manipulation Workshop - ECCV 2022



Abstract

StyleGAN is arguably one of the most intriguing and well-studied generative models, demonstrating impressive performance in image generation, inversion, and manipulation. In this work, we explore the recent StyleGAN3 architecture, compare it to its predecessor, and investigate its unique advantages, as well as its drawbacks. In particular, we demonstrate that while StyleGAN3 can be trained on unaligned data, one can still use aligned data for training, without hindering the ability to generate unaligned imagery. Next, our analysis of the disentanglement of the different latent spaces of StyleGAN3 indicates that the commonly used W/W+ spaces are more entangled than their StyleGAN2 counterparts, underscoring the benefits of using the StyleSpace for fine-grained editing. Considering image inversion, we observe that existing encoder-based techniques struggle when trained on unaligned data. We therefore propose an encoding scheme trained solely on aligned data that can nonetheless invert unaligned images. Finally, we introduce a novel video inversion and editing workflow that leverages the capabilities of a fine-tuned StyleGAN3 generator to reduce texture sticking and expand the field of view of the edited video.

Image Editing

In the paper, we examine the effectiveness of various techniques for image editing with StyleGAN3. We show that methods that worked well with previous Style-based generators can also be used to edit both synthetic and real images with StyleGAN3. However, since the latent spaces of StyleGAN3 are more entangled than those of its predecessors, we demonstrate that using the StyleSpace for latent-based editing is especially important with the newer StyleGAN3 generator. By applying translations and rotations to the Fourier features, we are able to edit both aligned and unaligned images, even when the generator itself was trained solely on aligned data.
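To make the Fourier-feature control concrete, here is a minimal sketch following the official NVlabs/stylegan3 codebase, where the synthesis network's input layer exposes a user-controlled `transform` buffer expecting an inverse transformation matrix. The variables `G` and `w` are assumed to be a pre-trained generator and a latent code from that repository:

```python
import numpy as np
import torch

def make_transform(tx, ty, angle_deg):
    """3x3 rotation + translation matrix; the input layer expects its inverse."""
    s = np.sin(angle_deg / 360.0 * np.pi * 2)
    c = np.cos(angle_deg / 360.0 * np.pi * 2)
    m = np.array([[c,  s, tx],
                  [-s, c, ty],
                  [0., 0., 1.]])
    return np.linalg.inv(m)

# G: pre-trained StyleGAN3 generator; w: latent code of shape [1, G.num_ws, 512].
# Translating/rotating the Fourier features lets a generator trained on aligned
# data emit unaligned imagery for the same latent code.
m = make_transform(tx=0.1, ty=0.0, angle_deg=15.0)  # shift + 15 degree rotation
G.synthesis.input.transform.copy_(torch.from_numpy(m))
img = G.synthesis(w)  # [1, 3, H, W], values in [-1, 1]
```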

In the above videos, we demonstrate edits on real images obtained using InterFaceGAN and StyleCLIP, alongside the original inputs shown on the left. In the first row, we show edits of smile, black hair, red lipstick, and age, while in the second row we show edits of smile, blonde hair, gender, and age.
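As a rough illustration, an InterFaceGAN-style edit amounts to a linear step along a learned semantic boundary. The sketch below assumes hypothetical inputs: `w` is an inverted code of shape [1, num_ws, 512] and `smile_direction` is a unit-norm boundary of shape [512]:

```python
import torch

def edit_latent(w: torch.Tensor, direction: torch.Tensor, strength: float) -> torch.Tensor:
    """Take a linear step along a semantic direction, applied to all layers."""
    return w + strength * direction.view(1, 1, -1)

# Hypothetical usage: `smile_direction` would come from InterFaceGAN's linear
# classifier trained on labeled latents; the strength controls edit intensity.
w_smile = edit_latent(w, smile_direction, strength=2.0)
img_smile = G.synthesis(w_smile)  # G as in the sketch above
```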

Inverting StyleGAN3 With an Encoder

We find that training an encoder to directly invert unaligned images is challenging, as the encoder must capture both the input pose and identity, making the training objective quite difficult. We therefore choose to train the encoder solely on aligned images. Having trained our encoder, at inference time we leverage the translation and rotation control offered by the Fourier features of StyleGAN3 to invert unaligned images. Yet, we find that in StyleGAN3, encoding into well-behaved regions is even more important than in StyleGAN2, as editability quickly decreases as we move away from the W latent space. We therefore pair the e4e encoder with the ReStyle iterative scheme to ensure that the predicted latent codes remain close to W and are therefore more editable.
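A minimal sketch of such a ReStyle-style refinement loop is shown below. The encoder interface, a 6-channel input (image concatenated with the current reconstruction) predicting per-layer latent residuals, is an assumption for illustration:

```python
import torch

def restyle_invert(encoder, G, x, n_iters=5):
    """Iteratively refine the inversion of an aligned image x.
    Starting from the generator's average latent, the encoder repeatedly sees
    the input concatenated with the current reconstruction and predicts a
    residual over the current code, keeping predictions close to W."""
    w = G.mapping.w_avg.view(1, 1, -1).repeat(1, G.num_ws, 1)  # average latent
    y = G.synthesis(w)                                          # initial reconstruction
    for _ in range(n_iters):
        delta = encoder(torch.cat([x, y], dim=1))  # [1, num_ws, 512] residual
        w = w + delta
        y = G.synthesis(w)
    return w, y
```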

Here we show reconstructions of real unaligned images alongside the original inputs shown on the left.

Inverting and Editing Videos

The equivariance property of StyleGAN3 makes it potentially more suitable for encoding and editing videos. Beyond its ability to generate high-quality unaligned images, it has been shown that, unlike previous generators, StyleGAN3 does not suffer from the texture-sticking phenomenon. In this paper, we combine our encoder with PTI to attain faithful and consistent reconstructions of a full video. Finally, we employ latent-based editing techniques to edit the video, and smooth the resulting latent codes to improve temporal consistency.
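The exact smoothing operator is not pinned down here; a simple moving average over the per-frame latents, as sketched below, conveys the idea:

```python
import torch

def smooth_latents(ws: torch.Tensor, kernel_size: int = 5) -> torch.Tensor:
    """Temporally smooth per-frame latent codes with a moving average
    (odd kernel assumed). ws: per-frame inversion results, [num_frames, num_ws, 512]."""
    pad = kernel_size // 2
    padded = torch.cat([ws[:1].expand(pad, -1, -1), ws,
                        ws[-1:].expand(pad, -1, -1)], dim=0)  # replicate endpoints
    return torch.stack([padded[i:i + kernel_size].mean(dim=0)
                        for i in range(ws.shape[0])])
```

Below are some examples of our results, showing the original, reconstructed, and edited videos side-by-side: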

We can also fine-tune the generator with StyleGAN-NADA and edit videos in various styles, such as a Pixar-style cartoon or a sketch.

Expanding Field-of-View

A significant challenge when aligning frames before editing a video (e.g., with StyleGAN2) is editing attributes that overflow outside the frame. By employing the translation equivariance of StyleGAN3, we are able to alleviate this challenge. Given a latent vector w, we generate the desired image, as well as several shifted versions of the original aligned frame. We then merge the non-overlapping regions of these images to obtain an image with an expanded field-of-view containing the entire region of interest. This allows us to edit attributes (e.g., hair color) of a given video, even when they overflow outside the frame.
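A minimal sketch of this stitching is shown below for the horizontal case (the vertical case is analogous). The translation sign convention follows our reading of the official repository and is an assumption; the `set_translation` helper is hypothetical:

```python
import numpy as np
import torch

def set_translation(G, dx: float, dy: float = 0.0):
    """Translate the Fourier features by (dx, dy), given as frame fractions."""
    m = np.eye(3)
    m[0, 2], m[1, 2] = dx, dy
    G.synthesis.input.transform.copy_(torch.from_numpy(np.linalg.inv(m)))

def expand_fov(G, w, shift: float = 0.25) -> torch.Tensor:
    """Render shifted views of the same latent and stitch the newly revealed
    strips onto a wider canvas."""
    set_translation(G, 0.0)
    center = G.synthesis(w)
    set_translation(G, +shift)
    left = G.synthesis(w)    # content moves right -> reveals region left of frame
    set_translation(G, -shift)
    right = G.synthesis(w)   # content moves left -> reveals region right of frame
    H, W = center.shape[2], center.shape[3]
    px = int(shift * W)
    canvas = torch.zeros(1, 3, H, W + 2 * px, device=center.device, dtype=center.dtype)
    canvas[..., px:px + W] = center
    canvas[..., :px] = left[..., :px]           # strip left of the original frame
    canvas[..., px + W:] = right[..., W - px:]  # strip right of the original frame
    return canvas
```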

As an example, consider the video of Bruno Mars below, where we show the original video on the left alongside the reconstruction on the right. Notice that, at times, the hair "spills out" of the frame. In such a case, editing the entirety of Mars' hair is not possible and will lead to inconsistencies when the edited frames are pasted back into the original context.

By using our expansion approach, we can reconstruct a wider field of view containing the entire head across the entire video (as shown on the left). We can then edit Mars' hair color consistently and in its entirety, as shown in our result in the middle.
In contrast, on the right, we show the result obtained when we do not use the expansion trick. Notice that the hair region that was missing from the original video remains unedited, resulting in a non-uniform edit.



BibTeX

@misc{alaluf2022times,
      title={Third Time's the Charm? Image and Video Editing with StyleGAN3},
      author={Yuval Alaluf and Or Patashnik and Zongze Wu and Asif Zamir and Eli Shechtman and Dani Lischinski and Daniel Cohen-Or},
      year={2022},
      eprint={2201.13433},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgements

We would like to thank William Peebles for letting us use his project page template.