StyleGAN is arguably one of the most intriguing and well-studied generative models, demonstrating impressive performance in image generation, inversion, and manipulation. In this work, we explore the recent StyleGAN3 architecture, compare it to its predecessor, and investigate its unique advantages, as well as its drawbacks. In particular, we demonstrate that while StyleGAN3 can be trained on unaligned data, one can still use aligned data for training, without hindering the ability to generate unaligned imagery. Next, our analysis of the disentanglement of the different latent spaces of StyleGAN3 indicates that the commonly used W/W+ spaces are more entangled than their StyleGAN2 counterparts, underscoring the benefits of using the StyleSpace for fine-grained editing. Considering image inversion, we observe that existing encoder-based techniques struggle when trained on unaligned data. We therefore propose an encoding scheme that is trained solely on aligned data, yet can still invert unaligned images. Finally, we introduce a novel video inversion and editing workflow that leverages the capabilities of a fine-tuned StyleGAN3 generator to reduce texture sticking and expand the field of view of the edited video.
In the paper, we examine the effectiveness of various techniques for image editing with StyleGAN3. We show that methods that worked well with previous style-based generators remain compatible with StyleGAN3 for editing both synthetic and real images. However, since the latent spaces of StyleGAN3 are more entangled than those of its predecessors, using the StyleSpace for latent-based editing is especially important with the newer StyleGAN3 generator. By applying translations and rotations to the Fourier features, we are able to edit both aligned and unaligned images, even when the generator itself was trained solely on aligned data.
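Concretely, generating translated or rotated imagery amounts to writing a user-specified affine transform into the generator's Fourier-feature input layer. Below is a minimal sketch following the transform utilities found in the official StyleGAN3 codebase; the `set_pose` wrapper is our own naming:

```python
import numpy as np
import torch

def make_transform(translate, angle_deg):
    # Build a 3x3 affine matrix combining a 2D rotation and a translation
    # (translation is expressed in the input layer's coordinate units).
    s, c = np.sin(np.deg2rad(angle_deg)), np.cos(np.deg2rad(angle_deg))
    m = np.eye(3)
    m[0, 0], m[0, 1], m[0, 2] = c, s, translate[0]
    m[1, 0], m[1, 1], m[1, 2] = -s, c, translate[1]
    return m

def set_pose(G, translate=(0.0, 0.0), rotate_deg=0.0):
    # Writing the *inverse* transform into the Fourier-feature grid moves
    # the generated content in the requested direction. This works even
    # when G was trained solely on aligned data.
    m = np.linalg.inv(make_transform(translate, rotate_deg))
    with torch.no_grad():
        G.synthesis.input.transform.copy_(torch.from_numpy(m))

# Usage: after set_pose(G, translate=(0.1, -0.05), rotate_deg=15.0),
# img = G.synthesis(w) renders an unaligned version of the same latent w.
```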
We find that training an encoder to directly invert unaligned images is challenging: the encoder must capture both the input pose and the identity, making the training objective quite difficult. We therefore choose to train the encoder solely on aligned images. Having trained our encoder, at inference time we leverage the translation and rotation control offered by the Fourier features of StyleGAN3 to invert unaligned images. Yet, we find that in StyleGAN3, encoding into well-behaved latent regions is even more important than in StyleGAN2, as editability quickly decreases as we move away from the W latent space. We therefore pair the e4e encoder with the ReStyle iterative scheme to ensure that the predicted latent codes remain close to W and are therefore more editable.
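A minimal sketch of this iterative inversion is shown below. The encoder interface (a network that predicts a residual latent from the target image concatenated with the current reconstruction) follows the ReStyle formulation, but the exact signatures and shapes here are illustrative assumptions:

```python
import torch

def restyle_invert(encoder, G, x, w_avg, n_iters=5):
    # x:     target image, shape [1, 3, H, W]
    # w_avg: the generator's average latent code, shape [1, num_ws, w_dim];
    #        starting here keeps predictions close to W, which we find is
    #        crucial for editability in StyleGAN3.
    w = w_avg.clone()
    y = G.synthesis(w)  # reconstruction of the current latent estimate
    for _ in range(n_iters):
        # The encoder sees the target alongside the current reconstruction
        # (6 input channels) and predicts a residual latent update.
        delta = encoder(torch.cat([x, y], dim=1))
        w = w + delta
        y = G.synthesis(w)
    return w
```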
The equivariance property of StyleGAN3 makes it potentially more suitable for encoding and editing videos. Beyond its ability to generate high-quality unaligned images, it has been shown that, unlike previous generators, StyleGAN3 does not suffer from the texture-sticking phenomenon. In this paper, we combine our encoder with PTI to attain faithful and consistent reconstructions of a full video. Finally, we employ latent-based editing techniques to edit the video, and smooth the resulting latent codes to improve temporal consistency, as sketched below.
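To make the smoothing step concrete, here is a minimal sketch, assuming the per-frame latent codes have already been obtained with our encoder and refined with PTI; the simple moving-average window is an illustrative choice:

```python
import torch

def smooth_latents(ws, kernel_size=5):
    # ws: per-frame latent codes of shape [num_frames, num_ws, w_dim]
    # (inverted and edited). Averaging over neighboring frames suppresses
    # jitter; boundary frames are padded by repetition.
    pad = kernel_size // 2
    padded = torch.cat([ws[:1].expand(pad, -1, -1), ws,
                        ws[-1:].expand(pad, -1, -1)], dim=0)
    return torch.stack([padded[i:i + kernel_size].mean(dim=0)
                        for i in range(ws.shape[0])])
```

Below are some examples of our results, showing the original, reconstructed, and edited videos side-by-side: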
We can also train StyleGAN-NADA generators and edit videos in various styles, such as a Pixar-style cartoon or a sketch.
A significant challenge when aligning frames before editing a video (e.g., with StyleGAN2) is editing attributes that overflow outside the frame. By leveraging the translation equivariance of StyleGAN3, we are able to alleviate this challenge. Given a latent vector w, we generate the desired image along with various shifted versions of the original aligned frame. We then merge the non-overlapping regions of these images to obtain an image with an expanded field of view containing the entire region of interest (see the sketch below). This allows us to edit attributes (e.g., hair color) of a given video, even when they overflow outside the frame.
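Here is a minimal sketch of this expansion, reusing the `set_pose` helper from the earlier snippet. The shift-to-pixel mapping (one translation unit per image width) and the overwrite-based paste are illustrative simplifications of the actual merging of non-overlapping regions:

```python
import torch

def expand_fov(G, w, shifts=(-0.25, 0.25, 0.0)):
    # Render the same latent w under several horizontal shifts of the
    # Fourier-feature grid and paste the results onto a wider canvas.
    # The centered view is pasted last so it dominates overlapping areas.
    res = G.img_resolution
    canvas = torch.zeros(3, res, res * 2)
    for tx in shifts:
        set_pose(G, translate=(-tx, 0.0))  # move content opposite the shift
        img = G.synthesis(w)[0].cpu()      # [3, H, W]
        x0 = int(res // 2 + tx * res)      # column offset on the canvas
        canvas[:, :, x0:x0 + res] = img
    return canvas
```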
As an example, consider the Bruno Mars video below, where we show the original video on the left alongside the reconstruction on the right. Notice that the hair often "spills out" of the frame. In such cases, editing the entirety of Mars' hair is not possible and will lead to inconsistencies when the edited crop is pasted back into the original context.
@misc{alaluf2022times,
      title={Third Time's the Charm? Image and Video Editing with StyleGAN3},
      author={Yuval Alaluf and Or Patashnik and Zongze Wu and Asif Zamir and Eli Shechtman and Dani Lischinski and Daniel Cohen-Or},
      year={2022},
      eprint={2201.13433},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
We would like to thank William Peebles for letting us use his project page template.