DragGAN

Precise. Flexible. Picture Perfect.


Primary Use

DragGAN is an AI tool for controlling generative adversarial networks (GANs) with greater flexibility and precision. It lets users edit visual content by adjusting the pose, shape, expression, and layout of generated objects through interactive, point-based manipulation.

The tool allows precise deformation of images across various categories, such as animals, humans, cars, and landscapes, even in challenging scenarios.

The DragGAN AI photo editor is user-friendly, offering point-based editing, planned support for 3D models, and release as an open-source tool. Users can easily edit images of animals, vehicles, people, and landscapes by adjusting layout, pose, shape, and expression.

The editing process is straightforward: users mark handle points on the image and the target positions they should move to, and the DragGAN photo editor handles the rest automatically.

DragGAN combines two components: feature-based motion supervision, which drives the handle points toward their target positions, and a new point tracking approach that uses the GAN's internal features to re-localize the handle points as the image changes.
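
To make these two ingredients concrete, below is a minimal PyTorch-style sketch of feature-based motion supervision and nearest-neighbour point tracking on a toy feature map. The function names (`motion_supervision_loss`, `sample_features`, `track_point`), the patch radii, and the random feature map are illustrative assumptions, not DragGAN's actual implementation or API.

```python
# Minimal sketch of the two ingredients described above, using a toy feature
# map in place of real GAN features. Names and hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def sample_features(feat, points):
    """Bilinearly sample C-dim features at (y, x) pixel coordinates."""
    _, _, H, W = feat.shape
    # Convert pixel coordinates to the [-1, 1] (x, y) grid expected by grid_sample.
    norm = torch.stack([points[:, 1] / (W - 1), points[:, 0] / (H - 1)], dim=1) * 2 - 1
    grid = norm.view(1, -1, 1, 2)
    out = F.grid_sample(feat, grid, align_corners=True)
    return out.view(feat.shape[1], -1).t()                # (num_points, C)

def motion_supervision_loss(feat, handle, target, radius=3):
    """Encourage features around the handle point to move one small step
    toward the target point (feature-based motion supervision)."""
    h, t = handle.float(), target.float()
    direction = (t - h) / (torch.norm(t - h) + 1e-8)       # unit step toward the target
    ys, xs = torch.meshgrid(
        torch.arange(-radius, radius + 1),
        torch.arange(-radius, radius + 1),
        indexing="ij",
    )
    patch = torch.stack([ys.flatten(), xs.flatten()], dim=1).float() + h
    shifted = patch + direction                             # same patch, shifted one step
    f_patch = sample_features(feat, patch).detach()         # stop-gradient on the source patch
    f_shift = sample_features(feat, shifted)
    return F.l1_loss(f_shift, f_patch)

def track_point(feat, ref_feat, handle, radius=5):
    """Point tracking: re-localize the handle by nearest-neighbour search in
    feature space within a small window around its previous position."""
    ys, xs = torch.meshgrid(
        torch.arange(-radius, radius + 1),
        torch.arange(-radius, radius + 1),
        indexing="ij",
    )
    cand = torch.stack([ys.flatten(), xs.flatten()], dim=1).float() + handle.float()
    f_cand = sample_features(feat, cand)
    dist = torch.norm(f_cand - ref_feat, dim=1)
    return cand[dist.argmin()]

# Toy usage: one drag iteration on a random "feature map".
feat = torch.randn(1, 64, 128, 128, requires_grad=True)    # stand-in for GAN features
handle = torch.tensor([40.0, 40.0])                         # (y, x) handle point
target = torch.tensor([80.0, 90.0])                         # (y, x) target point
ref = sample_features(feat, handle.unsqueeze(0)).detach()   # reference feature of the handle

loss = motion_supervision_loss(feat, handle, target)
loss.backward()                                             # in DragGAN this gradient updates the latent code
handle = track_point(feat.detach(), ref, handle)            # re-localize the handle after the update
```

In the actual method, the supervision gradient updates the generator's latent code rather than the feature map itself, and the tracking step keeps each handle anchored to the same semantic content as the object deforms, so the drag can be repeated until the handle reaches its target.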

The results demonstrate an advantage over previous methods in image manipulation and point tracking. Additionally, the tool supports the manipulation of real images through GAN inversion.
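
Because GAN inversion is what brings a real photograph into the generator's latent space before dragging, here is a minimal sketch of the standard latent-optimization approach, assuming a placeholder generator. DragGAN itself builds on pretrained StyleGAN2 generators, and more sophisticated inversion methods are typically used in practice.

```python
# Minimal sketch of GAN inversion by latent optimization. "ToyGenerator" is a
# placeholder; in practice a pretrained StyleGAN2 generator is used.
import torch
import torch.nn.functional as F

class ToyGenerator(torch.nn.Module):
    """Stand-in generator: maps a latent vector to a 3x64x64 image."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(latent_dim, 3 * 64 * 64),
            torch.nn.Tanh(),
        )
    def forward(self, w):
        return self.net(w).view(-1, 3, 64, 64)

G = ToyGenerator()
real_image = torch.rand(1, 3, 64, 64) * 2 - 1       # the photo to invert
w = torch.zeros(1, 128, requires_grad=True)         # latent code to optimize
optimizer = torch.optim.Adam([w], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    recon = G(w)
    loss = F.mse_loss(recon, real_image)             # pixel reconstruction loss
    loss.backward()                                  # (a perceptual loss is common in practice)
    optimizer.step()

# After inversion, w reconstructs the real image and can be edited with the
# same point-based drag optimization used for generated images.
```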

Developed by researchers from Google, the Max Planck Institute for Informatics, and MIT CSAIL, DragGAN is currently in the demo stage and supports 2D images. Future plans include releasing a version that works with 3D models as well. The tool's ability to manipulate pixels with precision and flexibility showcases the potential of GANs for synthesizing visual content across a range of applications.
