AI image generators are fun to play with, but the problem with generative AI is, well, it’s generative. It absorbs content from across the internet and spits out new content influenced by what it absorbed.
If you’re an artist or photographer, you probably don’t appreciate AI “learning” from and copying your art without compensation. If you’re someone who appears in a photograph, you probably don’t want AI reimagining your likeness doing something weird.
To that end, MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) developed a new tool called “PhotoGuard” to protect images from malicious editing, per Engadget.
How it works
The smallest unit of information in an image is called a pixel. PhotoGuard changes certain pixels in a way that’s imperceptible to humans, but that throws off AI.
There are two methods:
- The Encoder attack makes it so AI can’t understand what it’s looking at.
- The Diffusion attack makes AI see an image as something else, rendering all edits unrealistic and unusable.
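To make the encoder idea concrete, here’s a minimal NumPy sketch — not PhotoGuard’s actual code, just an illustration of the general technique. A toy linear "encoder" stands in for a real model’s image encoder, and we nudge the image with tiny, bounded pixel changes (the kind humans can’t see) until the encoder maps it toward a misleading target. All names and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a model's image encoder: a fixed random linear map.
# A real encoder is a deep network, but the attack logic is the same.
W = rng.normal(size=(8, 64))

def encode(image_flat):
    return W @ image_flat

def encoder_attack(image, target_latent, eps=0.03, steps=50, lr=0.01):
    """Nudge `image` (pixel values in [0, 1]) so its encoding moves toward
    `target_latent`, while keeping each pixel change within ±eps —
    small enough to be imperceptible to a human viewer."""
    x = image.copy()
    for _ in range(steps):
        # Gradient of 0.5 * ||encode(x) - target||^2 w.r.t. x is W.T @ (Wx - t)
        grad = W.T @ (encode(x) - target_latent)
        x = x - lr * np.sign(grad)                # signed gradient step
        x = np.clip(x, image - eps, image + eps)  # stay imperceptible
        x = np.clip(x, 0.0, 1.0)                  # stay a valid image
    return x

image = rng.uniform(size=64)   # a tiny 8x8 "image", flattened
target = rng.normal(size=8)    # latent we want the encoder to "see" instead
adv = encoder_attack(image, target)
```

After the attack, `adv` looks essentially identical to `image` (no pixel moved more than `eps`), but its encoding sits closer to the decoy latent — so a model editing from that encoding works with the wrong understanding of the picture.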
Here’s a video demo.
It’s not foolproof…
… per MIT doctoral student and lead author Hadi Salman. But Salman suggested that the companies that make AI models could offer APIs to protect — or “immunize” — other people’s photos.
And that might not be a bad idea, considering the numerous lawsuits regarding models trained on the work of authors, musicians, and other creators without consent.
For example, Getty Images is suing Stability AI, alleging it copied 12M+ images without permission or pay. Yikes.
BTW: If you want to play around with PhotoGuard yourself, the code is on GitHub.