With techniques like watermarking only partially preventing unauthorised image manipulation, MIT researchers have built a tool called "PhotoGuard" that stops AI tools from manipulating images without permission and helps protect artists' work.

Blocking Unauthorised Image Manipulation

With Generative AI creating endless possibilities, researchers and artists worry that their work is being taken for granted. The concern is sharpest for designers, whose work is exploited by AI with nothing given in return.

Tools like DALL-E and Stable Diffusion can manipulate an image almost however a user asks, even if some shortcomings remain. With every update making them more capable, artists of all kinds are looking for ways to stop unwanted edits, and MIT's new tool aims to do exactly that while giving creators the protection they deserve.

Named PhotoGuard, the MIT CSAIL tool stops the unauthorised manipulation of images in multiple ways. Each works by altering select pixels in an image to disrupt the AI's ability to understand what the image shows. Researchers call these changes "perturbations", and they are invisible to the human eye.
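For illustration only, the short sketch below shows what such a perturbation amounts to numerically: a tiny, bounded offset added to pixel values, far too small for a person to notice. The array names and the 4/255 budget are illustrative assumptions, not figures taken from the paper.

```python
import numpy as np

# Hypothetical illustration: a "perturbation" is just a tiny offset added to
# pixel values, kept so small (here at most 4/255 per channel) that the
# protected image looks identical to a human viewer.
rng = np.random.default_rng(0)
image = rng.random((256, 256, 3)).astype(np.float32)   # stand-in for a photo, values in [0, 1]

epsilon = 4 / 255                                       # maximum change allowed per pixel value
perturbation = rng.uniform(-epsilon, epsilon, image.shape).astype(np.float32)

immunized = np.clip(image + perturbation, 0.0, 1.0)     # still a valid image
print(float(np.abs(immunized - image).max()))           # never exceeds epsilon
```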

Next up is an "encoder" attack, in which the image to be protected is adjusted algorithmically so that the AI's image encoder misreads it. The adjustments, worked out mathematically for the colour values of individual pixels, make the model effectively see a different picture even though the photo looks unchanged to people.
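As a rough, hedged sketch of how an encoder-style attack could be set up (not the authors' exact code), the loop below uses projected gradient descent to nudge the pixels so that a stand-in encoder maps the photo to a decoy latent representation. The function name, parameters, and loss choice are illustrative assumptions; in practice the encoder would be the generative model's own image encoder.

```python
import torch

# Hedged sketch of an encoder-style attack: projected gradient descent (PGD)
# nudges the pixels so the model's encoder maps the photo to a meaningless
# decoy latent. `encoder` is a stand-in nn.Module here, not a specific model.
def encoder_attack(image, encoder, target_latent, epsilon=8 / 255, step=1 / 255, iters=100):
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        latent = encoder(torch.clamp(image + delta, 0, 1))
        loss = torch.nn.functional.mse_loss(latent, target_latent)  # pull the latent toward the decoy
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()    # gradient step that reduces the loss
            delta.clamp_(-epsilon, epsilon)      # keep the change imperceptible
        delta.grad.zero_()
    return torch.clamp(image + delta, 0, 1).detach()
```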

And lastly, the more advanced "diffusion" attack can camouflage an image as a different image in the eyes of the AI. The tool steers the AI towards a decoy target image, so that any requested changes are effectively applied to that decoy rather than the original, and the generated result comes out looking unrealistic.
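A similarly hedged sketch of this diffusion-style variant is shown below. Here edit_pipeline is a hypothetical differentiable wrapper around the full text-guided editing process (encode, denoise with a prompt, decode); having to backpropagate through that whole pipeline is what makes this variant far more expensive than the encoder attack.

```python
import torch

# Hedged sketch of a diffusion-style attack: the perturbation is optimised so
# that whatever edit the AI performs collapses toward a decoy image.
# `edit_pipeline` is hypothetical and stands in for the full editing process.
def diffusion_attack(image, edit_pipeline, prompt, decoy, epsilon=8 / 255, step=1 / 255, iters=50):
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        edited = edit_pipeline(torch.clamp(image + delta, 0, 1), prompt)
        loss = torch.nn.functional.mse_loss(edited, decoy)  # make the edited output resemble the decoy
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)                  # perturbation stays invisible to the eye
        delta.grad.zero_()
    return torch.clamp(image + delta, 0, 1).detach()
```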

Talking about the new tool, MIT doctoral student and lead author of the paper, Hadi Salman, said:

“A collaborative approach involving model developers, social media platforms, and policymakers presents a robust defense against unauthorized image manipulation.”