NAIROBI, Kenya, Oct 30 - A University of Chicago professor has developed a tool that can stop artificial intelligence models from exploiting artists' images.

The tool, Nightshade, nicknamed the "poison pill," was created by university professor Ben Zhao. It is said to trick an AI model during its training phase into cataloging an image as something other than what it is, causing the model to generate useless results.

“It works at training time and destabilizes it [the model] for good. Of course, the model trainers can just revert to an older model, but it does make it challenging for them to build new models,” Zhao said.

Nightshade will let artists who want to protect their work from AI models add pixels to their art that confuse image-generating models that copy it. The tool will also give creators a way to penalize AI developers who use their work without permission, without resorting to lawsuits.

“The tool can undermine how AI models categorize specific images. The effect is for animals to be labeled as plants, or buildings as flowers, and for these errors to create further problems in the model’s general features,” stated Nightshade developers.
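The idea described above can be illustrated in code. The sketch below is a toy example of adding a small, bounded pixel perturbation to an image; Nightshade's actual perturbations are optimized against a model's feature space rather than random, and the function and parameter names here are hypothetical.

```python
import numpy as np

def add_perturbation(image, epsilon=4, seed=0):
    """Add a small, visually subtle perturbation to an 8-bit image array.

    Toy illustration only: Nightshade computes targeted perturbations
    against specific model features, not random noise as done here.
    Each pixel value changes by at most `epsilon` out of 255.
    """
    rng = np.random.default_rng(seed)
    noise = rng.integers(-epsilon, epsilon + 1, size=image.shape)
    # Clip so the result stays a valid 8-bit image.
    poisoned = np.clip(image.astype(int) + noise, 0, 255).astype(np.uint8)
    return poisoned

# Example: a synthetic 64x64 RGB "artwork" of uniform gray.
art = np.full((64, 64, 3), 128, dtype=np.uint8)
cloaked = add_perturbation(art)

# The per-pixel change is bounded, so the image looks unchanged to a
# person, while the pixel data a scraper ingests is no longer the original.
max_diff = int(np.abs(cloaked.astype(int) - art.astype(int)).max())
```

Because the perturbation budget is tiny relative to the 0–255 pixel range, the cloaked copy is visually indistinguishable from the original, which is what lets such tools protect art without degrading it for human viewers.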

Zhao has described the tool as a last defense for content creators against web scrapers that ignore opt-out/do-not-crawl directives.

The tool's development follows a number of earlier attempts to protect images, such as watermarks, including ones invisible to the eye, but these have proven ineffective since they are easily broken, as with the cloaking offered by Glaze. "We don't have a reliable watermarking at this point," stated University of Maryland computer science professor Soheil Feizi.
