JVTech News Researchers have found a way to poison AI and hold it accountable for its actions
Scientists have managed to poison images so that artificial intelligence cannot exploit them: where we see one image, the AI sees something entirely different.
Poisoning in the name of copyright
A study published on arXiv highlights a surprising development: instead of simply exploiting the capabilities of artificial intelligence, researchers have developed an innovative method to poison it.
The discovery rests on the ingenious use of what the researchers call "digital poison." Rather than directly disrupting the operation of AI systems, this method alters their training data. Using algorithms, the researchers were able to inject corrupted samples, leading the AI to make flawed decisions.
This approach raises fundamental questions about the reliability of AI systems and the need for accountability mechanisms. By demonstrating that artificial intelligences can be swayed by external influences, the researchers hope to encourage deeper reflection on how these technologies are designed and deployed.
The study presents images that the human eye perceives as normal but that the AI interprets very differently: a dog becomes a cat, a car becomes a cow, and a cubist painting becomes a simple drawing.
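The article does not detail the researchers' actual algorithm. As a rough illustration of the general idea of training-data poisoning, here is a toy sketch: images are reduced to feature vectors, and a small, consistent perturbation is applied to one class before training, so a model trained on the poisoned data mislabels clean inputs. All class names, numbers, and the nearest-centroid "model" are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Toy illustration of training-data poisoning (NOT the paper's actual method).
# "Images" are 8-dimensional feature vectors: dogs cluster near +1, cats near -1.
rng = np.random.default_rng(0)
dogs = rng.normal(loc=1.0, scale=0.3, size=(50, 8))
cats = rng.normal(loc=-1.0, scale=0.3, size=(50, 8))

def poison(samples, shift=2.5):
    """Apply a small, consistent perturbation that pushes 'dog' features
    into 'cat' territory while the labels stay 'dog'."""
    return samples - shift

# The poisoned dogs keep their "dog" label, but their features now sit
# beyond the cat cluster, corrupting whatever the model learns.
poisoned_dogs = poison(dogs)

def centroid_classifier(train_dogs, train_cats):
    """Nearest-centroid model: predicts 'dog' or 'cat' by distance."""
    c_dog = train_dogs.mean(axis=0)
    c_cat = train_cats.mean(axis=0)
    def predict(x):
        d_dog = np.linalg.norm(x - c_dog)
        d_cat = np.linalg.norm(x - c_cat)
        return "dog" if d_dog < d_cat else "cat"
    return predict

clean_model = centroid_classifier(dogs, cats)
poisoned_model = centroid_classifier(poisoned_dogs, cats)

# A fresh, unperturbed dog image that a human would label correctly.
test_dog = rng.normal(loc=1.0, scale=0.3, size=8)
print(clean_model(test_dog))     # the clean model labels it "dog"
print(poisoned_model(test_dog))  # the poisoned model labels it "cat"
```

The key point the sketch captures is that the poison targets the training set, not the deployed model: a human inspecting the poisoned samples one by one would see nothing obviously wrong, yet the model trained on them systematically misclassifies clean inputs.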
A story of ethics above all
The implications of this discovery transcend technical and scientific domains to encompass ethical considerations. If AI can be “poisoned” to induce unintended behavior, this raises concerns about the security of these systems, especially in sensitive sectors such as healthcare and finance.
However, the researchers also highlight how hard such attacks are to detect. The digital poison is designed to be subtle, defeating traditional intrusion-detection methods. This difficulty underscores the urgent need for more advanced detection techniques to protect AI systems against malicious influence.
This study appears in a context where discussions about the ethics of AI are gaining importance. Governments, businesses and civil society must work together to develop clear laws that regulate the use of AI. The study warns of serious abuses if adequate measures are not taken.
We can also imagine this kind of technique being turned to malicious ends: a state actor, for example, could use such a process to influence and steer an AI. Safeguards must therefore be put in place to prevent such abuses.