The agreement, which excludes China, includes recommendations on monitoring AI systems for potential abuse, protecting data and vetting software suppliers.
This weekend, 22 judicial bodies and intelligence agencies from 18 countries (including France) signed an international agreement on AI safety. The agreement, which aims to make the latest versions of the technology "secure by design", comes a few months after the European Parliament's approval of the AI Act in June. That text, still under discussion, prohibits certain AI technologies, including biometric surveillance and predictive policing for crime prevention, and classifies as high-risk any AI system that can have a significant impact on health, safety, rights or elections. "AI systems can bring many benefits to society. But for the possibilities offered by AI to be fully realized, it must be developed, deployed and operated in a safe and responsible way," states the agreement. It argues that, given the rapid pace of AI development, security must not be set aside but treated as a fundamental requirement throughout the life cycle of these systems. "They are exposed to new security vulnerabilities that must be taken into account in the same way as traditional cybersecurity threats," says the report.
"When the pace of development is high, as is the case with AI, security can often take a backseat," the report notes. AI also carries a security risk of its own: a phenomenon called "adversarial machine learning". Described as critical by the report, adversarial machine learning is defined as the strategic exploitation of fundamental vulnerabilities inherent in machine learning components. By manipulating these elements, adversaries can potentially disrupt or trick AI systems, producing erroneous results or compromising functionality. Beyond the proposed European regulation on AI, in the United States President Joe Biden signed an executive order in October that aims to regulate the development of AI by requiring developers of powerful AI models to share safety findings and critical information with the government.
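To illustrate the kind of manipulation the report warns about, here is a minimal sketch of an adversarial perturbation against a toy linear classifier. This is an illustration of the general technique (a gradient-sign attack in the style of FGSM), not anything described in the agreement itself; the model, weights and perturbation budget are all invented for the example.

```python
import numpy as np

# Toy linear "model": score = w . x + b; predicted class is 1 if score > 0.
# These weights are purely illustrative.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A clean input the model classifies as class 1.
x = np.array([2.0, 0.5, 0.2])

# Gradient-sign perturbation: for a linear score, the gradient of the
# score with respect to the input is simply w, so stepping against
# sign(w) lowers the score fastest per unit of L-infinity budget.
epsilon = 1.2
x_adv = x - epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # the small perturbation flips the prediction
```

The same principle scales to deep networks, where the gradient is obtained by backpropagation: a perturbation imperceptible to a human can still flip the model's output, which is why the report treats these attacks as a distinct class of vulnerability.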
China, excluded from the agreement
Although it is one of the driving forces in the development of AI, and despite being the subject of several US trade sanctions intended to limit its access to the high-performance chips AI development requires, China is absent from the agreement. In a speech at an event organized by the Taiwan Chamber of Commerce on Sunday, Mark Liu, chairman of TSMC, said that the United States' decision to exclude China would lead to a global slowdown in innovation and a fragmentation of globalization.
AI continues to be a legal minefield
Although non-binding, the agreement mainly offers general recommendations and does not address complex issues regarding appropriate applications of AI or data collection methods for LLMs. It does not address ongoing civil litigation in the United States over how systems ingest data to develop their large language models and whether these practices comply with human rights laws.
In the United States, several authors have filed complaints against OpenAI and Microsoft for copyright and intellectual property infringement, arguing that their creative works were used without their consent to train OpenAI's ChatGPT. The complaints highlight growing concerns about the impact of AI on traditional creative and journalism industries. According to K&L Gates, OpenAI and the other defendants in these cases assert lack of jurisdiction and invoke the fair use doctrine in their defense. Add to this the skepticism of the courts in early cases, and the future of AI litigation looks "uncertain."