France Pouts Over the AI Act. The text was announced with fanfare by Thierry Breton, European Commissioner, in December 2023. Constantin Pavléas, a lawyer specializing in new-technologies law, assesses France's reaction in recent weeks.
The European regulation on artificial intelligence (AI) will be adopted soon. The stated objective of European authorities is to promote a regulatory environment that encourages the development of artificial intelligence, while preventing its abuse.
An almost final text has been circulating since last week
However, according to France, this regulation risks restricting innovation by European startups. A more balanced alternative, one that both preserves startups and remunerates content authors, is doubtless possible. The near-final English version of the European AI regulation was circulated last week. It is the result of a fiercely negotiated compromise between governments, following the political agreement reached at the beginning of December.
In the meantime, the working groups drafted the final articles. To the chagrin of France, the text subjects makers of AI models to transparency obligations regarding the sources used to train their algorithms. Does the new European AI regulation go too far? Can the ongoing artificial intelligence revolution be stopped? To this question, the French public authorities seem to answer in the affirmative.
The French government is particularly concerned about the obligation imposed on makers of AI models to "make public a sufficiently detailed summary" of the data used to train their software. The government proposes that this data communication be reserved for the future European AI Office, which would act as a trusted third party.
A trusted third-party role for the future European AI Office
This AI Office could inform the rights holders concerned about whether their content has been used by an AI, but would not share this information with competing companies. Professional organizations in cinema, music, publishing and the audiovisual sector have expressed alarm at the French positions, fearing that they would weaken the rules protecting copyright.
Everyone agrees that the necessary protection against the dangers AI poses to citizens' rights must not slow down or weaken innovation. The large EU countries, mainly France, Germany and Italy, want to preserve as much room for maneuver as possible for their companies, especially startups, to weigh on the AI market in the face of American or Chinese competition.
At a time when these startups are raising hundreds of millions of euros to accelerate the design and commercialization of their products, we cannot run the risk of hindering their development.
Legislation without unnecessary bureaucracy
It is in this context that, a few weeks before the decision on the AI Act, the competent ministers of France, Italy and Germany met and insisted on the need for legislation "without unnecessary bureaucracy". They called, in particular, for the reduction of administrative burdens for projects involving multiple Member States, as well as the formation of "a strong European venture-capital ecosystem that would allow innovative companies to receive the financing they so desperately need".
There is another point that interests France: the threshold above which the most powerful models are considered "systemic" and subject to enhanced obligations to assess and mitigate risks such as bias, misinformation and errors. The criterion used is cumulative training compute of 10 to the power of 25 FLOPs, floating-point operations being the unit in which the amount of computation performed by a system is measured.
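To give a sense of scale, the sketch below estimates whether a model would cross the 10^25 FLOP threshold. It relies on the widely used rule of thumb that training compute is roughly 6 × parameters × training tokens; neither this approximation nor the example figures come from the article, and the model sizes shown are purely hypothetical.

```python
# Rough sketch: compare an estimated training-compute budget to the
# AI Act's 10^25 FLOP "systemic" threshold. The 6*N*D approximation
# is a common heuristic, not part of the regulation itself.

SYSTEMIC_THRESHOLD_FLOPS = 1e25  # cumulative training compute, per the AI Act


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Heuristic estimate: total training compute ~ 6 * parameters * tokens."""
    return 6 * n_parameters * n_training_tokens


def is_systemic(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimated training compute reaches the threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_THRESHOLD_FLOPS


# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} estimated FLOPs, systemic: {is_systemic(70e9, 15e12)}")
```

Under this heuristic, such a model would land at roughly 6.3 × 10^24 FLOPs, just below the threshold, which illustrates why the exact cutoff chosen matters so much to governments backing national champions.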
The concerns expressed by France about its companies' room for maneuver in the face of restrictive rules are widely shared by countries wishing to invest in AI, in Europe but not only there. India, which positions itself as a major player in AI and sees it as a key accelerator of its technological development, expresses the same kind of fears about the risks that the AI giants pose to its digital sovereignty.
Limit disclosure of sources to providers that integrate AI models
One way to overcome this divide between regulation and innovation would have been to limit the disclosure of sources to the system providers that integrate and commercialize AI models, leaving those providers the task of obtaining the necessary information from the model makers.
Above all, a secondary right could be created to remunerate content authors whose work has been used to train algorithms, along the lines of the related right established for press publishers (Article 15 of the 2019 Copyright Directive). Finally, to avoid the trap of endless negotiations between interested parties in each European Union country, this new neighboring right could be administered through an industry-funded European fund for the compensation of content authors.