Does France need an AI authority?

An almost-final version of the AI Act has been circulating for several days, even though the final vote on the European regulation is scheduled for early February. If this draft is confirmed, France will not have obtained the concessions it expected on foundation models.

But according to the Montaigne Institute, there is another area where France is falling short. For the think tank, the State is “lacking” when it comes to the discussion on AI Safety Institutes. Worse, it is “unforgivably late.”

An assessment center to unite the AI industry

In its January 2024 note, the Institute points out that “several leading AI countries”, namely the United Kingdom, the United States and Singapore, have taken the lead by announcing the creation of such entities.

For the think tank's members, the European regulation, which provides for the creation of an AI Office, is not enough. They urge French policymakers to follow the lead of their economic partners and rivals.

The first step would therefore be to create an AI Assessment Center this year. A kind of Cyber Campus dedicated to AI? The center would bring together “the forces and actors present” in the AI sector.

This would only be a first step, however. The Montaigne Institute calls for the creation, by law, of a future AI Authority, similar to other independent authorities that already exist in France, such as Arcep or the Competition Authority.

An economic and regulatory authority

The role of this new Authority, as the note describes it: “to advance AI issues in France, in addition to becoming a point of contact to facilitate compliance with the European regulation in this area.”

The Institute sees this approach as a way for France to master AI through two levers: “the identification and assessment of risks, on the one hand, and governance measures focused on risks, on the other”.

The note considers it necessary for France to give itself “the means to secure the main AI models and their uses, while promoting their development”. The Authority would thus embody the balance to be struck between risk management and “encouraging innovation”.

Securing large AI models

As an independent administrative authority, it would fulfill three main missions: evaluating the main state-of-the-art foundation models; coordinating a French strategy for AI standardization at the European level; and setting up tools and an end-to-end support program for AI companies.

This support program is intended precisely to simplify compliance with the AI Act and to cover its costs. To fulfill this role, the Authority would in particular draw on the expertise of other actors, including the CNIL, the competent authority on compliance and data protection issues related to AI.

France “must give itself a recognized and independent space for AI assessment and expertise, capable of objectively assessing risks, acting as a relay and interlocutor in the emerging institutional ecosystem in Europe, taking part in the international debate, and guiding public authorities and companies”.
