Globally, there is a desire to be better informed about the consequences of generative AI, especially at work. Furthermore, 70% of French people believe that generative AI should remain under human control, with its output reviewed and verified after each intervention.
Towards trustworthy AI…
The study reveals only partial confidence in AI among the French: 30% see the development of “trustworthy” AI as essential. This means combining tailored control and regulation with innovation and progress, so that artificial intelligence safely meets citizens’ expectations.
68% of employees use ChatGPT at work without their employer’s knowledge.
Since ChatGPT is a public application that offers no data protection guarantees, many companies face a real control problem. The question of ethics, although complex and dependent on sociocultural context, quickly merges with that of legal responsibility. To answer crucial questions of ethics and data protection, and to limit the controversies and incidents linked to generative AI, trustworthy AI must be built.
2/3 of French people support trustworthy AI
According to an IBM study, ethical AI is implemented in business step by step. First, contextualize its use within the company’s overall strategic vision. Then, establish governance that can guarantee its implementation. Finally, integrate it into the business cycle by involving stakeholders and organizing a dedicated structure. It is also necessary to define a policy, manage and support the culture, and provide internal training. A rigorous methodology and process are essential.
GDPR and AI, what balance?
Today, AI technologies feed on enormous amounts of data, much of it personal and even sensitive. A balance must therefore be found between protecting that data, complying with legal standards, and developing the technology. AI raises its own complexities under the GDPR, with risks of discrimination, data theft, privacy breaches, or mismanagement of information.
In response, the CNIL has made a number of public resources available to address the challenges posed by AI, including: how to secure a system and ensure transparency, the issues raised by the use of AI, how to self-assess an AI system, etc.
A prime example of large-scale action is the AI Act, the European regulation that governs the uses of artificial intelligence by classifying applications according to their level of risk: minimal, limited, high, or unacceptable. Its aim is to ensure that AI systems placed on the European market are safe and respect citizens’ fundamental rights and EU values, guaranteeing legal certainty and reinforcing security requirements.
Data protection and end-to-end encryption
To avoid violating user rights, some fundamental GDPR rules must be respected, namely:
- Lawfulness and legitimacy of data use
- Collection of, and compliance with, consent
- Limiting data use to what is necessary (minimization)
- Guaranteeing data security
- Information and transparency obligations
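The minimization and consent rules above can be sketched in code. The following is a minimal illustration, not a compliance tool: the record fields, the `consent_analytics` flag, and the allowed-field list are all hypothetical, and real pseudonymization requires a properly managed, rotated secret.

```python
import hashlib

# Hypothetical customer record; field names are illustrative.
record = {
    "name": "Alice Martin",
    "email": "alice@example.com",
    "consent_analytics": True,
    "purchase_total": 129.90,
    "country": "FR",
}

# Data minimization: only fields necessary for the stated purpose survive.
ALLOWED_FIELDS = {"purchase_total", "country"}

def minimize(record, salt=b"rotate-this-secret"):
    """Drop direct identifiers and refuse records without valid consent."""
    if not record.get("consent_analytics"):
        raise PermissionError("no valid consent for this purpose")
    out = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    # A salted hash gives a pseudonymous ID: records stay linkable
    # without exposing the underlying identity to the AI pipeline.
    out["subject_id"] = hashlib.sha256(salt + record["email"].encode()).hexdigest()[:16]
    return out
```

A record whose consent flag is missing or false is rejected outright, which is simpler to audit than filtering downstream.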
To comply with the GDPR, companies can opt for end-to-end encryption, a security technique that ensures only the sender and the recipient can access the content of the data exchanged, even when it passes through third-party networks. In the context of artificial intelligence, end-to-end encryption plays a crucial role: it helps keep the sensitive information used in AI systems confidential and secure.
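The core idea — each party keeps its private key on-device and only public values cross the network — can be shown with a toy Diffie-Hellman exchange. This is a standard-library sketch for illustration only; production end-to-end encryption should use a vetted library (e.g. libsodium or Python's `cryptography` package) with authenticated ciphers.

```python
import hashlib
import secrets

# 1024-bit MODP group (RFC 2409, Oakley Group 2) -- toy parameters for
# the demo; modern deployments use elliptic curves such as X25519.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74"
    "020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F1437"
    "4FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE65381FFFFFFFFFFFFFFFF",
    16,
)
G = 2

def keygen():
    # The private key never leaves the device; only the public value is sent.
    priv = secrets.randbelow(P - 3) + 2
    return priv, pow(G, priv, P)

def derive_key(priv, peer_pub):
    # Both parties hash the same shared secret into a 32-byte symmetric key.
    return hashlib.sha256(pow(peer_pub, priv, P).to_bytes(128, "big")).digest()

def xor_cipher(key, data):
    # Keystream cipher; XOR is its own inverse. Real E2E adds authentication.
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

sender_priv, sender_pub = keygen()
recipient_priv, recipient_pub = keygen()

# Any relay in between sees only ciphertext and public keys.
ciphertext = xor_cipher(derive_key(sender_priv, recipient_pub), b"confidential prompt")
plaintext = xor_cipher(derive_key(recipient_priv, sender_pub), ciphertext)
```

The point of the sketch is the trust boundary: an intermediary server forwarding `ciphertext` and the public keys learns nothing about the plaintext.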
Implement security audits
The issue surrounding security audits is significant. Companies today are more exposed to cyberattacks than ever before. It is therefore essential to run a dynamic process regularly to ensure that the data protection measures in place remain reliable and up to date. Otherwise, companies risk losing confidential data, with financial, human, or moral consequences of varying severity.
With the enormous amounts of data exchanged around AI, it is essential that companies effectively protect their information systems. The development of trustworthy AI therefore involves implementing such audits to detect and prevent potential security flaws.
They can take different forms depending on the company’s specific objectives and needs:
- Social engineering tests
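A recurring audit can also include automated checks. The sketch below flags weak settings in a hypothetical service configuration; the setting names and policy thresholds are illustrative assumptions, not a standard.

```python
# Policy rules: each maps a (hypothetical) config key to a predicate
# that returns True when the current value fails the policy.
WEAK_PATTERNS = {
    "tls_min_version": lambda v: v in {"TLSv1", "TLSv1.1"},  # deprecated TLS
    "password_min_length": lambda v: v < 12,
    "encryption_at_rest": lambda v: v is False,
    "debug_mode": lambda v: v is True,  # debug endpoints leak data
}

def audit_config(config):
    """Return a human-readable finding for each setting that fails policy."""
    findings = []
    for key, is_weak in WEAK_PATTERNS.items():
        if key in config and is_weak(config[key]):
            findings.append(f"{key}={config[key]!r} fails policy")
    return findings

# Example configuration with two weak settings.
cfg = {
    "tls_min_version": "TLSv1.1",
    "password_min_length": 8,
    "encryption_at_rest": True,
    "debug_mode": False,
}
```

Run on a schedule, such a check catches configuration drift between full audits; it complements, but does not replace, penetration testing by professionals.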
5 direct actions for your business
Transparency and traceability: Provide clear explanations to your users about how data is collected, used, and processed, as well as about the decisions made by AI systems.
Data protection: Make data security a priority in the development of AI algorithms and models. Ensure you are up to date with the GDPR, the AI Act, and any other applicable laws.
Responsible data management: Implement robust methodology and governance protocols around data management.
Security and continuous assessment: Regularly assess the risks and the effectiveness of the measures already in place. Keep your practices up to date and certified by third-party professionals.
Training and awareness: Train your teams on ethical issues and best practices in data protection. Encourage a security-focused culture within your organization, recruiting an Ethics Officer if necessary.