Cybersecurity and generative AI: an urgent need for regulation

While the regulatory measures on generative AI adopted around the world cover a wide range of uses, clearer guidance on permitted uses of the technology is still needed. According to the Aspen Institute, this is especially true in cybersecurity, where time is of the essence.

More and more countries are imposing regulatory restrictions on AI and its generative variants, but in certain areas, such as cybersecurity, those rules remain imprecise. A report from the Aspen Institute, a think tank, argues that these technologies make an undeniable contribution to IT security, but that cybercriminals are increasingly using them as well.

According to the authors, it is up to regulators and industry groups to ensure that the benefits of generative AI are not undermined by its potential misuse. “Actions taken today by governments, businesses and administrations will determine whether attackers or defenders will benefit most from this emerging capability tomorrow,” the report states.

Varying Approaches to Generative AI Oversight

Major nations such as the United States, the United Kingdom and Japan have each taken a different regulatory approach, as have the United Nations and the European Union. According to the Aspen Institute, the UN has focused on security, accountability and transparency, working through several bodies including UNESCO, an inter-agency working group on AI and a high-level advisory body under the authority of the Secretary-General. The European Union has been particularly aggressive in its efforts to protect privacy and respond to the security threats posed by generative AI, with the AI Act, agreed in December 2023. It contains numerous provisions on transparency, data protection and rules governing model training data.

Legislative inaction in the United States has not stopped the Biden administration from issuing an Executive Order on AI, which provides “guidance and benchmarks for evaluating AI capabilities,” with particular emphasis on capabilities that could cause harm. “CISA has also published non-binding guidance, in collaboration with UK regulators,” the authors note. “Japan, by contrast, offers an example of a softer approach to AI regulation from a cybersecurity perspective, focusing more on disclosure channels and developer feedback loops than on strict rules or risk assessments,” the Aspen Institute observed.

Avoiding a Loss of Trust in AI

The report also warns that time is running short. Security breaches involving generative AI erode public trust, and the technology acquires new capabilities that could be turned to nefarious purposes virtually every day. “As this trust erodes, we risk losing the opportunity to have proactive debates about the permissible uses of generative AI in threat detection and to examine the ethical dilemmas surrounding autonomous cyber defenses as the market advances,” the report states.
