Stimulate innovation while strengthening cybersecurity

While this agreement aims to spur progress in the field, leaders must simultaneously strengthen their cybersecurity posture to better understand the risks posed by artificial intelligence in 2024.

In 2023, the professional landscape, and beyond, was heavily shaped by a surge in the adoption of artificial intelligence (AI) tools such as ChatGPT. Despite the many benefits the technology offers, it has raised concerns among business leaders, particularly regarding IT security. Advances in generative AI models that target users’ primary senses, such as sight and hearing, are already being exploited by cybercriminals, making attacks, especially phishing, harder to detect.

In 2024, workforce mobility will heighten the risks associated with AI tools, as employees increasingly operate from their own networks and devices, which are often less secure. This shift will make it easier for cybercriminals to carry out social engineering attacks with ever more credible methods. Security measures that were once considered robust, such as voice-based identity verification for password resets, will be rendered obsolete by generative AI. And while ChatGPT cannot yet produce convincing spear-phishing messages, it can raise the overall quality of phishing campaigns by eliminating the grammatical errors, spelling mistakes, and factual inconsistencies that once gave them away.

As technology continues to advance and transform society in 2024, companies must respond by deploying defenses built to withstand these threats, including phishing-resistant multi-factor authentication (MFA) such as physical security keys. This is crucial for protecting confidential data and critical assets. These measures must be accompanied by ongoing security training for personnel, strengthening defenses and positioning companies to face emerging cyber threats.
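To illustrate why this kind of MFA resists phishing where passwords and one-time codes do not, here is a minimal sketch of registering a hardware security key in the browser through the standard WebAuthn API. The relying-party ID, user record, and challenge handling shown are placeholder assumptions; in a real deployment the challenge is generated server-side and the resulting credential is verified there.

```typescript
// Minimal sketch: registering a phishing-resistant credential (e.g. a FIDO2
// security key) with the standard WebAuthn browser API.
// The relying party, user record, and challenge below are illustrative only.

async function registerSecurityKey(): Promise<void> {
  // In production, the challenge must come from the server for each attempt.
  const challenge = crypto.getRandomValues(new Uint8Array(32));

  const credential = await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example Corp", id: "example.com" }, // assumed relying party
      user: {
        id: new TextEncoder().encode("user-1234"),     // assumed user handle
        name: "jane.doe@example.com",
        displayName: "Jane Doe",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        authenticatorAttachment: "cross-platform", // prefer a roaming key
        userVerification: "required",
      },
      attestation: "none",
    },
  });

  // The credential is cryptographically bound to the origin (example.com),
  // which is what makes it phishing-resistant: a lookalike domain cannot
  // trick the user into replaying or reusing it.
  console.log("Registered credential:", credential);
}
```

The key design point is origin binding: unlike a password or an SMS code, the credential simply cannot be presented to a fraudulent site, no matter how convincing the lure generated by AI may be.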
