Cybersecurity: why AI will never be smarter than a human

Katerina Tasiopoulou has already had a remarkable career in a relatively short space of time. Awarded the BCS Young Professional Award in 2018 during her time as an Incident Response Engineer at IBM, Katerina’s determination, knowledge and skill led her to become CEO of her own successful cybersecurity business. Here Katerina explains the challenges of cybersecurity in the world of AI, her career path and her advice for the next generation of cybersecurity professionals.

Can you tell us about your experience and career to date?

I actually wanted to be an aerospace engineer, but everything changed when I learned some basic coding at school. One day at the school assembly they showed us some computer code on the screen and I raised my hand because I knew it was wrong. There was an error in the logical flow of the code and when I pointed it out, the teacher was shocked. I became very interested in coding from that point on. I went on to study computer science and then cybersecurity at university before working in various roles at several companies, including IBM.

After that I founded my own company, Exelasis. My current work is focused on advanced penetration testing and ethical hacking, or “red teaming”. I initially worked on the incident response and intelligence side, or ‘blue team’, because being able to defend myself against real threat actors taught me a lot of skills that I could use on the red team side. It is a very interesting, exciting and rewarding career.

Did you find it scary to enter a career where women are, unfortunately, still a minority?

It has been very challenging at times. At university there were no other women in my class and, at the time, it was difficult for me to understand why. In the world of work, although things are starting to improve, they have not yet changed drastically. I must admit, as a woman working in cybersecurity, I have sometimes felt overlooked in a room. Sometimes I don’t feel I’ve been listened to or taken seriously.

But it’s important to make it clear that I don’t want to be appreciated because I’m a woman; I want to be appreciated because I’m educated, qualified, a team player and a collaborator, because I make contributions and because I’m a cybersecurity professional. Appreciation should not come because of gender; it should be independent of gender. There is still a lot of work to do and it is fantastic that organizations like BCS are helping to change attitudes.

You won the BCS Young Professional award in 2018. What did that mean to you?

The Young Professional Award has been absolutely invaluable in helping me with recognition, eminence, confidence and networking. It’s not really about the prize itself, it’s about how you use it. Even today, as the CEO of a company, I still mention the award in my presentations to show that I was recognized for my efforts and to show others the progression that is possible with determination and self-confidence. I imagine I’ll still be referring to it 10 years from now. It also helped me a lot in building my network. I always think of the award as the first step on the path to where I am today, because when I left the stage that night I set myself a goal: to run my own business.

In your role, what are the main challenges arising from the evolution of the cyber threat?

Technology is evolving incredibly quickly, but cybersecurity is not evolving at the same pace. The biggest challenge, from my perspective, is not the fact that there are new threats, although that is obviously a huge concern; it is the fact that we have not even fully addressed the old threats yet. Some organizations – banks, for example – are built on very old foundations and principles, with various systems and layers added over time. Add AI to the mix and suddenly you have a new layer of complexity. Cybersecurity means defending your assets, but how do you do this as technology evolves and threat parameters change? As the goalposts move, it can be difficult for a company to know whether it has met its goals in terms of cybersecurity compliance.

How do you begin to address these challenges?

The only way is to get the basics right, not just in terms of compliance but also in terms of technical assessment. It is very important to take a step back. As a company, your goal is to protect your assets, but the only way to really be sure you’ve done that is to try to hack your defenses. Therefore, penetration testing or ethical hacking is the ultimate test. Starting with a basic penetration test allows you to identify current gaps and close critical ones.

Only by continually performing these tests can you truly see your exposure. In cybersecurity there are many different elements and they change daily. Secure coding is not the same as it was 10 years ago, and even the largest organizations need to release new patches all the time because new problems continually arise. There is a lot of change, but penetration testing is the only solid approach, in my technical opinion, that consistently evaluates defenses from an attacker’s perspective.

What about advances in AI in relation to hacking – what are the implications here?

AI will be the new battlefield. Right now it’s people against people. In cyber, adaptation and evolution always come from one side or the other. Attackers are using AI, so now we have to start using AI to continue to defend our assets. And we will need to adapt and expand the parameters of attack and defense whenever the next advance appears, which will probably be quantum. AI is a relatively new tool and we are still learning. In my line of work, it will certainly be very useful for threat detection, incident response, and data correlation and analysis.

Breaches are knowledge-based and all about data, and AI can help us understand data and look for patterns in hacker activity much faster. On the other hand, the use of AI in cyber warfare raises many questions. It’s hard to know where this arms race will lead, and hard to understand who has the advantage here.

Could AI be a useful tool for penetration testing and red teaming tasks?

AI is not just a button you press and perform an activity; in fact, you are constantly feeding it information. If AI starts to be used for penetration testing, collecting threat data, and learning advanced hacking techniques, then it will be a very dangerous weapon if it falls into the wrong hands. I think a moral code around what we teach AI, and the ethics around its use, is absolutely critical, because once we start teaching it, we can’t go back.

If we train AI to become the world’s most powerful hacking weapon, what will happen? We need moral and ethical codes, but who will guarantee compliance? Many companies still struggle to properly comply with the GDPR, and the complexities of AI and how to govern its use create an entirely different problem that will certainly require serious reflection in the very near future.

What do you think of the idea of AI replacing human roles in cybersecurity?

Personally, I see AI as a tool to improve, not replace. I recently read an article suggesting that AI could replace some of the most basic roles in IT and security, such as SOC analyst, for example. Yet this is one of the crucial entry-level roles that I always recommend to anyone looking to pursue a career in security. If this role doesn’t exist in the future, what does that mean for those looking to enter the industry? You have to start with the basics, because no one can become an expert overnight.