
Cyber Security and AI: Insights from David Stapleton

This post was written by: Daisy Steel

AI has been sweeping the internet for months since the release of ChatGPT. As the world weighs the implications of these powerful new AI models, the cyber security industry is no exception. On Episode 17 of The Cyber Security Matters Podcast we spoke to David Stapleton, the CISO at CyberGRX, who we met at the RSA conference. With over 20 years of experience in business administration, cyber security, privacy and risk management, David has a unique expertise that makes him the perfect person to share insights on the relationship between cyber security and AI. Read on to hear his thoughts!

A lot of attention has been paid to AI – with good reason. I have this mental model where if my mother is aware of something in my field, that’s when it’s really reached the public zeitgeist. When she asked me a question about the security of AI, I knew it wasn’t a niche topic anymore.

Artificial intelligence is an interesting phenomenon. Conceptually, it’s not that different from any other rapid technological advancement we’ve had in the past. Any time these things have come up, the same conversations have started to happen. The advent of the cloud sparked a real fear – particularly in the cybersecurity community – about the lack of control over those platforms. We had to trust other people to do the right thing. How do I present that risk to the board and get their approval? Maybe it’s a good financial decision, but are we introducing unnecessary risk?

Another example of that may have been the movement towards Bring Your Own Device (BYOD) and allowing people to connect their personal devices to company networks and data. That sounds terrifying from a security perspective, but you can see how that opens the door to increased productivity, efficiency and flexibility. 

AI is not too dissimilar from that perspective, and we can see plenty of positive aspects to the utilisation of artificial intelligence. It’s a catalyst for productivity: it can draw on many different data points at once and bring together salient insights at a speed that’s hard for the human mind to match. It can also reduce costs, bring additional value to stakeholders and potentially help companies gain competitive advantages.

Conversely, there are potential risks. It is such a new technology, and we’re still learning how it works as we’re using it. There are a lot of questions from a legal perspective about the ownership of the output of different AI technologies, particularly with the tools that produce audiovisual outputs. The true implementation and impact of that won’t be known until the courts have worked those details out for us.

We’re in a position now where some companies have taken a look at AI and said, ‘We don’t know enough about this, but we feel the risk is too great, so we’re going to prohibit the utilisation of these tools.’ Other companies are taking the exact opposite approach: ‘We also don’t know a whole lot about this, but we’re going to pretend this problem doesn’t exist until things work themselves out.’ 

At CyberGRX we’re taking a middle-of-the-road approach where we treat AI models like any other third-party vendor we use for work purposes. We’re going to share access or data with that tool, but we need to analyse it from a security risk and legal risk perspective before we approve its utilisation. That’s a fairly long-winded way of saying that there are amazing opportunities in AI, but there are risks too.

We’ve already seen threat actors starting to use artificial intelligence to beef up their capabilities. You can understand logically how artificial intelligence gives a fledgling or would-be threat actor the ability to get in the game and take action sooner than they otherwise would. When ChatGPT was first released to the public, the very first thing I put into it was ‘Write a keylogger in Python’. That’s a little piece of malware that logs your keystrokes and collects things like passwords or credentials. It just did it. It was there on the screen as a perfectly legitimate piece of software. Since then they’ve tightened the controls, but there was a time when someone with bad intent could start producing different types of malicious software without even learning to code.

To learn more about the uses of AI in cyber security, tune in to The Cyber Security Matters Podcast here.

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes, and don’t forget to subscribe.
