
Addressing Human Behaviour in Cyber Security

This post was written by: Tom Wilding

In the Cyber Security industry, one of the biggest risk factors is human behaviour. On Episode 23 of The Cyber Security Matters Podcast we were joined by Ira Winkler, the Field CISO and VP at CYE. He shared his insights on the risks of human behaviour, as well as some great anecdotes from writing multiple books on cyber security. Read on to learn from his experience. 

How have you seen cyber risk progress over your career?

When I do speaking events, I always ask people, ‘How many of you are security professionals?’ Most of the audience raises their hands and I go, ‘Okay, you’re all failures, because there is no such thing as security. The definition of security is being free from risk, and you’re never going to be free from risk. So technically, we’re all cyber risk managers.’ If we’re all risk managers, how are we mitigating those risks? I do what I call cyber risk optimisation, where we quantify and map out the risks according to actual attack paths and vulnerabilities. That allows us to determine how to optimise risk: you take your potential assets, map them to vulnerabilities to get an actual cost, and then figure out which vulnerabilities are the best ones to mitigate.
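To make that prioritisation idea concrete, here is a minimal Python sketch of the expected-loss logic Winkler describes. The vulnerability names, dollar figures, and likelihoods below are invented for illustration; this is not CYE’s actual model, just the general pattern of ranking mitigations by risk reduced per unit of cost.

```python
# A minimal sketch of cyber risk optimisation: rank vulnerabilities by
# how much expected loss each unit of mitigation spend removes.
# All names and figures are hypothetical examples, not real data.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    name: str
    asset_value: float      # potential loss if the attack path succeeds
    likelihood: float       # estimated probability of exploitation (0-1)
    mitigation_cost: float  # estimated cost to mitigate

    @property
    def expected_loss(self) -> float:
        return self.asset_value * self.likelihood

    @property
    def risk_per_dollar(self) -> float:
        # Expected loss removed per unit spent: higher = better candidate.
        return self.expected_loss / self.mitigation_cost

vulns = [
    Vulnerability("unpatched VPN gateway", 2_000_000, 0.15, 40_000),
    Vulnerability("phishable helpdesk workflow", 500_000, 0.40, 15_000),
    Vulnerability("legacy file server", 250_000, 0.05, 60_000),
]

# Mitigate the vulnerabilities that buy the most risk reduction first.
for v in sorted(vulns, key=lambda v: v.risk_per_dollar, reverse=True):
    print(f"{v.name}: expected loss ${v.expected_loss:,.0f}, "
          f"risk reduced per $ = {v.risk_per_dollar:.2f}")
```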

Now, we’re at a point where machine learning is actually able to start doing things we were not able to do before. Everybody thinks machine learning is this really fancy thing, but it’s taking big data and putting it through mathematical calculations that were not available to us 10 years ago. Now we’re actually able to crunch data, look at trends, and come up with actual calculations of how to optimise risk. I’m finally able to take the concepts I wrote about in 1996-97 and implement them today. 

How do you balance user responsibility and the responsibility of the operating system? 

The solution I’m putting together is a human security engineering consortium, because here’s the problem: awareness is important. I wrote ‘Security Awareness for Dummies’ because awareness is a tactic. Data leak prevention can be important to stop major attacks, and anti-malware can be important to stop major attacks, so those are tactics too. The problem is that currently, when we look at the user problem, it’s being solved with individual tactics that are not coordinated through a strategy. We need a strategy that looks at it from start to finish and covers both operating system and user responsibilities.

You’ve got to stop and think, ‘What are my potential attack vectors? What capabilities does a user have?’ A user can only do things that you enable them to do; they only have access to data you allow them to have; they only have a computer with the capabilities you provide. You need to stop and think, ‘Given that finite set of capabilities and data provided to a user, what is the strategy that looks at it from start to finish and best mitigates the overall risk?’ I’m not saying you can get rid of risk completely, but you need to create a strategy that mitigates as much risk as possible from start to finish, knowing the capabilities you provide to the user.
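As a rough illustration of that audit, the sketch below models a user’s capabilities as a finite set and checks which attack vectors those capabilities enable. Every capability and vector name here is an assumption made up for the example, not a standard taxonomy.

```python
# A hedged sketch of the "finite set of capabilities" idea: an attack
# vector is only viable if the user holds every capability it requires.
# Capability and vector names are illustrative assumptions only.
user_capabilities = {"email", "web_browsing", "usb_ports", "finance_db_read"}

attack_vectors = {
    "phishing_credential_theft": {"email", "web_browsing"},
    "malicious_attachment":      {"email"},
    "usb_malware_drop":          {"usb_ports"},
    "bulk_data_exfiltration":    {"finance_db_read", "usb_ports"},
}

viable = {vector for vector, needs in attack_vectors.items()
          if needs <= user_capabilities}
print("Viable attack vectors:", sorted(viable))

# Removing one capability can close several vectors at once, which is
# the kind of start-to-finish mitigation the strategy calls for.
for cap in sorted(user_capabilities):
    closed = [v for v, needs in attack_vectors.items() if cap in needs]
    print(f"Dropping '{cap}' closes: {closed}")
```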

One of my books is ‘Zen and the Art of Information Security’, which includes a concept of what makes an artist: it’s the person’s ability to look at a block of marble and see a figure in it. They can produce different pieces of art, but they’re all made the same way; there’s a repeatable process behind what they produce. In the same way, there’s a repeatable process for looking at human-related errors. You look at the potential attacks against users and ask, ‘What might users do in good faith, thinking they’re doing the right thing, that accidentally causes harm?’ Most damage to computer systems is done by well-meaning users who inadvertently cause harm.

You don’t go around and see people saying, ‘I’m getting in my car and crashing into another car’ – that’s why they’re called accidents. There is a science to how we design roads: literally, the curvature of a road and the speed limit assigned to it come from understanding what a driver does, what their capabilities are, and how you can mitigate that to reduce the risks. In cyber risk, you should be asking similar questions, like ‘How can I proactively analyse how the user gets into a position to potentially initiate a loss, and mitigate that proactively?’ Then you design the operating system to reduce the user’s inadvertent risks.

To learn more about human behaviour and risk in Cyber Security, tune into Episode 23 of The Cyber Security Matters Podcast here.

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes, and don’t forget to subscribe.
