
Inside Data Loss Prevention

This post was written by: Daisy Steel

In recent years there have been growing concerns around privacy and data loss. On Episode 22 of The Cyber Security Matters Podcast we spoke to Chris Denbigh-White, the Chief Security Officer at Next, about data loss and how it’s affecting the industry. Here are his thoughts: 

Data loss prevention has always been the ugly friend of cyber security. Mention DLP to cyber professionals and nine out of ten will say, ‘This doesn’t work, but we’ve got to do it.’ It’s effectively a tick-box exercise, but it’s a box that does nothing: the old adage of a firewall with allow rules going both ways. We have to do it though, because otherwise some of our users either complain massively or are blocked from doing their job. That’s something that Next aims to address; we’re trying to provide DLP that makes sense. That means using machine learning to understand user behaviour. 

I like to understand people’s business processes and build guardrails around what they actually need for security. We’re here to ensure that people who do business and make money don’t lose all their data or have it stolen, as well as protecting them from getting massive GDPR fines. Security itself doesn’t make the business any money, but not having security can cost a business a lot. That means that we need to understand what is valuable to the business and find a way to protect it. 

That’s different from typical data loss prevention tools. We need to understand questions like ‘How does this company deal with insider risk and insider threats?’ We’ll think outside the box, like ‘Why don’t we address risks through behavioural change and training people on better cyber practices, rather than relying on draconian controls?’ I strongly believe that approaching DLP in that way increases business cadence and reduces friction. That’s something AI and machine learning are going to help people understand better, because they’ll be used to understand the people around us, and therefore they’ll uncover internal and external threat actors more effectively. 

The way that we approach things is by helping companies understand what normal is, and helping them address the question ‘Am I happy with what that normal is?’ Our solutions are built by asking things like, ‘Do I want people uploading things to this web application and not that web application?’ That’s a well-trodden path to data loss. Another common issue is the use of copy and paste. On one hand, I want users to be able to copy and paste, because we’re advocates of strong and long passphrases and the use of password managers – all of which rely on copy and paste. But on the other hand, I don’t want people copying and pasting swathes of sensitive data from sensitive apps into a text file that’s then emailed off. 
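
To make that idea of channel-aware guardrails concrete, here is a minimal, hypothetical sketch in Python. It is not a description of Next’s product; the sanctioned hosts, sensitive app names, event fields and the allow/warn/block outcomes are all invented for illustration of the distinction drawn above: uploads are judged by destination, while copy and paste stays allowed in general but gets flagged when it moves data out of a sensitive application.

```python
# Hypothetical sketch of channel-aware DLP guardrails (illustrative only,
# not any vendor's actual logic). Hosts, app names and event fields are
# assumptions made up for this example.

SANCTIONED_UPLOAD_HOSTS = {"sharepoint.example.com", "drive.example.com"}
SENSITIVE_SOURCE_APPS = {"crm.exe", "finance-portal"}

def evaluate_event(event: dict) -> str:
    """Return 'allow', 'warn' or 'block' for a single user activity event."""
    if event["channel"] == "web_upload":
        # Uploads are judged by destination: sanctioned apps are fine,
        # everything else is a well-trodden path to data loss.
        return "allow" if event["destination_host"] in SANCTIONED_UPLOAD_HOSTS else "block"

    if event["channel"] == "clipboard":
        # Copy/paste stays allowed in general (password managers rely on it),
        # but pasting out of a sensitive app into an unmanaged target is flagged.
        if event["source_app"] in SENSITIVE_SOURCE_APPS and not event["target_managed"]:
            return "warn"
        return "allow"

    return "allow"

# Example: a paste from a finance app into an unmanaged text editor gets flagged.
print(evaluate_event({
    "channel": "clipboard",
    "source_app": "finance-portal",
    "target_managed": False,
}))  # -> "warn"
```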

We’ve moved away from purely file-based data loss, because people lose data in more ways than you’d think. There are copy and pastes, web uploads, ChatGPT prompts… being able to understand and control your data across those channels is a capability in its own right. There’s a business process where we help companies identify their normal and their risks, then we set up specialised guardrails in a super simple process. I think that’s the future of the space. Companies that develop tooling to support security that’s done with people are going to succeed moving forward, whereas increasing levels of draconian control and intrusion are going to come to an end. 
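
As a rough illustration of what ‘identifying your normal’ could look like, the sketch below builds a per-user baseline of daily upload volume and flags days that sit far above it. The sample data, field names and the simple mean-plus-three-standard-deviations rule are all assumptions for the sake of the example, not the podcast guest’s or any vendor’s actual model:

```python
# Minimal sketch of "learning normal" for one user, assuming a short history
# of daily upload byte counts. The threshold rule is a placeholder, chosen
# only to illustrate baselining and outlier detection.
from statistics import mean, stdev

def build_baseline(daily_upload_bytes: list[int]) -> tuple[float, float]:
    """Summarise a user's historical upload volume as (mean, standard deviation)."""
    return mean(daily_upload_bytes), stdev(daily_upload_bytes)

def is_anomalous(today_bytes: int, baseline: tuple[float, float], k: float = 3.0) -> bool:
    """Flag today's volume if it sits well above the user's own history."""
    mu, sigma = baseline
    return today_bytes > mu + k * sigma

history = [12_000, 15_500, 9_800, 14_200, 11_000, 13_700, 10_400]  # made-up sample
baseline = build_baseline(history)
print(is_anomalous(250_000, baseline))  # a sudden 250 KB day -> True
```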

To learn more about protecting your data, tune in to Episode 22 of The Cyber Security Matters Podcast.

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes, and don’t forget to subscribe.
