
Facing Challenges in the Cyber Security Industry 

The Cyber Security industry faces challenges on a daily basis due to the nature of its work. However, its challenges aren't just security threats. On Episode 24 of The Cyber Security Matters Podcast we were joined by Michele Chubirka, a Cloud Security Advocate at Google, to talk about the wider challenges in the industry. Michele has led a remarkable two-decade career in cyber security and has a background as a cloud native expert, giving her a wealth of insights into the space. Here's what she shared with us:

"Information security can be a struggle. There's something called witnessing windows, or common shock, which is when we witness the small violence and violation that happens in our day-to-day lives. Well, that's information security to a tee. You have the big breaches and traumatic events – you're reading about them now with the MOVEit hacks, ransomware, etc. – but every day you experience the vulnerabilities in your organisation. You report on them, saying 'Hey, you have these vulnerabilities', and they don't get remediated. The solution seems technically very simple, but it's really an adaptive challenge, because it has a lot of dependencies and unpredictable human beings are involved.

A lot of security people experience burnout after a while, because you want to do the right things, but there’s a social issue where people don’t or won’t collaborate well enough to solve the problem. Cyber Security is a challenging field because people are drawn to doing technical things and being engineers, but then find out that they have to work with people, which is a very different skill set. When I started, teams were super small and you could solve a problem end to end yourself. That’s not the case anymore. Now you have huge teams of hundreds of people working on a single application. Now you have to worry about getting people to talk to each other. You have to resolve conflict. 

I wish somebody had taught me to improve my people skills as well as my technical skills in my professional development. The social science that I'm studying is restorative practices and restorative justice, which is about building human capital or social capital by finding ways to repair harm, restore relationships and build community. If our organisations and companies aren't communities, we're going to struggle to build a truly secure cyber environment.

The problem is that people are really attached to this idea of security being like law enforcement or a military framework. We think of threats as attackers, and there's a lot of accepted victim shaming. When something happens within an organisation and the bad guys leave, you've got to clean up and recover from the trauma of what happened. That's when the blame shifts. People start asking 'Who can we blame internally for this problem?' Then you get this victim-perpetrator oscillation, a blame game where the victims are held to account as perpetrators because they didn't secure their systems or do the things that you asked them to do. That's not helpful.

There are a lot of reasons why developers don't always write secure code or update their dependencies. Sometimes the systems that security people put in place are not friendly or easily consumable. Developers may be under really tight timelines and they've got way too much on their plates, so how much is really their fault? There are often swirling, interpersonal, conflict-ridden situations that create anger and resentment, because security professionals are doing their best but they feel like they can't make enough change. This is exactly what happens when you're faced with these witnessing windows, where people are disempowered but aware of what's happening. When you're in that situation, where you know what the problem is but you can't change it, the results are stress and eventual burnout.

That's really the problem with information security right now. People are building great technologies and there are new techniques coming out every year, but the attacks only get worse, and the job seems to get harder. So what are we doing? I think the situation is the way it is because we're having people problems – it's not simply a technology problem.

To learn more about the challenges facing the Cyber Security industry, tune into The Cyber Security Matters Podcast here.

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes and don't forget to subscribe.

Inside Data Loss Prevention

In recent years there have been growing concerns around privacy and data loss. On Episode 22 of The Cyber Security Matters Podcast we spoke to Chris Denbigh-White, the Chief Security Officer at Next, about data loss and how it’s affecting the industry. Here are his thoughts: 

Data loss prevention has always been the ugly friend of cyber security. If you mention DLP to 9 out of 10 cyber professionals they'll say, 'This doesn't work, but we've got to do it'. It's effectively a tick-box exercise, but it's a box that does nothing. It's the old adage of a firewall that has allow rules going both ways. We have to do it though, because otherwise some of our users either complain massively or are blocked from doing their job. That's something that Next aims to address; we're trying to provide DLP that makes sense. That means using machine learning to understand user behaviour.

I like to understand people’s business processes and build guardrails around what they actually need for security. We’re here to ensure that people who do business and make money don’t lose all their data or have it stolen, as well as protecting them from getting massive GDPR fines. Security itself doesn’t make the business any money, but not having security can cost a business a lot. That means that we need to understand what is valuable to the business and find a way to protect it. 

That's different from typical data loss prevention tools. We need to understand things like 'How does this company deal with insider risk and insider threats?' We'll think outside the box, asking 'Why don't we address risks through behavioural change and training people on better cyber practices, rather than relying on draconian controls?' I strongly believe that what we're doing increases business cadence and reduces friction by approaching DLP in that way. That's something that I think AI and machine learning are going to help people understand better, because they'll be used to understand the people around us better and therefore they'll uncover internal and external threat actors more effectively.

The way that we approach things is by helping companies understand what normal is, and helping them to address the question 'Am I happy with what that normal is?' Our solutions are built by asking things like, 'Do I want people uploading things to this web application and not that web application?' That's a well-trodden path to data loss. Another common issue is the use of copy and paste. On one hand, I want users to be able to copy and paste because we're advocates of strong and long passphrases and the use of password managers – all of which utilise copy and paste. But on the other hand, I don't want people copying and pasting swathes of sensitive data from sensitive apps into a text file that's then emailed off.
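As a purely illustrative sketch of that kind of guardrail (this is not Next's product logic; the domains, file types and decisions below are hypothetical), a simple web-upload policy might look something like this:

```python
# Illustrative only: a toy guardrail that checks whether a proposed web upload
# goes to a sanctioned destination. Real DLP tooling uses far richer context
# (behavioural baselines, data classification, device posture, and so on).
from urllib.parse import urlparse

# Hypothetical policy: domains the business has approved for file uploads.
SANCTIONED_UPLOAD_DOMAINS = {"sharepoint.example.com", "drive.example.com"}

def evaluate_upload(destination_url: str, filename: str) -> str:
    """Return 'allow', 'warn' or 'block' for a proposed upload."""
    domain = urlparse(destination_url).netloc.lower()
    if domain in SANCTIONED_UPLOAD_DOMAINS:
        return "allow"
    # Unknown destination: warn on ordinary files, block obviously sensitive ones.
    if filename.lower().endswith((".csv", ".xlsx", ".db")):
        return "block"
    return "warn"

print(evaluate_upload("https://drive.example.com/upload", "notes.txt"))      # allow
print(evaluate_upload("https://unknown-site.example/api", "customers.csv"))  # block
```

The point isn't the specific rules; it's that the guardrail encodes what the business has decided is normal, rather than blocking everything by default.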

We've moved away from just file-based data loss, because people lose data in more ways than you'd think. There are copy and pastes, web uploads, ChatGPT prompts… being able to understand and control your data in those ways is its own tool. There's a business process where we help companies identify their normal and their risks, then we set up specialised guardrails in a super simple process. I think that's the future of the space. Companies that develop tooling to support security that's done with people are going to succeed moving forward, whereas increasing levels of draconian control and intrusion are going to come to an end.

To learn more about protecting your data, tune into Episode 22 of The Cyber Security Matters Podcast.

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes and don't forget to subscribe.

From National Security to Cyber Security With Mark Daniel Bowling 

The Cyber Security space is an exciting one to be part of. On The Cyber Security Matters Podcast we regularly ask our guests how they got into the industry, and on Episode 21 our guest had a fascinating answer. We were joined by the CISO of ExtraHop, Mark Daniel Bowling, who has over 20 years' experience in Cyber Security, beginning as a special agent and cyber crimes investigator for the FBI. Since then he's transitioned into several roles, most recently as the Chief Risk, Security, and Information Security Officer at ExtraHop. He shared the story of his unusual career path and his advice for other people who want to make a similar journey.

How did you first get into the cybersecurity industry?

It was almost entirely a consequence of my service in the FBI. I spent six years in the United States Navy, where I was supposed to go into submarines, but I ended up on a carrier because we won the Cold War back in '91, so we just didn't need as many subs. I did a little bit of time in the corporate world and didn't love it, then I joined the FBI in 1995. That was right as cyber was becoming a thing. We didn't even have a cyber division in the FBI back then, but we had a cyber investigation section coming out of the white collar branch. We created what was known as NIPC, or the National Infrastructure Protection Center, then eventually when Mueller came in, in 1999 or 2000, he created the cyber division. I grew up in the FBI and cyber at the same time, because I was an Electrical Engineering and Computer Engineering technologist, so it was the right place for me to go.

I made a great career in cyber in the FBI. When I retired from the FBI I went to another agency, which was the Department of Education, making a transition from a very serious law enforcement and intelligence community agency to one that was more public facing. After that I retired from federal service and went into the private sector as a full-time employee, but then I started to move into the consultant track, where I've had multiple great partnerships with customers, and it was really good. I went back to full-time employee status when I came to ExtraHop a couple of years ago. So that's the route that I took, but I would say my experience in the FBI was really what pushed me into cybersecurity.

Who or what has been the biggest influence in your career?

Because much of my career was in public service, the biggest influence has been the amazing public servants that I met along the way. My role model was a man in the United States Navy named Admiral Larsen. He was a four-star Admiral, and I worked for him in the Pentagon. He was just an amazing man. Anybody who knew Admiral Larsen recognises what a great leader he was.

In the FBI there were a couple of amazing public servants too. I would say David Thomas, who was one of the early assistant directors of the cyber division, was also a great man. He helped build the cyber programme within the FBI. He was one of the great men I knew in the FBI. 

And then at the Department of Education there was a man named Chuck Cox. He was in the Air Force Office of Special Investigations before he went over to the Office of the Inspector General. He has since passed away, but he was a tremendous man. Each of those individuals modelled public service in an amazing way for me.

How do you feel your background within the FBI has shaped your career working for a security vendor like ExtraHop?

I think it’s absolutely vital that anybody who works in security understands the nature of threat and risk. If all you do is think about technology, you’re missing the boat. The job of the business is to stay in business, make money, acquire and retain customers, sell more products, provide better services and increase not just your profit margin, but also your presence in whatever sector you’re in. They don’t want to have to worry about cyber security, so the cyber security folks have to understand the threats to the business for them. 

You have to be able to see things in terms of risk, and that's what the FBI did for me. One of the things that Mueller did when he came into the FBI was create priorities, and we created those priorities based on the risks. After 9/11, the number one priority in the FBI was counterterrorism, number two was counterintelligence, and of course, number three was cyber, because of the growth of cyber attacks at that time. So what I learned in the FBI was to see things in terms of risk, understand a threat, appreciate the capabilities of the threat actors, and then turn around and prioritise your resources appropriately to reduce the threat, either by remediation or mitigation. If you can create compensating controls around the threat, it reduces the actual risk. At the FBI I learned that you can accept some threats, others you just have to remove, and some you can create compensating controls around.

What one piece of advice would you give to someone entering the industry?

I would tell them to one, stay humble, two, listen, and three, be willing to do things that you're not comfortable with so that you can learn from the experience. There are different reasons for learning. You should learn how to do something you're not comfortable doing so that you appreciate the people who do it on a daily basis. You should learn to do something to understand the level of effort that it actually takes, so that when you ask people to do it as a leader, you know what they're going to do for you and what they're going to have to give up to get it done.

To learn more about Mark Daniel’s experiences and insights, tune into Episode 21 of The Cyber Security Matters Podcast here. 

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes and don't forget to subscribe.

Securing the Cloud in Cyber Security

Securing the Cloud is a major challenge across the Cyber Security industry. On Episode 19 of The Cyber Security Matters Podcast we spoke to Abhishek Singh, the Co-Founder and CEO of Araali Networks, about how Cyber Security professionals are navigating the growing challenges of keeping the Cloud secure. Abhishek has 25 years' experience in Cyber Security, including a period in which he led a team to build a data-centre-scale platform to enable micro-segmentation and security in a virtual machine environment. This wealth of experience gives him some great insights into the current issues around securing the Cloud.

Could you explain what zero trust is and what the biggest problems are with implementing it?

Zero trust has become a buzzword. People say zero trust means 'trust nothing', but zero trust is fundamentally a networking concept, and the model it responds to is actually very simple. Imagine a castle-and-moat problem, where you have a castle and a moat around it called a perimeter. Everything inside the castle is trusted. Everything outside the perimeter is untrusted. If you have to come into the castle, you come through a firewall, and then you are trusted. So the traditional approach is a networking concept which relies on perimeter security and having an open interior.

The problem with that approach is that your perimeter has to be perfect. If there’s one bad guy coming in, you’re in trouble. If one Trojan horse seeps in, you’re in trouble. If you’re building a zero trust environment you have to keep your controls inside out. Even if your environment is not pristine, every resource has to defend itself. 

The Cloud is very zero trust friendly in that it denies access by default, so if you want to expose anything online you have to explicitly open it up. However, egress is open, and that is the problem with zero trust: it's too hard to close down egress. So if someone is already inside, going out is free, and that is what attackers abuse. So in spite of the Cloud being very different, very novel, and very well thought through upfront, egress is open. And that is the fundamental problem.
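As an illustration of that open-egress default, here's a minimal sketch (assuming an AWS environment and the boto3 SDK, neither of which is mentioned in the episode) that lists security groups whose egress rules still allow all outbound traffic to anywhere:

```python
# Minimal sketch: flag AWS security groups whose egress rules allow all
# outbound traffic (all protocols to 0.0.0.0/0) – the "going out is free"
# problem described above. Assumes AWS credentials are already configured.
import boto3

def find_open_egress_groups(region: str = "us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    open_groups = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg.get("IpPermissionsEgress", []):
            # IpProtocol "-1" means every protocol; 0.0.0.0/0 means any destination.
            all_protocols = rule.get("IpProtocol") == "-1"
            anywhere = any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", []))
            if all_protocols and anywhere:
                open_groups.append((sg["GroupId"], sg["GroupName"]))
                break
    return open_groups

if __name__ == "__main__":
    for group_id, name in find_open_egress_groups():
        print(f"{group_id} ({name}): egress wide open")
```

A fuller zero trust posture would go further and restrict egress to known destinations per workload, but even a simple audit like this makes the open-egress problem visible.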

What do you see as the biggest challenges in securing the cloud itself?

The real question is, 'Is the Cloud more secure?' That is the biggest thing that people need to understand, and there is no straight answer. Depending on who you ask, they will give you a different answer. Many people believe the Cloud is more secure because Amazon has done a lot of good work there, and other cloud providers have followed suit. But the real rub is that it's as secure as you make it. Security is a shared responsibility, and Amazon is very clear about it. They are saying 'we have given you the tools to make it secure', but they have not done your work for you. Amazon has not secured your stuff. If you come from an on-prem background and go into the Cloud, where there are new paradigms, it's very hard to fulfil your shared responsibility. If you have not done so, the Cloud is not more secure.

The other challenge is attackers. On-prem Windows is fertile ground for attackers, and they have not exploited the Cloud in the same way yet. At some point, though, that'll change. Things like the SolarWinds supply chain attack used to be science fiction, right? The Cloud is like that – it's waiting to explode. It's not that it's more secure – it's just that attackers have not diverted their attention to it yet. They're still trying to go after Windows workloads on prem. The moment they come to the Cloud, there's a lot to be had.

Why do you think businesses like Wiz have had such success over the last few years?

So the reason Wiz has been successful is simplicity. Security has been very cumbersome over the years. Orca was the first company that came out and said, 'Give us access to your Cloud account, and without any agents we'll go and survey it and show you visibility'. The ease of use itself was very compelling. My problem with that approach is that showing me my Cloud posture doesn't make me any less vulnerable. I know I'm vulnerable; I did not need to see a picture to get that insight. The thing I need to know is how do I not become exploitable? How do I remediate my vulnerabilities? That is still a hard problem, because the Cloud is hard. It's difficult, which is why it is vulnerable. Showing me my visibility is not helping me become less vulnerable. The thing we should focus on is remediation, and that's the language of zero trust. This approach became so popular because of the ease of installation in a world where Cyber Security is hard to work with. Time to value is unspoken.

To learn more about securing the Cloud, listen to Episode 19 of The Cyber Security Matters Podcast here.

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes and don't forget to subscribe.

Tackling Talent Challenges in the Cyber Security Sector

As recruiters, we’re often faced with a number of challenges when it comes to sourcing talent in the cyber security sector. On Episode 18 of The Cyber Security Matters Podcast we spoke to Jake Bernardes, the CTO for Whistic, about his perspectives on the topic. Here are his insights: 

The reality is that there never has been a skill shortage in cyber security. That is completely fake news. The problems are actually between the hiring manager or hiring team and the candidate. And those issues are extensive. Let’s start with the kind of person that the hiring manager wants. Do they know what the key skills are that that person needs to have? Secondly, people are very bad at writing job descriptions. The next problem is that once you’ve written the job description it gets translated to a job ad. 

We all rely on recruitment in our business. Usually HR are filling in for the recruitment function, and they don't understand what I've told them they're hiring for. Do they know what I've actually asked for? Are they translating it into something which doesn't make any sense? Are they adding things because they are standard requests, like 'must be college or university educated', 'must have this qualification' etc., when I actually don't care as a hiring manager? The problem is when that person in HR misinterprets my request and doesn't put the right spin on it when it goes out to market.

There are then two more problems in that situation. Firstly, the description doesn't make a lot of sense, and secondly, it's not focussing on the right keywords. We often have issues with the salary as well, because this is a highly paid field. We're going out to recruiters who can't fulfil a role where the requirements don't make sense and the salary doesn't work. It's impossible to find someone who doesn't exist, so it creates the illusion of a talent shortage.

The flip side is that I don't have a shortage of candidates. What I have is an inability to screen candidates properly, because everyone has realised that there's money in cyber, so they've made their resumes cyber-orientated. If HR does the screening, they don't have the competence to know what is or isn't relevant. They often miss potential gems because the resumes are quite simple but have one really interesting line at the bottom; they just go and find an SRE or cybersecurity analyst instead. HR puts on a layer of nonsense that they think makes sense, including a salary banding which is completely unrealistic, then throws it to recruiters and hopes that they can turn carbon into diamonds.

Our industry is a weird one. There are so many people who are very good, but on paper they shouldn't be good. On paper they should never have even been in the interview. Standard education and experience don't allow me to spot the people who are going to excel, but people's passion projects do. And so I stand by my statement: there is no skill shortage here. There is a fundamental disconnect and a poor process between cybersecurity leaders and the candidates who are applying. Everything in between those two dots is currently broken.

To learn more about the talent challenges in the Cyber Security sector, tune into The Cyber Security Matters Podcast here.

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes and don't forget to subscribe.

Cyber Security and AI: Insights from David Stapleton

AI has been sweeping the internet for months since the release of ChatGPT. As the world looks at the implications of these powerful new AI models, the cyber security industry is no exception. On Episode 17 of The Cyber Security Matters Podcast we spoke to David Stapleton, the CISO at CyberGRX, who we met at the RSA Conference. With over 20 years of experience in business administration, cyber security, privacy and risk management, David has a unique expertise that makes him the perfect person to share insights on the relationship between Cyber Security and AI. Read on to hear his thoughts!

A lot of attention has been paid to AI – with good reason. I have this mental model where if my mother is aware of something that’s in my field, that’s when it’s really reached the public Zeitgeist. When she asked me a question about the security of AI, I knew it wasn’t a niche topic anymore. 

Artificial intelligence is an interesting phenomenon. Conceptually, it's not that different from any other rapid technological advancement that we've had in the past. Anytime these things have come up, the same conversations have started to happen. With the advent of the cloud, a real fear was sparked – particularly in the cybersecurity community – around the lack of control over those platforms. We had to trust other people to do the right thing. How do I present that risk to the board and get their approval for that? Maybe it's a good financial decision, but we are introducing unnecessary risks.

Another example of that may have been the movement towards Bring Your Own Device (BYOD) and allowing people to connect their personal devices to company networks and data. That sounds terrifying from a security perspective, but you can see how that opens the door to increased productivity, efficiency and flexibility. 

AI is not too dissimilar from that perspective, and we can see plenty of positive aspects to the utilisation of artificial intelligence. It's a catalyst for productivity which could provide exposure to multiple different data points and bring together salient insights in a way that's hard for the human mind to do at that kind of speed. It can also reduce costs, bring additional value to stakeholders and potentially help companies gain competitive advantages.

Conversely, there are potential risks. It is such a new technology, and we're still learning about how it works as we're using it. There are a lot of questions from a legal perspective about the ownership of the output of different AI technologies, particularly with the tools that produce audio-visual outputs. The true implementation and impact of that isn't going to be known until the courts have worked those details out for us.

We’re in a position now where some companies have taken a look at AI and said, ‘We don’t know enough about this, but we feel the risk is too great, so we’re going to prohibit the utilisation of these tools.’ Other companies are taking the exact opposite approach: ‘We also don’t know a whole lot about this, but we’re going to pretend this problem doesn’t exist until things work themselves out.’ 

At CyberGRX we're taking a middle-of-the-road approach where we're treating AI models as another third-party vendor that we're using for work purposes. We're going to share access or data with that tool, but we need to analyse it from a security risk and legal risk perspective before we approve its utilisation. That's a fairly long-winded way of saying that there are amazing opportunities for AI, but there are risks.

We've already seen threat actors starting to use artificial intelligence to beef up their capabilities. You could understand logically how artificial intelligence gives a fledgling or would-be threat actor the ability to get in the game and take action sooner than they otherwise would be able to. When ChatGPT was first released to the public, the very first thing that I put into it was 'Write a keylogger in Python'. That's a little piece of malware that will log your keystrokes and collect things like passwords or credentials. It just did it. It was there on the screen as a perfectly legitimate piece of software. Since then they've tightened the controls, but there was a time when someone with bad intent could start producing different types of malicious software without even learning to code.

To learn more about the uses of AI in Cyber Security, tune into The Cyber Security Matters Podcast here.

We sit down regularly with some of the biggest names in our industry, dedicating our podcast to the stories of leaders in the technology industries that bring us closer together. Follow the link here to see some of our latest episodes and don't forget to subscribe.

RSAC: Insights, Community and Cybersecurity Trends

As spring blossoms in San Francisco, the highly anticipated #RSAC2023 commences, attracting leaders and companies from around the world.

As this was my first conference, I embarked on the journey with a mix of excitement, nerves and curiosity.

The big takeaways from the conference were the valuable insights into the cybersecurity industry, the strong sense of community and the hot topics of investments, the impact of AI and talent shortages. Additionally, we had the opportunity to explore the vibrant food scene of San Francisco, which added a cultural touch to the conference experience.

Grand Opening and Impressive Booths

The conference kicked off with great anticipation. As attendees gathered in the entrance hall, the atmosphere was electric and the buzz of excitement was palpable. As the doors opened, a polite stampede of cybersecurity enthusiasts filled Moscone South Hall. The sight of numerous booths was awe-inspiring, with companies investing substantial resources in exhibits that showcased the industry's advancements and the immense potential of the cyber security world.

Networking calls and conversations up to this point had revolved around the RSA Conference, emphasising its value as a place to connect and meet face-to-face.

Community – Diversity & Inclusion

The most profound takeaway from my first RSAC was the vibrant and supportive community within the cybersecurity industry.

As a newcomer, I found the community surprisingly friendly and collaborative.

I had the privilege of attending the Women in CyberSecurity (WiCyS) drinks event, where representatives from Microsoft, Amazon and Google gathered to promote diversity. The motto "not done yet" resonated strongly, emphasising the importance of the continuous effort needed to enhance diversity in this tech space.

The next morning, I attended the Women in Cyber breakfast, featuring a panel discussion with founders, CEOs and CISOs. The conversation revolved around the challenges faced by successful women in maintaining work-life balance. It was inspiring to witness the support within the community, with ideas exchanged freely, fostering growth and empowerment.

Insights and trends

Apart from the community aspect, RSA Conference 2023 offered valuable insights into trends and concerns.

Investments

One notable takeaway was the significant investment in the Cybersecurity sector. Funding for Cybersecurity start-ups increased from $2.4 billion in Q4 2022 to nearly $2.7 billion in Q1 2023, underscoring the industry’s growth and the recognition of its importance in the digital landscape.

AI – Changing the Landscape

Discussions throughout the conference highlighted the transformative role of artificial intelligence in the Cyber security industry. AI technologies are reshaping the landscape, influencing threat detection, incident response, and overall security operations. The integration of AI into cybersecurity practices has become indispensable for organisations to stay ahead of evolving threats.

Talent Shortage and Calls for Solutions

Addressing the shortage of talent has become a top priority for organisations, with discussions focussing on strategies to attract and retain skilled professionals. Collaborative efforts are necessary to bridge the talent gap and nurture a diverse and competent cybersecurity workforce.

Amid networking and business meetings, we took the opportunity to explore San Francisco's renowned food scene, indulging in the famous clam chowder, oysters and a Buena Vista Irish coffee.

While RSAC is over, another key takeaway is that the fight is not, so we look forward to next year, to witnessing the industry's continued growth and learning new and innovative ways to disrupt cybercrime.