Wednesday, July 3, 2024

Is the cybersecurity industry ready for AI?

AI isn’t new to cybersecurity — most automated security tools rely on AI and ML in some capacity — but generative AI has everyone talking and worried. 

If cybersecurity professionals have yet to address the security implications around generative AI, they are already behind. 

“The train has already left the station,” said Patrick Harr, CEO of SlashNext, in a conversation at RSA Conference 2024 in San Francisco. 

AI-generated threats have already impacted three-quarters of organizations, yet 60% admitted they aren’t prepared to handle AI-based attacks, according to a study conducted by Darktrace.

AI-powered cyberattacks are exposing gaps in the availability of cybersecurity talent. Organizations are already concerned about the skills gap, especially in areas like cloud computing, zero-trust implementation, and AI/ML.

With the growing threat AI poses, cybersecurity teams no longer have the luxury of waiting a few years to fill those talent gaps, Clar Rosso, CEO of ISC2, told an RSAC audience.

Right now, 41% of cybersecurity professionals have little to no experience securing AI, and 21% said they don't know enough about AI to mitigate concerns, according to ISC2 research.

It’s no wonder, then, that these same professionals said that by 2025, AI will be the industry’s biggest challenge.

Why the security industry isn't ready yet

Organizations have used AI to detect cyber threats for years. But what has changed the conversation is generative AI. 

For the first time, thinking about AI moves beyond the corporate network and beyond the threat actor; it now includes the customer. 

As organizations rely on AI for consumer interaction through tools like chatbots, security teams have to rethink their approach to threat detection and incident response to account for interactions between AI and third-party end users.

The problem is governance around generative AI. Cybersecurity teams, and organizations overall, don't have a clear understanding of what data AI models are trained on, who has access to that training data, and how AI fits into compliance.

In the past, if a third party asked for company information that might be deemed sensitive, no one would have given it out; it would have been a potential security risk. Now that information is built into the AI response model, and who is responsible for governing it remains undefined.
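One practical control, sketched below in Python, is to screen data for sensitive values before it ever enters a generative AI training corpus. This is only an illustration of how such a pipeline might look: the regex patterns, the redact() helper, and the sample documents are hypothetical, and a real deployment would lean on a dedicated DLP or data classification service rather than ad hoc regexes.

```python
import re

# Hypothetical patterns for information a company would never hand to a
# third party; a production system would use a proper DLP or data
# classification service instead of these ad hoc regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace each sensitive match with a labeled placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

# Screen documents before they reach the training corpus, so the model
# cannot memorize sensitive values and later surface them via a chatbot.
raw_documents = [
    "Escalate to jane.doe@example.com, SSN 123-45-6789 on file.",
    "Rotate the key sk-4f9a8b7c6d5e4f3a2b1c before the audit.",
]
training_corpus = [redact(doc) for doc in raw_documents]
print(training_corpus)
```

Gating the corpus this way gives the governance question a concrete enforcement point: whoever owns the redaction rules owns part of the answer to who is responsible for that information.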

As cybersecurity teams focus on how to thwart threat actors, they are missing the risks around the data they are sharing willingly.

“From a security standpoint, to safely adopt a technology, we need to understand what the ML model is, how is it connected to the data, is it pretrained, is it continuously learning, how do you drive importance?” said Nicole Carignan, VP of strategic cyber AI at Darktrace, during a conversation at RSAC. 

Building the security team’s expertise

It’s important to remember that generative AI is only one type of AI and, yes, its use cases are finite. Knowing what the AI tools are good at will help security teams begin to build skills and tools to address the AI threat landscape.

However, organizations need to be realistic. The skills gap isn’t going to magically shrink in two or five years just because the need is there. 

As the security team catches up on the skills it needs, managed service providers can step in. The benefit of using an MSP to manage AI security is the ability to see beyond a single organization's network: MSPs can observe how AI threats play out across many different environments.

But organizations will still want to train their internal AI systems. In this situation, it is best for the security team to start in a sandbox using synthetic data, said Narayana Pappu, CEO at Zendata. This will allow security practitioners to test their AI systems with safe data. 
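Here is a minimal sketch of that sandbox idea, using a simple rule-based stand-in for the AI system under test. The event fields, synthetic_login_event(), and score_event() are hypothetical names for illustration, not any vendor's API.

```python
import random
import string

def synthetic_login_event() -> dict:
    """Fabricate a login event containing no real user data."""
    return {
        "user": "user_" + "".join(random.choices(string.ascii_lowercase, k=8)),
        "src_ip": ".".join(str(random.randint(1, 254)) for _ in range(4)),
        "failed_attempts": random.randint(0, 10),
        "off_hours": random.random() < 0.3,
    }

def score_event(event: dict) -> float:
    """Stand-in for the AI system under test; returns a 0-to-1 risk score."""
    score = 0.1 * event["failed_attempts"] + (0.3 if event["off_hours"] else 0.0)
    return min(score, 1.0)

# Exercise the model against synthetic traffic and inspect its decisions
# before it ever sees production data.
events = [synthetic_login_event() for _ in range(1000)]
flagged = [e for e in events if score_event(e) > 0.7]
print(f"flagged {len(flagged)} of {len(events)} synthetic events")
```

The point of the synthetic generator is that practitioners can probe the detection logic's thresholds and failure modes without exposing production data to an unproven system.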

No matter the skills in-house, managing AI threats will eventually come down to how AI is used in security toolkits. Security professionals will need to rely on AI to help implement basic security hygiene practices and to add layers of governance that ensure compliance regulations are met.

“We still have a lot to learn about AI. It’s our job to educate ourselves,” said Rosso.

