Saturday, July 27, 2024

HomeCyberSecurityThe role for AI in cybersecurity

The role for AI in cybersecurity

AI has hit an inflection point. For years it lingered beneath the surface, useful to many technologies and innovations, but it was controlled by engineers and computer scientists. Machine-driven tools improved cybersecurity systems by allowing AI to handle the most tedious, repetitive tasks. 

Then came generative AI, with OpenAI’s ChatGPT and other chatbots. 

Now AI is available to everyone, those with both good and bad intent. While the adoption of AI language models is an exciting push forward, it has also highlighted the limitations of the technology, according to Vijay Bolina, CISO with Google DeepMind, which researches and produces AI technology. 

We’re seeing things like distributional bias and AI hallucinations, Bolina told an audience at RSA Conference 2023 in San Francisco in April. This will force organizations to come to terms with the ethical standards of AI, and the lack of a responsible or trustworthy AI creates new security risks. 

As organizations learn more about the ethics surrounding generative AI and how the technology will impact everything from customer interaction to business operations and cybersecurity, there is still a lot of uncertainty around what the overall impact will be today and in the future.

Merging of ethics and security

There is a misconception that when AI is sharing incorrect information, whether purposefully or by accident, it is automatically a security problem. But that’s not the case.

Ethics and security aren’t the same, Rumman Chowdhury, co-founder of Bias Buccaneers, told an audience at RSA. 

“There’s one very specific distinction: Most of cybersecurity thinks about malicious actors, but a lot of irresponsible AI is built around unintended consequences and unintentionally implementing bad things,” said Chowdhury.

Disinformation is a good example of this. Bad actors will create a malicious deepfake — and a security problem — but if people are sharing them because they believe the information, now you’ve moved into an ethics problem. 

“You have to address both problems,” said Chowdhury. An ethics approach will focus on the context on how something is used, but the security approach is meant to flag any potential problem. 

AI red teams

Organizations regularly use red and blue teams to help find points of weakness in the network infrastructure. Red teams go on the offensive and simulate attacks, while the blue team’s job is to defend the organization’s assets from these attacks. 

Organizations like Microsoft, Facebook and Google now utilize AI red teams, and the trend is gaining popularity as cybersecurity analysts turn to AI red teams to investigate vulnerabilities in AI systems. They are useful for anyone who is working with large computational models or general purpose AI systems that have access to multiple applications, said Bolina. 

“It’s an important way to challenge some of the safety and security controls that we have, using an adversarial mindset,” said Bolina. 

The red teams should have a mix of cybersecurity and machine learning backgrounds to work together to understand what vulnerabilities in AI would look like. The problem with building an AI red team is the lack of skilled AI cybersecurity professionals. 

And yet, AI — or more specifically, machine learning — can help solve the talent shortage, according to Vasu Jakkal, corporate vice president with Microsoft Security Business and speaker at RSA. 

Generative AI can become an ally for new security professionals who may otherwise feel overwhelmed. For more seasoned security analysts, generative AI offers time to develop their skills through automation of repetitive tasks. They can integrate their experience and expertise into the AI tool, essentially sharing those skills with someone who lacks them. 


Source link

Bookmark (0)
RELATED ARTICLES
- Advertisment -spot_img

Most Popular

Sponsored Business

- Advertisment -spot_img