GenAI Causes Cyber Security Fears Among Experts

Multiple cybersecurity experts have warned of the risks associated with AI-generated code, saying that rapid implementation of the new technology could weaken existing security infrastructure.


Written by Aleksander Hougen (Co-Chief Editor)

Reviewed by Jackie Leavitt (Co-Chief Editor)



A paper published last month by researchers at the University of Texas at San Antonio (UTSA) found that AI-generated code has a significant tendency to hallucinate package dependencies, recommending libraries that do not exist.

Analyzing over 500,000 code samples generated by 16 different AI models, the study concludes that at least 5.2% of package dependencies in code generated by commercial AI models are complete hallucinations — meaning they don’t exist. This number shoots up to 21.7% for open-source models.

Although these errors don’t stop the code from functioning, they present a grave security threat: a malicious actor could simply publish a new package under the hallucinated name, causing the software to import whatever code that package contains, which could range from malware to illicit data collection.

Making the problem worse, as many as 43% of the hallucinated package names were repeated consistently across queries, meaning they are predictable enough that an attacker could identify them simply by running their own tests with AI-generated code.

Theoretically, this problem could be solved by proper verification and testing of code by human developers. However, with tech companies considering moving more and more of their entry-level programming tasks to AI, it seems likely that the same cost-cutting incentives will also lead to fewer resources being spent on exactly this kind of code verification and testing.
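For illustration, one basic form such verification could take is checking every dependency an AI model suggests against the public package index before installing anything. The sketch below is a hypothetical example for Python projects, using PyPI's public JSON endpoint (https://pypi.org/pypi/<name>/json); the script name and structure are assumptions for this article, not part of the UTSA study.

# check_deps.py: hypothetical sketch that flags requirements entries that do not
# resolve on the public PyPI index, a basic guard against hallucinated package names.
import re
import sys
import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Return True if the package name resolves on the public PyPI JSON API."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404 or similar: the name has never been published


def main(requirements_path: str) -> None:
    with open(requirements_path) as f:
        for line in f:
            # Drop comments, then cut the name off at any version specifier or extra.
            name = re.split(r"[<>=~!\[;]", line.split("#")[0], 1)[0].strip()
            if not name:
                continue
            if not package_exists_on_pypi(name):
                print(f"WARNING: '{name}' not found on PyPI; possible hallucination")


if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "requirements.txt")

A check like this only confirms that a name exists on the index, not that the package behind it is trustworthy, so it complements rather than replaces human code review.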

Tech CEOs Warn of Danger to Security Infrastructure

It’s not just researchers who are concerned about the security implications of generative AI, but also tech CEOs. Speaking at this year’s RSA Conference, the CEOs of Palo Alto Networks and SentinelOne both raised concerns about the implications of artificial intelligence in the SaaS space.

In his keynote, Palo Alto Networks CEO Nikesh Arora talked about how embedding generative AI — artificial intelligence that can create its own new content, such as images, video, audio and code — across all SaaS products presents a grave risk to security. 

“The whole idea of security will change,” Arora said. “[We’ll have to] constantly test those models, test those applications, and make sure they’re not going to go rogue on you in some way, shape or form.”

SentinelOne Co-Founder and CEO Tomer Weingarten focused more on the underlying infrastructure, noting that current cybersecurity infrastructure is far from being able to handle the broad application of generative AI. 

“Our infrastructure is still riddled with the same issues that have plagued us for years,” Weingarten said. “We’re now onboarding an incredibly powerful technology [onto] those same foundations, in the form of AI.”

Adapt or Die

The clear conclusion from both the recent study and the CEO keynotes is that the entire cybersecurity field needs to adapt — and rapidly — if it wants to get ahead of the new threats posed by AI-generated code. 

Only a few years into broad adoption, the technology is already breaking established security paradigms and putting embattled security infrastructure under increased strain and risk.

How the field moves from here is impossible to predict, but one thing is clear: cybersecurity practices have to evolve to meet these new challenges, or face a growing number of breaches and incidents in the coming years.
