Employees attempting to use a company device to access Chinese tech startup DeepSeek’s wildly popular artificial intelligence app could inadvertently be exposing their organization to threats such as cyberespionage, experts warned.
A major red flag, they say, is DeepSeek’s terms of service, which state that user data is stored on servers in China and is subject to Chinese law — law that mandates cooperation with the country’s intelligence agencies.
The Chinese government has long been accused of engaging in espionage campaigns to advance goals such as stealing intellectual property from Western organizations or gathering geopolitical intelligence. It has consistently denied the allegations.
“China is very good at mining data,” Andrew Grealy, head of Armis Labs, a division of San Francisco, California-based cybersecurity firm Armis Inc., said in an interview. “Anything that’s in the terabytes is not an issue for them.”
DeepSeek attracted global attention after releasing an open-source AI model that it claims was built at a fraction of the cost of those from U.S. rivals such as OpenAI, the maker of ChatGPT.
The news rattled the technology world last week, prompting questions about America’s ability to maintain a position of AI dominance on the world stage. President Donald Trump called the development a “wake-up call.”
Amid the frenzy, Microsoft announced that it was making DeepSeek’s latest AI model available on “a trusted, scalable, and enterprise-ready platform.” Other tech giants, including Amazon Web Services, have made similar moves.
Meanwhile, White House press secretary Karoline Leavitt said last week that U.S. officials are examining the national security implications of DeepSeek’s app, an AI chatbot. Italy and Taiwan have banned it.
In addition, hundreds of Armis customers blocked the app last week as the widely publicized tool rapidly gained popularity, according to Grealy.
Netskope, a Santa Clara, California-headquartered cybersecurity company, observed a similar pattern last week, Ray Canzanese, director of the firm’s Threat Labs division, told CFO Dive.
“We saw almost half of our customers worldwide trying out DeepSeek, and the other half more or less blocking their users from trying it out,” he said.
Canzanese said that some Netskope customers automatically block unapproved apps.
“The risk is that your employees are going to fire up the app and start putting sensitive data in there — customer data, source code, regulated data, intellectual property,” he said. “That’s the risk of DeepSeek, that’s the risk really with any of these generative AI apps.”
Besides questions related to the government of China, DeepSeek has prompted other concerns.
The app’s protections against data leaks as well as hallucinations — instances in which an AI model confidently presents inaccurate information as fact — are “notably weak,” according to Ophir Dror, co-founder of cybersecurity firm Lasso Security.
“We also observed broader security risks beyond its origin and supply chain, including suspicious behaviors that could pose a threat to organizations and government agencies,” he said in an email. “Given these findings, we strongly advise against using these models in critical workflows or sharing any sensitive information with them.”
New York-based cybersecurity firm Wiz said last week it discovered that DeepSeek had accidentally left more than a million lines of data exposed in an unsecured database. The database contained a “significant volume of chat history, backend data and sensitive information,” Wiz security researcher Gal Nagli said in a blog post at the time.
Another security investigation, conducted by Cisco, found that DeepSeek’s AI model exhibited a 100% attack success rate, failing to block a single harmful prompt.
“It’s very tempting to jump into using DeepSeek, but there are a lot of risks involved,” Melissa Ruzzi, director of AI at security firm AppOmni, said in an interview.
A DeepSeek spokesperson could not immediately be reached for comment.
Editor’s note: This story has been updated with comments from Lasso Security.