Participate in the new and weekly newsletter for the latest updates and exclusive content on the leading AI coverage in the industry. learn more
DeepSeek and their R1 models are not wasted time to rewrite cyber security AI rules in real time, from startups to enterprise providers, this month, to enterprise providers who pilot integration.
R1 is developed in China and is based on pure reinforced learning (RL) without monitoring fine adjustments. It is also an open source, and is immediately attractive to almost all cyber security startups that are open source architecture, development, and deployed all -in.
With a $ 6.5 million investment in the model of DeepSeek, a performance that matches O1-1217 of Openai. Deepseek’s price setting sets new standards Compared to the Openai model, the cost per 100 tokens is significantly lower. The Deep Seek-Reasoner model claims a $ 2.19 output token per million, but the O1 model of OPENAI charges $ 60. The price difference and its open source architecture have attracted attention as CIO, CISO, cyber security startups, and enterprise software providers.
(Interestingly, Openai claims DeepSeek used the model The company says that the company excluded data through multiple queries to train R1 and other models. )
AI breakthrough with hidden risk that keeps appearing
The center of model security and reliability is whether censorship and secret bias are incorporated into the model core, the first director of the Ministry of Land Security (DHS) cyber security and infrastructure security institution One Crisk Lebs warns.CISA), And recently, the Chief Officer of the Supreme Public Policy Centinelone。
“Censorship of content critical to the Chinese Communist Party (CCP) may be” burned in “for the model, and may have objective results, and may cause design functions. There is, “he said. China’s AI model “Political Robotomy” may support the development and global diffusion of the US -based open source AI model. “
He pointed out that as the debate progresses, democratization of US products should increase overseas US software power and reduce the spread of China’s censorship worldwide. “The low -cost and simple computing fundamentals of R1 is questioning the effectiveness of the US strategy to take in cutting -edge western technology, including GPUs,” he said. 。 “In a sense, they are really” less “. “
Merritt Baer, CISO AT Reco Several security startup advisors tell the venture beat: [DeepSeek-R1] A wider western Internet data controlled by Internet sources (or is likely to be explained that China’s control or firewall is likely to be insufficient) may be one solution to some of the concerns. yeah. I’m not very worried about what is clear, such as censorship of President XI’s criticism. The fact that the model creator is part of the system of the Chinese impact campaign is a troublesome factor, but it is not the only factor that should be considered when selecting a model. “
Deepseek’s training was sold in China, but it was sold in the NVIDIA H800 GPU, which lacks the power of the more advanced H100 and the power of the A100 processor, so DeepSeek executes the model more hardware. Democratized to an organization with plenty of time. Estimate and invoice of materials describing how to build a $ 6,000 system that can run R1 is growing throughout social media.
R1 and followon models are built to avoid US sanctions in the United States. This is the point that Crebus is regarded as a direct issue for the US AI strategy.
ENKRYPT AI’s Deepseek-R1 Red Teaming Report It can be seen that the model is vulnerable to generating “harmful, toxic, bias, CBRN, and non -secure code output”. The RED team continues as follows. “It may be suitable for a narrow scope, but the model has a considerable vulnerability in the operation risk and security risks, as described in our methodology. When using a model, it is strongly recommended to implement the easing. ”
The ENKRYPT AI RED team has three times the prejudice of Deepseek-R1, four times more vulnerable to the generation of codes that are more unstable than O1 in Open AI, four times toxic to GPT-4O. I also found something. The RED team has also discovered that the model is more likely to create a 11x harmful output than the O1 of the open AI.
Know the risk of privacy and security before sharing data
Deepseek’s mobile apps are currently dominating global downloads, and in the web version, record tropic is found, and all personal data is shared on both platforms captured on Chinese servers. Enterprise is considering running a model on an isolated server to reduce the threat. Venturebeat learned about pilots operated by commoditized hardware throughout the United States.
Data shared in mobile apps and web apps can be accessed from the IntellignC report in China.
The Chinese National Information Law states that companies must “support, support, and cooperate with the state of the state of the state. This practice is very wide and such a threat to US companies and citizens. Ministry of Land Security A was released Data Security Business Advisory。 For these risks The US Navy issued an order Prohibit Deepseek-R1 from work-related systems, tasks, or projects.
An organization that quickly piloted a new model is all -in with open source, and the test system is separated from the internet network and the Internet. The goal is to make sure that all data is private while running a benchmark in a specific use case. Companes and platforms such as double curved labs will safely deploy R1 in US or European data centers and maintain sensitive information so that China’s regulations are unreasonable. See the excellent summary of this model of the model.
STARTUP CEO, Itamar Golan Quick security In addition, the top 10 core members of Owasp’s large language model (LLMS) claim that the risk of data privacy exceeds only DeepSeek. “The organization should not provide sensitive data to Openai and other US -based model providers,” he pointed out. “If data flow to China is an important national security concern, the US government subsidizes domestic AI providers to maintain competitive prices and market balance. I want to intervene through.
Recognizing the security flaws of R1, a few days after the model was introduced, we added quick support to inspect the traffic generated by the Deepseek-R1 query.
Cloud Security Providers Wiz’S during DeepSeek’s public infrastructure Research team With more than 1 million logs with chat history, secret key, and backend details, I found a clickhouse database on the Internet. Potential privilege escalations have become quicker because authentication is not enabled for the database.
The discovery of Wiz’s research emphasizes the risk of promptly adopting unbounded AI services based on large -scale enhanced security frameworks. Wiz has been responsibly clarified the violation and urged DeepSeek to lock down the database immediately. Deepseek’s first monitoring emphasizes three core lessons to keep in mind when the AI provider introduces a new model.
First, before starting the model, run the red team and thoroughly test AI infrastructure security. Second, assuming that you have a low privilege access, adopt the concept of zero -trade, the infrastructure has already been infringed, and you do not trust the multi -domain connection of the system and cloud platform as a whole. Third, security teams and AI engineers cooperate to have a way for models to protect confidential data.
Deepseek creates security paradox
Crebs warned how the real danger of the model was made, not just the place where it was created. Deepseek-R1 is a by-product of the Chinese technology industry, where the purpose of the private sector and national information is inseparable. As Creves explains, the concept of buying the bias and filtering mechanism is already “baked” at the basic level, so the concept of local introduction and execution locally as a protective agent is an illusion.
Cyber security and national security leaders are the first of many models of outstanding performances and low costs found from China and other national states, which Deepseek-R1 performs control of all collected data. I agree with something.
Conclusion: If the open source has long been regarded as the power of the democratization of software for a long time, the Paradox created by this model indicates that if the national state chooses, the open source can be freely used as a weapon.