
Will Enterprises Adopt DeepSeek?
www.informationweek.com
Lisa Morgan, Freelance Writer
February 27, 2025 · 11 Min Read

DeepSeek recently bested OpenAI and other companies, including Amazon and Google, when it comes to LLM efficiency. Most notably, the R1 and V3 models are disrupting LLM economics.

According to Mike Gualtieri, VP and principal analyst at Forrester, many enterprises have been using Meta Llama for an internal project, so they're likely pleased that there's a high-performing model available that is open source and free.

"From a development and experimental standpoint, companies are going to be able to duplicate this exactly because they published the research on the optimization. It kind of triggers other companies to think, maybe in a different way," says Gualtieri. "I don't think that DeepSeek is necessarily going to have a lock on the cost of training a model and where it can run. I think we're going to see other AI models follow suit."

DeepSeek has taken advantage of existing methods, including:

- Distillation, which transfers knowledge from larger teacher models to smaller student models, reducing the size required (a minimal code sketch appears below)
- Floating Point 8 (FP8), which minimizes compute resources and memory utilization
- Reinforcement learning
- Supervised fine-tuning (SFT), which improves a pre-trained model's performance by training it on a labeled dataset

According to Adnan Masood, chief AI architect at digital transformation services company UST, these techniques have been open sourced by US labs for years. What's different is DeepSeek's very effective pipeline.

"Before, we had to just throw GPUs at problems, [which costs] millions and millions of dollars, but now we have this cost and this efficiency," says Masood. "The training cost is under $6 million, which is completely challenging this whole assumption that you need a billion-dollar compute budget to build and train these models."
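Of the techniques listed above, distillation is the most self-contained to illustrate. Below is a minimal sketch, in PyTorch, of the classic soft-label formulation: the student model is trained against the teacher's temperature-softened output distribution, blended with ordinary cross-entropy on ground-truth labels. This is a generic textbook example, not DeepSeek's published pipeline (its R1 distillation reportedly fine-tunes smaller models on R1-generated outputs); the temperature and alpha values here are arbitrary assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Classic knowledge-distillation objective: KL divergence between
    temperature-softened teacher and student distributions, blended with
    standard cross-entropy against the true labels."""
    # Soften both distributions with the temperature, then compare.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean")
    kd = kd * temperature ** 2  # rescale gradients for the softened loss
    # Hard-label loss keeps the student anchored to ground truth.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy usage: a 4-class problem with batch size 2.
student_logits = torch.randn(2, 4, requires_grad=True)
teacher_logits = torch.randn(2, 4)
labels = torch.tensor([1, 3])
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```

The appeal for efficiency is that the student can be a fraction of the teacher's size while inheriting much of its behavior, which is one reason training costs can fall so sharply.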
Do Enterprises Want To Adopt It?

In a word, yes, with a few caveats.

"We're already seeing adoption, though it varies based on an organization's AI maturity. AI-driven startups that Valdi and Storj engage with are integrating DeepSeek into their evaluation pipelines, experimenting with its architecture to assess performance gains," says Karl Mozurkewich, senior principal architect at Valdi.ai, a Storj company. "More mature enterprises we work with are taking a different approach -- deploying private instances of DeepSeek to maintain data control while fine-tuning and running inference operations. Its open-source nature, performance efficiency, and flexibility make it an attractive option for companies looking to optimize AI strategies."

And the economics are hard to ignore.

"DeepSeek is a game-changer for generative AI efficiency. [It] scores an 89 based on MMLU, GPQA, math, and human evaluation tests -- the same as OpenAI o1-mini -- but for 85% lower cost per token of usage. The price-to-performance-quality ratio has been massively improved in GenAI due to DeepSeek's approach," says Mozurkewich. "Right now, the market continues to be compute-constrained. Advances like DeepSeek will force many companies to have spare compute capacity to test [an] innovation when it is released. Most companies with AI strategies already have their committed GPU capacity fully utilized."

Dan Yelle, chief data and analytics officer at small business lending company Credibly, says that with the AI landscape evolving at lightning speed, enterprises may hesitate to adopt DeepSeek over the medium term.

"[B]y prioritizing innovation over immediate large-scale profits, DeepSeek may force other AI leaders to accept lower margins and to turn their focus to improving efficiency in model training and execution in order to remain competitive," says Yelle. "As these pressures reshape the AI market, and it reaches a new equilibrium, I think performance differentiation will again become a bigger factor in which models an enterprise will adopt."

However, he also says differentiation may increasingly be based on factors beyond standard benchmark metrics.

"It could become more about identifying models that excel in specialized tasks that an enterprise cares about, or about platforms that most effectively enable fine-tuning with proprietary data," says Yelle. "This shift towards task specificity and customization will likely redefine how enterprises choose their AI models."

But the excitement should be tempered with caution.

"Large language models (LLMs) like ChatGPT and DeepSeek-V3 do a number of things, many of which may not be applicable to enterprise environments, yet. While DeepSeek is currently driving conversation given its ties to China, at this stage, the question is less about whether DeepSeek is the right product, but rather is AI a beneficial capability to leverage given the risks it may carry," says Nathan Fisher, managing director at global professional services firm StoneTurn and former special agent with the FBI. "There is concern in this space regarding privacy, data security, and copyright issues. It's likely many organizations would implement AI technology, especially LLMs, where it might serve to enhance efficiency, security, and quality. However, it is reasonable most will not fully commit or implement until some of these issues are decided."

Be Aware of Risks

Lower cost and higher efficiency need to be weighed against potential security and compliance issues.

"The CIOs and leaders I've talked to have been contemplating how to balance the temptation of a cheaper, high-performing AI versus the potential security and compliance tradeoff. This is a risk-benefit calculation," says UST's Masood. "[They're] also debating about backdooring the model, [where] you have a secret trigger which causes malicious activity, like [outputting] sensitive data, or [executing] unauthorized actions. These are well-known attacks on large language models."

Unlike Azure or AWS, which provide regulatory compliance guarantees, DeepSeek offers no such assurances. The implementation also matters: one could use a hosted model and APIs, or self-host. Masood recommends the latter.

"[T]he biggest benefit you have with a self-hosted model is that you don't have to rely on the third party," says Masood. "So, the first thing, if it's hosted in an adversarial environment, and you try to run it, then essentially, you're copying and pasting into that model, it's all going on somebody else's server, and this applies to any LLM you're using in the cloud. Are they going to keep your data and prompt and use it to train their models? Are they going to use it for some adversarial perspective? We don't know."
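For teams weighing the self-hosted route Masood describes, a minimal version of local inference might look like the sketch below, which uses the Hugging Face transformers library to run one of the distilled R1 checkpoints DeepSeek has published openly. The model ID, dtype, and generation settings are illustrative assumptions; a production deployment would typically sit behind a proper serving stack (vLLM, for example) plus the access controls discussed next.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint: one of the distilled R1 models DeepSeek has
# published. Weights are downloaded once and run entirely on local
# hardware, so prompt data never leaves the machine.
MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # small enough for a single modern GPU
    device_map="auto",
)

prompt = "Summarize the key risks of adopting a new LLM vendor."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The design point matters more than the specific libraries: with self-hosting, the questions Masood raises about prompt retention simply do not arise, because no third party ever sees the traffic.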
In a self-hosted environment, enterprises get the benefits of continuous logging and monitoring, and the concept of least privilege. It's less risky because PII stays on premises.

"If you allow limited usage within the company, then you must have security and monitoring in place, like access control, blocking, and sandboxing for the public DeepSeek interface," says Masood. "If it's a private DeepSeek interface, then you sandbox the model and make sure that you log all the queries, and everything gets monitored in that case. And I think the biggest challenge is bias oversight. Every model has built-in bias based on the training data, so it becomes another element in corporate policy to ensure that none of those biases seep into your downstream use cases."
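What "log all the queries" could look like in practice is sketched below: a thin, hypothetical gateway that redacts obvious PII patterns before a prompt reaches the self-hosted model and writes an audit record for every call. Nothing here comes from Masood or DeepSeek; the regex patterns and the generate_fn hook are placeholder assumptions standing in for a real PII scanner and the model call from the previous sketch.

```python
import logging
import re
from datetime import datetime, timezone

# Audit log for every prompt/response pair (hypothetical setup).
logging.basicConfig(filename="llm_audit.log", level=logging.INFO)

# Placeholder patterns; a real deployment would use a dedicated
# PII-detection service rather than a couple of regexes.
PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
]

def guarded_query(generate_fn, user_id: str, prompt: str) -> str:
    """Redact obvious PII, call the local model, and audit-log the call."""
    for pattern, replacement in PII_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    response = generate_fn(prompt)  # e.g., wraps model.generate above
    logging.info("%s user=%s prompt=%r response_len=%d",
                 datetime.now(timezone.utc).isoformat(),
                 user_id, prompt, len(response))
    return response

# Toy usage with a stand-in for the real model call:
print(guarded_query(lambda p: f"(model reply to: {p})",
                    "jdoe", "Email alice@example.com about SSN 123-45-6789"))
```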
Security firm Qualys recently published DeepSeek R1 testing results, and there were more test failures than successes. The KB Analysis prompted the target LLM with questions across 16 categories and evaluated the responses, which were assessed for vulnerabilities, ethical concerns, and legal risks.

Qualys also conducted jailbreak testing, which bypasses built-in safety mechanisms to identify vulnerabilities. In the report, Qualys notes, "These vulnerabilities can result in harmful outputs, including instructions for illegal activities, misinformation, privacy violations, and unethical content. Successful jailbreaks expose weaknesses in AI alignment and present serious security risks, particularly in enterprise and regulatory settings." The test involved 885 attacks using 18 jailbreak types. The model failed against 58% of the attacks, demonstrating significant susceptibility to adversarial manipulation.

Amiram Shachar, co-founder and CEO of cloud security company Upwind, doesn't expect significant enterprise adoption, largely because DeepSeek is a Chinese company with direct access to a vast trove of user data. He also believes shadow IT will likely surge as employees use the tool without approval.

"Organizations must enforce strong device management policies to limit unauthorized app usage on both corporate and personal devices with sensitive data access. Otherwise, employees may unknowingly expose critical information through interactions with foreign-operated AI tools like DeepSeek," says Shachar. "To protect their systems, enterprises should prioritize AI vendors that demonstrate strong data protection protocols, regulatory compliance, and the ability to prevent data leaks, like AWS with their Bedrock service. At the same time, they must build governance frameworks around AI use, balancing security and innovation. Employees need education on the risks associated with shadow IT, especially when foreign platforms are involved."

Dan Lohrmann, field CISO at digital services and solutions provider Presidio, says enterprises will not adopt DeepSeek because their data is stored in China. In addition, some governments and defense organizations have already banned DeepSeek use, and more will follow.

"I recommend that enterprises proceed with caution on DeepSeek. Any research or formally sanctioned testing should be done on separate networks that are built upon secure processes and procedures," says Lohrmann. "Exceptions may include research organizations, such as universities, or others who are experimenting with new AI options with non-sensitive data."

For enterprises, Lohrmann believes DeepSeek is a large risk.

"There are functional risks, operational risks, legal risks, and resource risks to companies and governments. Lawmakers will largely treat this situation [like] TikTok and other apps that house their data in China," says Lohrmann. "However, staff are looking for innovative solutions, so if you don't offer GenAI alternatives that work well and keep the data secure, they will go elsewhere and take matters into their own hands. Bottom line, if you are going to say no to DeepSeek, you'd better offer a yes to workable alternatives that are secure."

Sumit Johar, CIO of financial automation software company BlackLine, says that at a minimum, enterprises must have visibility into how their employees are using publicly available AI models and whether they are sharing sensitive data with them.

"Once they see the trend among employees, they may want to put additional controls in place to allow or block certain AI models in line with their AI strategy," says Johar. "Many organizations have deployed their own chat-based AI agents for employees, which are deployed internally and substitute for the publicly available models. The key is to make sure they are not blocking the learning for their employees but helping them avoid mistakes that can cost enterprises in the long term."

Unprecedented volatility in the AI space has already convinced enterprises that their AI strategy shouldn't rely on only one provider.

"They'll expect solution providers to provide the flexibility to pick and choose the AI models of their choice in a way that doesn't require intrusive changes to the basic design," says Johar. "It also means that the risk of rogue or unsanctioned AI use will continue to rise, and they need to be more vigilant about the risk."

Proceed With Caution at a Minimum

StoneTurn's Fisher says there are two aspects to consider in terms of policy. First, are AI technology and LLMs generally appropriate for the individual company, its operations, its industry, and so on? Based on the answer, companies need to monitor for and/or restrict employee usage if it is determined to be inappropriate for work product. Second, is the use of DeepSeek-V3 specifically approved for use on company devices?

"As a practitioner of national security and cybersecurity investigations, I would cautiously suggest it is premature to allow for the use of DeepSeek-V3 on company devices and would recommend establishing policy prohibiting such until the actual and potential security risks of DeepSeek-V3 can be further independently investigated and reviewed," says Fisher.

While it would be short-sighted and overly alarmist to prescribe that all China-produced tech products should be categorically off the table, Fisher says there is enough precedent to justify the need for due diligence review and scrutiny of engineering before something like DeepSeek is approved and adopted by US companies.

"It's [fair] to suspect, lacking further analysis, that DeepSeek-V3 may be capable of collecting all manner of data that may make companies, customers, and shareholders very uncomfortable, and perhaps vulnerable to third parties seeking to disrupt their business."

Reporting around DeepSeek's security flaws over recent weeks is enough to raise alarm bells for organizations that may be considering what AI platform best fits their needs.

There are proposals in motion in the US government to ban DeepSeek from government-owned devices. Globally, bans are already in place in certain jurisdictions regarding DeepSeek-V3's use. As it relates to AI more broadly, Fisher says lawmakers first need to solve the questions around data privacy and copyright infringement. The US government needs to make determinations on what, if any, regulation will be applied to AI. Those issues surpass questions about DeepSeek specifically and will have a much greater overall impact in this space.
"Stay informed. Pay close attention to developments in terms of regulation and privacy considerations. Big issues need to be addressed, and so far, the technology is advancing and being adopted much faster and more broadly than these concerns have been addressed or resolved," says Fisher. "Proceed with caution in adopting emerging technology without significant internal review and discussion. Understand your business, what laws and regulations may be applied to your use of this technology, and what technical risk these tools may invite into your network environments if not properly vetted."

And finally, a recent Gartner research note sums up the guidance: Don't overreact, and reassess DeepSeek's achievement with caution.

About the Author

Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites, including tech pubs, The Washington Post, and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.