Microsoft brings a DeepSeek model to its cloud
techcrunch.com
Microsoft's close partner and collaborator, OpenAI, might be suggesting that DeepSeek stole its IP and violated its terms of service. But Microsoft still wants DeepSeek's shiny new models on its cloud platform.

Microsoft today announced that R1, DeepSeek's so-called reasoning model, is available on Azure AI Foundry, Microsoft's platform that brings together a number of AI services for enterprises under a single banner. In a blog post, Microsoft said that the version of R1 on Azure AI Foundry has "undergone rigorous red teaming and safety evaluations, including automated assessments of model behavior and extensive security reviews to mitigate potential risks."

In the near future, Microsoft said, customers will be able to use distilled flavors of R1 to run locally on Copilot+ PCs, Microsoft's brand of Windows hardware that meets certain AI readiness requirements. "As we continue expanding the model catalog in Azure AI Foundry, we're excited to see how developers and enterprises leverage […] R1 to tackle real-world challenges and deliver transformative experiences," continued Microsoft in the post.

The addition of R1 to Microsoft's cloud services is a curious one, considering that Microsoft reportedly initiated a probe into DeepSeek's potential abuse of its and OpenAI's services. According to security researchers working for Microsoft, DeepSeek may have exfiltrated a large amount of data using OpenAI's API in the fall of 2024. Microsoft, which also happens to be OpenAI's largest shareholder, notified OpenAI of the suspicious activity, per Bloomberg.

But R1 is the talk of the town, and Microsoft may have been persuaded to bring it into its cloud fold while it still holds allure. It's unclear whether Microsoft made any modifications to the model to improve its accuracy and combat its censorship. According to a test by the information-reliability organization NewsGuard, R1 provides inaccurate answers or non-answers 83% of the time when asked about news-related topics. A separate test found that R1 refuses to answer 85% of prompts related to China, possibly a consequence of the government censorship to which AI models developed in the country are subject.
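For developers curious what "leveraging R1 on Azure AI Foundry" might look like in practice, the sketch below uses the azure-ai-inference Python SDK to call a chat model deployed in a Foundry project. The endpoint URL, API key environment variables, and the "DeepSeek-R1" deployment name are illustrative assumptions, not details confirmed in Microsoft's announcement.

# Minimal sketch: calling an R1 deployment on Azure AI Foundry with the
# azure-ai-inference SDK (pip install azure-ai-inference).
# Endpoint, key, and model deployment name are placeholders you would take
# from your own Azure AI Foundry project.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    # e.g. https://<your-project>.services.ai.azure.com/models (assumed format)
    endpoint=os.environ["AZURE_AI_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_AI_KEY"]),
)

response = client.complete(
    model="DeepSeek-R1",  # assumed deployment name; check your model catalog
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Summarize the trade-offs of distilled reasoning models."),
    ],
)

print(response.choices[0].message.content)

The same endpoint shape is what Microsoft points to for the distilled R1 variants it says are coming to Copilot+ PCs, though local execution there would go through Windows tooling rather than a cloud SDK call like this one.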