• Mistral AI Introduces Codestral Embed: A High-Performance Code Embedding Model for Scalable Retrieval and Semantic Understanding

    Modern software engineering faces growing challenges in accurately retrieving and understanding code across diverse programming languages and large-scale codebases. Existing embedding models often struggle to capture the deep semantics of code, resulting in poor performance in tasks such as code search, retrieval-augmented generation (RAG), and semantic analysis. These limitations hinder developers’ ability to efficiently locate relevant code snippets, reuse components, and manage large projects effectively. As software systems grow increasingly complex, there is a pressing need for more effective, language-agnostic representations of code that can power reliable and high-quality retrieval and reasoning across a wide range of development tasks.
    Mistral AI has introduced Codestral Embed, a specialized embedding model built specifically for code-related tasks. Designed to handle real-world code more effectively than existing solutions, it enables powerful retrieval capabilities across large codebases. What sets it apart is its flexibility—users can adjust embedding dimensions and precision levels to balance performance with storage efficiency. Even at lower dimensions, such as 256 with int8 precision, Codestral Embed reportedly surpasses top models from competitors like OpenAI, Cohere, and Voyage, offering high retrieval quality at a reduced storage cost.
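    For illustration, here is a minimal sketch of what requesting a compact embedding could look like over Mistral’s embeddings endpoint. The output_dimension and output_dtype field names are assumptions based on the configurability described above, not confirmed API parameters, so consult Mistral’s API reference before relying on them.

```python
# Hypothetical sketch: embed a code snippet at reduced dimension and precision.
# The endpoint and bearer-token auth follow Mistral's standard API conventions;
# the "output_dimension" and "output_dtype" fields are assumed names for the
# configurable dimension/precision options described in the article.
import os
import requests

response = requests.post(
    "https://api.mistral.ai/v1/embeddings",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "codestral-embed-2505",
        "input": ["def binary_search(items, target): ..."],
        "output_dimension": 256,  # assumed field: truncate vectors to 256 dims
        "output_dtype": "int8",   # assumed field: quantized int8 precision
    },
    timeout=30,
)
response.raise_for_status()
vector = response.json()["data"][0]["embedding"]
print(len(vector))  # expected: 256
```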
    Beyond basic retrieval, Codestral Embed supports a wide range of developer-focused applications. These include code completion, explanation, editing, semantic search, and duplicate detection. The model can also help organize and analyze repositories by clustering code based on functionality or structure, eliminating the need for manual supervision. This makes it particularly useful for tasks like understanding architectural patterns, categorizing code, or supporting automated documentation, ultimately helping developers work more efficiently with large and complex codebases. 
    Codestral Embed is tailored for understanding and retrieving code efficiently, especially in large-scale development environments. It powers retrieval-augmented generation by quickly fetching relevant context for tasks like code completion, editing, and explanation—ideal for use in coding assistants and agent-based tools. Developers can also perform semantic code searches using natural language or code queries to find relevant snippets. Its ability to detect similar or duplicated code helps with reuse, policy enforcement, and cleaning up redundancy. Additionally, it can cluster code by functionality or structure, making it useful for repository analysis, spotting architectural patterns, and enhancing documentation workflows. 
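    As a concrete illustration of the retrieval and duplicate-detection workflows above, the following numpy-only sketch shows how precomputed snippet embeddings could be searched and deduplicated. The similarity threshold and helper names are illustrative and are not part of Codestral Embed itself.

```python
# Illustrative sketch: semantic search and near-duplicate detection over
# precomputed code-snippet embeddings. Thresholds and names are made up.
import numpy as np

def normalize(vectors: np.ndarray) -> np.ndarray:
    """L2-normalize rows so dot products become cosine similarities."""
    return vectors / np.linalg.norm(vectors, axis=-1, keepdims=True)

def semantic_search(query_vec, snippet_vecs, snippets, top_k=5):
    """Return the snippets whose embeddings are closest to the query embedding."""
    scores = normalize(snippet_vecs) @ normalize(query_vec)
    order = np.argsort(scores)[::-1][:top_k]
    return [(snippets[i], float(scores[i])) for i in order]

def near_duplicates(snippet_vecs, threshold=0.95):
    """Flag snippet pairs whose cosine similarity exceeds an arbitrary threshold."""
    sims = normalize(snippet_vecs) @ normalize(snippet_vecs).T
    rows, cols = np.where(np.triu(sims, k=1) > threshold)
    return list(zip(rows.tolist(), cols.tolist()))
```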

    Codestral Embed is a specialized embedding model designed to enhance code retrieval and semantic analysis tasks. It surpasses existing models, such as OpenAI’s and Cohere’s, in benchmarks like SWE-Bench Lite and CodeSearchNet. The model offers customizable embedding dimensions and precision levels, allowing users to effectively balance performance and storage needs. Key applications include retrieval-augmented generation, semantic code search, duplicate detection, and code clustering. Available via API at $0.15 per million tokens, with a 50% discount for batch processing, Codestral Embed supports various output formats and dimensions, catering to diverse development workflows.

    In conclusion, Codestral Embed offers customizable embedding dimensions and precisions, enabling developers to strike a balance between performance and storage efficiency. Benchmark evaluations indicate that Codestral Embed surpasses existing models like OpenAI’s and Cohere’s in various code-related tasks, including retrieval-augmented generation and semantic code search. Its applications span from identifying duplicate code segments to facilitating semantic clustering for code analytics. Available through Mistral’s API, Codestral Embed provides a flexible and efficient solution for developers seeking advanced code understanding capabilities. 

  • Mistral AI launches code embedding model, claims edge over OpenAI and Cohere

    French startup Mistral AI on Wednesday unveiled Codestral Embed, its first code-specific embedding model, claiming it outperforms rival offerings from OpenAI, Cohere, and Voyage.

    The company said the model supports configurable embedding outputs with varying dimensions and precision levels, allowing users to manage trade-offs between retrieval performance and storage requirements.

    “Codestral Embed with dimension 256 and int8 precision still performs better than any model from our competitors,” Mistral AI said in a statement.
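    A rough back-of-the-envelope calculation shows why the compact configuration matters at scale; the full-precision baseline below (1536 dimensions at float32) is an illustrative assumption, not a published spec.

```python
# Storage footprint for one million embedded code chunks (illustrative figures).
N_CHUNKS = 1_000_000

compact = N_CHUNKS * 256 * 1     # 256 dims at int8 -> 1 byte per dimension
baseline = N_CHUNKS * 1536 * 4   # assumed 1536-dim float32 baseline -> 4 bytes each

print(f"256-dim int8:     {compact / 2**30:.2f} GiB")   # ~0.24 GiB
print(f"1536-dim float32: {baseline / 2**30:.2f} GiB")  # ~5.72 GiB
```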

    Codestral Embed is designed for use cases such as code completion, editing, or explanation tasks. It can also be applied in semantic search, duplicate detection, and repository-level analytics across large-scale codebases, the company said. “Codestral Embed supports unsupervised grouping of code based on functionality or structure,” Mistral AI added. “This is useful for analyzing repository composition, identifying emergent architecture patterns, or feeding into automated documentation and categorization systems.”
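    A minimal sketch of that unsupervised grouping, assuming the code chunks have already been embedded; scikit-learn’s k-means is used purely for illustration and the cluster count is arbitrary.

```python
# Illustrative sketch: cluster embedded code chunks by functionality.
import numpy as np
from sklearn.cluster import KMeans

def cluster_snippets(snippet_vecs: np.ndarray, snippets: list[str], n_clusters: int = 8):
    """Group code snippets whose embedding vectors sit close together."""
    labels = KMeans(n_clusters=n_clusters, n_init="auto", random_state=0).fit_predict(snippet_vecs)
    groups: dict[int, list[str]] = {}
    for label, snippet in zip(labels, snippets):
        groups.setdefault(int(label), []).append(snippet)
    return groups  # e.g. one cluster of parsers, another of HTTP handlers, ...
```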

    The model is available through Mistral’s API under the name codestral-embed-2505, priced at $0.15 per million tokens. A batch API version is offered at a 50 percent discount, and on-premise deployments are available through direct consultation with the company’s applied AI team.

    The launch follows Mistral’s recent introduction of the Agents API, which the company said complements its Chat Completion API and is intended to simplify the development of agent-based applications.

    Enterprise interest in embeddings

    Advanced code embedding models are gaining traction as key tools in enterprise software development, offering improvements in productivity, code quality, and risk management across the software lifecycle.

    “Models like Mistral’s Codestral Embed enable precise semantic code search and similarity detection, allowing enterprises to quickly identify reusable code and near-duplicates across large repositories,” said Prabhu Ram, VP of the industry research group at Cybermedia Research. “By facilitating rapid retrieval of relevant code snippets for bug fixes, feature enhancements, or onboarding, these embeddings significantly improve maintenance workflows.”

    However, despite promising early benchmarks, the long-term value of such models will depend on how well they perform in production environments.

    Factors such as ease of integration, scalability across enterprise systems, and consistency under real-world coding conditions will play a crucial role in determining their adoption.

    “Codestral Embed’s strong technical foundation and flexible deployment options make it a compelling solution for AI-driven software development, though its real-world impact will require validation beyond initial benchmark results,” Ram added.

    Further reading

    Vector Institute aims to clear up confusion about AI model performance

    When LLMs become influencers

    Researchers reveal flaws in AI agent benchmarking

    What misleading Meta Llama 4 benchmark scores show enterprise leaders about evaluating AI performance claims

    How CIOs navigate generative AI in the enterprise
  • What is Mistral AI? Everything to know about the OpenAI competitor

    Mistral AI, the French company behind AI assistant Le Chat and several foundational models, is officially regarded as one of France’s most promising tech startups and is arguably the only European company that could compete with OpenAI. But compared to its $6 billion valuation, its global market share is still relatively low.
    However, the recent launch of its chat assistant on mobile app stores was met with some hype, particularly in its home country. “Go and download Le Chat, which is made by Mistral, rather than ChatGPT by OpenAI — or something else,” French president Emmanuel Macron said in a TV interview ahead of the AI Action Summit in Paris.
    While this wave of attention may be encouraging, Mistral AI still faces challenges in competing with the likes of OpenAI — and in doing so while keeping up with its self-definition as “the world’s greenest and leading independent AI lab.”
    What is Mistral AI?
    Mistral AI has raised significant amounts of funding since its creation in 2023 with the ambition to “put frontier AI in the hands of everyone.” While this isn’t a direct jab at OpenAI, the slogan is meant to highlight the company’s advocacy for openness in AI.
    Its alternative to ChatGPT, chat assistant Le Chat, is now also available on iOS and Android. It reached 1 million downloads in the two weeks following its mobile release, even grabbing France’s top spot for free downloads on the iOS App Store.
    This comes in addition to Mistral AI’s suite of models, which includes: 

    Mistral Large 2, the primary large language model replacing Mistral Large.
    Pixtral Large, unveiled in 2024 as a new addition to the Pixtral family of multimodal models.
    Mistral Medium 3, released in May 2025 with the promise of providing efficiency without compromising performance, and best for coding and STEM tasks.
    Devstral, an AI model designed for coding and openly available under an Apache 2.0 license, meaning it can be used commercially without restriction.
    Codestral, an earlier generative AI model for code, but whose license banned commercial applications.
    “Les Ministraux,” a family of models optimized for edge devices such as phones.
    Mistral Saba, focused on Arabic language.

    In March 2025, the company introduced Mistral OCR, an optical character recognition (OCR) API that can turn any PDF into a text file to make it easier for AI models to ingest.

    Who are Mistral AI’s founders?
    Mistral AI’s three founders share a background in AI research at major U.S. tech companies with significant operations in Paris. CEO Arthur Mensch used to work at Google’s DeepMind, while CTO Timothée Lacroix and chief scientist officer Guillaume Lample are former Meta staffers.
    Co-founding advisers also include Jean-Charles Samuelian-Werve (also a board member) and Charles Gorintin from health insurance startup Alan, as well as former digital minister Cédric O, whose involvement caused some controversy due to his previous role.
    Are Mistral AI’s models open source?
    Not all of them. Mistral AI differentiates its premier models, whose weights are not available for commercial purposes, from its free models, for which it provides weight access under the Apache 2.0 license.
    Free models include research models such as Mistral NeMo, which was built in collaboration with Nvidia and which the startup open-sourced in July 2024.
    How does Mistral AI make money?
    While many of Mistral AI’s offerings are free or now have free tiers, Mistral AI plans to drive some revenue from Le Chat’s paid tiers. Introduced in February 2025, Le Chat’s Pro plan is priced at $14.99 a month.
    On the purely B2B side, Mistral AI monetizes its premier models through APIs with usage-based pricing. Enterprises can also license these models, and the company likely also generates a significant share of its revenue from its strategic partnerships, some of which it highlighted during the Paris AI Summit.
    Overall, however, Mistral AI’s revenue is reportedly still in the eight-digit range, according to multiple sources.
    What partnerships has Mistral AI closed?
    In 2024, Mistral AI entered a deal with Microsoft that included a strategic partnership for distributing its AI models through Microsoft’s Azure platform and a €15 million investment. The U.K.’s Competition and Markets Authority (CMA) swiftly concluded that the deal didn’t qualify for investigation due to its small size. However, it also sparked some criticism in the EU.
    In January 2025, Mistral AI signed a deal with press agency Agence France-Presse (AFP) to let Le Chat query the AFP’s entire text archive dating back to 1983.
    Mistral AI also secured strategic partnerships with France’s army and job agency, shipping giant CMA CGM, German defense tech startup Helsing, IBM, Orange, and Stellantis.
    In May 2025, Mistral AI announced it would participate in the creation of an AI Campus in the Paris region, as part of a joint venture with UAE investment firm MGX, NVIDIA, and France’s state-owned investment bank Bpifrance.
    How much funding has Mistral AI raised to date?
    As of February 2025, Mistral AI has raised around €1 billion in capital to date, approximately $1.04 billion at the current exchange rate. This includes some debt financing, as well as several equity financing rounds raised in close succession.
    In June 2023, and before it even released its first models, Mistral AI raised a record $112 million seed round led by Lightspeed Venture Partners. Sources at the time said the seed round — Europe’s largest ever — valued the then-one-month-old startup at $260 million.
    Other investors in this seed round included Bpifrance, Eric Schmidt, Exor Ventures, First Minute Capital, Headline, JCDecaux Holding, La Famiglia, LocalGlobe, Motier Ventures, Rodolphe Saadé, Sofina, and Xavier Niel.
    Only six months later, it closed a Series A of €385 million ($415 million at the time), at a reported valuation of $2 billion. The round was led by Andreessen Horowitz (a16z), with participation from existing backer Lightspeed, as well as BNP Paribas, CMA-CGM, Conviction, Elad Gil, General Catalyst, and Salesforce.
    The $16.3 million convertible investment that Microsoft made in Mistral AI as part of their partnership announced in February 2024 was presented as a Series A extension, implying an unchanged valuation.
    In June 2024, Mistral AI then raised €600 million in a mix of equity and debt (around $640 million at the exchange rate at the time). The long-rumored round was led by General Catalyst at a $6 billion valuation, with notable investors including Cisco, IBM, Nvidia, Samsung Venture Investment Corporation, and others.
    What could a Mistral AI exit look like?
    Mistral is “not for sale,” Mensch said in January 2025 at the World Economic Forum in Davos. “Of course, [an IPO is] the plan.”
    This makes sense, given how much the startup has raised so far: Even a large sale may not provide high enough multiples for its investors, not to mention sovereignty concerns depending on the acquirer. 
    However, the only way to definitively squash persistent acquisition rumors is to scale its revenue to levels that could even remotely justify its nearly $6 billion valuation. Either way, stay tuned.
    This story was originally published on February 28, 2025 and will be regularly updated.
  • Mistral’s new Devstral AI model was designed for coding

    AI startup Mistral on Wednesday announced a new AI model focused on coding: Devstral.
    Devstral, which Mistral says was developed in partnership with AI company All Hands AI, is openly available under an Apache 2.0 license, meaning it can be used commercially without restriction. Mistral claims that Devstral outperforms other open models like Google’s Gemma 3 27B and Chinese AI lab DeepSeek’s V3 on SWE-Bench Verified, a benchmark measuring coding skills.
    “Devstral excels at using tools to explore codebases, editing multiple files and power[ing] software engineering agents,” writes Mistral in a blog post provided to TechCrunch. “[I]t runs over code agent scaffolds such as OpenHands or SWE-Agent, which define the interface between the model and the test cases […] Devstral is light enough to run on a single [Nvidia] RTX 4090 or a Mac with 32GB RAM, making it an ideal choice for local deployment and on-device use.”
    [Figure] Results from Mistral’s internal benchmarking evaluations of Devstral. Image Credits: Mistral
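    To make the local-deployment claim concrete, here is a hypothetical sketch of calling a locally served Devstral from a script. It assumes the open weights have already been served behind an OpenAI-compatible endpoint (for example with vLLM); the Hugging Face repo id, port, and prompt shown are illustrative assumptions, not details from the announcement.

```python
# Hypothetical sketch: query a locally served Devstral via an OpenAI-compatible API.
# Assumes the weights have been served on localhost:8000 (e.g. with vLLM); the
# repo id "mistralai/Devstral-Small-2505" and the prompt are illustrative.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

reply = client.chat.completions.create(
    model="mistralai/Devstral-Small-2505",  # assumed local model identifier
    messages=[
        {"role": "system", "content": "You are a software engineering agent."},
        {"role": "user", "content": "Locate and fix the off-by-one error in utils/pagination.py."},
    ],
    temperature=0.0,
)
print(reply.choices[0].message.content)
```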
    Devstral arrives as AI coding assistants — and the models powering them — grow increasingly popular. Just last month, JetBrains, the company behind a range of popular app development tools, released its first “open” AI model for coding. In recent months, AI outfits including Google, Windsurf, and OpenAI have also unveiled models, both openly available and proprietary, optimized for programming tasks.
    AI models still struggle to code quality software — code-generating AI tends to introduce security vulnerabilities and errors, owing to weaknesses in areas like the ability to understand programming logic. Yet their promise to boost coding productivity is pushing companies — and developers — to rapidly adopt them. One recent poll found that 76% of devs used or were planning to use AI tools in their development processes last year.
    Mistral previously waded into the assistive programming space with Codestral, a generative model for code. But Codestral wasn’t released under a license that permitted devs to use the model for commercial applications; its license explicitly banned “any internal usage by employees in the context of [a] company’s business activities.”
    Devstral, which Mistral is calling a “research preview,” can be downloaded from AI development platforms including Hugging Face and also tapped through Mistral’s API. It’s priced at $0.1 per million input tokens and $0.3 per million output tokens, tokens being the raw bits of data that AI models work with.

    Mistral says it’s “hard at work building a larger agentic coding model that will be available in the coming weeks.” Devstral isn’t a small model per se, but it’s on the smaller side at 24 billion parameters.
    Mistral, founded in 2023, is a frontier model lab aiming to build a range of AI-powered services, including a chatbot platform, Le Chat, and mobile apps. It’s backed by VCs including General Catalyst, and has raised over €1.1 billion (roughly $1.24 billion) to date. Mistral’s customers include BNP Paribas, AXA, and Mirakl.
    Devstral is Mistral’s third product launch this month. A few weeks ago, Mistral launched Mistral Medium 3, an efficient general-purpose model. Around the same time, the company rolled out Le Chat Enterprise, a corporate-focused chatbot service that offers tools like an AI “agent” builder and integrates Mistral’s models with third-party services like Gmail, Google Drive, and SharePoint.