Venture Beat
Obsessed with covering transformative technology.
Recent updates
  • Do reasoning AI models really ‘think’ or not? Apple research sparks lively debate, response

    Ultimately, the big takeaway for ML researchers is that before proclaiming an AI milestone—or obituary—make sure the test itself isn’t flawed.
  • Sam Altman calls for ‘AI privilege’ as OpenAI clarifies court order to retain temporary and deleted ChatGPT sessions

    Should talking to an AI chatbot be protected and privileged information, like talking to a doctor or lawyer? A new court order raises the idea.
  • OpenAI hits 3M business users and launches workplace tools to take on Microsoft

    OpenAI reaches 3 million paying business users with 50% growth since February, launching new workplace AI tools including connectors and coding agents to compete with Microsoft.
  • Stop guessing why your LLMs break: Anthropic’s new tool shows you exactly what goes wrong

    Anthropic's open-source circuit tracing tool can help developers debug, optimize, and control AI for reliable and trustworthy applications.
  • EnCharge AI unveils EN100 AI accelerator chip with analog memory

    EnCharge AI, a startup that raised $144 million to date, announced the EnCharge EN100, an AI accelerator built on analog in-memory computing.
  • Which LLM should you use? Token Monster automatically combines multiple models and tools for you

    This architecture lets Token Monster tap into a range of models from different providers without having to build separate integrations for each one.
  • Emotive voice AI startup Hume launches new EVI 3 model with rapid custom voice creation

    While EVI 3’s specific API pricing has not been announced yet, the pattern suggests it will be usage-based.
  • Ayzenberg Group aims to accelerate the growth of game companies with better marketing

    Ayzenberg Group is launching a consulting business to accelerate the growth of game startups via marketing.
  • FLUX.1 Kontext enables in-context image generation for enterprise AI pipelines

    FLUX.1 Kontext from Black Forest Labs aims to let users edit images multiple times through both text and reference images without losing speed.
  • DeepSeek R1-0528 arrives in powerful open source challenge to OpenAI o3 and Google Gemini 2.5 Pro

    Additionally, the model’s hallucination rate has been reduced, contributing to more reliable and consistent output.
  • Minecraft sales grew 35% on both mobile and console after release of film | Sensor Tower

    A Minecraft Movie has generated $941 million in revenue at the worldwide box office, and it has also helped sales of the Minecraft game, Sensor Tower said.
  • What Salesforce’s $8B acquisition of Informatica means for enterprise data and AI

    Salesforce is making a big bid to become a much larger player in the enterprise space, announcing today an $8B acquisition of Informatica. The move will bring together two large, established enterprise software providers with decades of real-world experience.
    Informatica was founded in 1993 as an enterprise data-focused vendor and an early pioneer in the ETL (Extract, Transform, Load) market. As technology cycles have changed over the last 25 years, so too has Informatica, moving to cloud and SaaS and more recently embracing generative AI. Just last week, at the company’s Informatica World conference, it announced a series of new agentic AI offerings designed to help improve enterprise data management and operations.
    By acquiring Informatica, Salesforce aims to enhance its trusted data foundation for deploying agentic AI. The combination will create a unified architecture enabling AI agents to operate safely, responsibly and at scale across enterprises by integrating:

    Informatica’s rich data catalog, integration, governance, quality, privacy and Master Data Management (MDM) capabilities.
    Salesforce’s platform, including Data Cloud, Agentforce, Tableau, MuleSoft and Customer 360.

    “I’m excited to begin this new journey with Salesforce where the combination of Informatica’s rich data catalog, data integration, governance, quality and privacy, metadata management and Master Data Management (MDM) services with the Salesforce platform upon close of the transaction will establish a unified architecture for agentic AI – enabling AI agents to operate safely, responsibly and at scale, across the modern enterprise,” Amit Walia, CEO of Informatica, wrote in a LinkedIn post.
    What another big deal means for Salesforce and its enterprise customers
    Salesforce has been no stranger to large acquisitions. 
    In 2021, Salesforce acquired Slack Technologies for a staggering $28 billion. In 2019, Salesforce acquired data analytics platform Tableau for $16 billion. A year earlier, in 2018, Salesforce acquired MuleSoft, bolstering its enterprise software integration capabilities. All of those acquisitions have worked out well for Salesforce, with Tableau, Slack and MuleSoft growing and expanding.
    According to Forrester Analyst Noel Yuhanna, Salesforce’s acquisition of Informatica fills a gap in its data management capabilities.
    “The acquisition markedly elevates Salesforce’s position across all critical dimensions of modern data management, including data integration, ingestion, pipelines, master data management (MDM), metadata management, transformation, preparation, quality and governance in the cloud,” Yuhanna told VentureBeat. “These capabilities are no longer optional—they are foundational for building an AI-ready enterprise, especially as the industry accelerates toward agentic AI.”
    To fully realize AI’s promise, Yuhanna said that vendor solutions must tightly integrate data and AI as two sides of the same coin. In his view, this acquisition strengthens Salesforce’s ability to do just that, laying the groundwork for next-generation data that can power intelligent, autonomous and personalized experiences at scale to support AI use cases.
    “Crucially, this positions Salesforce to deliver a unified customer data fabric, enabling a truly end-to-end platform for data, AI and analytics, tailored to customer-centric use cases,” Yuhanna said. “Real-time data integration across diverse sources is becoming critical for advanced customer engagement, and this move brings Salesforce much closer to that vision.”
    While data has long been the foundation of Informatica’s technology, its intersection with agentic AI makes it attractive to Salesforce.
    Hyoun Park, CEO and Chief Analyst at Amalgam Insights, told VentureBeat that the Informatica acquisition has been rumored for over a year, with a credible expectation last year of an $11 billion bid.
    “From a practical perspective, the rush towards agentic AI requires any credible player in the space to manage data, workflows, integration and models as well as the agents and to be a strong IT partner,” Park said. “The Informatica acquisition goes hand-in-hand with Salesforce’s efforts towards improving IT management capabilities and going head to head against ServiceNow and IT specialists such as Boomi in the agent space.”
    Park noted that there is some overlap with the MuleSoft capabilities Salesforce already has in its portfolio. That said, he emphasized that Informatica’s capabilities in data management, including master data management, data catalog and data security, are both more updated and more comprehensive.
    The data components that make enterprise agentic AI real
    Data isn’t just about storing bits of content. When it comes to agentic AI, it’s a whole lot more complex.
    “A successful agent strategy depends on the integration of three domains: models, applications and data,” Kevin Petrie, vice president of research at BARC, told VentureBeat. “Salesforce gains significant strength in the data realm, especially metadata and cataloging, through this acquisition.”
    Petrie noted that Salesforce is already invested in the application realm through its CRM (Customer Relationship Management) and MuleSoft offerings. Those capabilities are already being integrated into Salesforce’s agentic AI workflows, focusing on customer-related data.
    “However, Informatica provides extensive value in the data management realm outside agentic workflows and customer related data,” Petrie said. “To realize the full value of this acquisition, I believe Salesforce will need to give the Informatica unit sufficient autonomy to continue to provide and extend its broad data management capabilities to its existing customers.”
    What it all means for enterprise users
    So, what does the acquisition mean for both Salesforce and Informatica enterprise customers?
    Forrester’s Yuhanna sees the acquisition as a major advancement for Salesforce customers. He noted that Salesforce customers will be able to seamlessly access and leverage all types of customer data, whether housed within Salesforce or external systems, all in real time. It represents a unified customer data fabric that can deliver actionable insights across every channel and touchpoint. 
    “Critically, it accelerates Salesforce’s ability to deploy agentic AI, enabling low-code, low-maintenance AI solutions that reduce complexity and dramatically shorten time to value,” Yuhanna said. “With a fully integrated data management foundation, Salesforce customers can expect faster, more innovative, and more personalized customer experiences at scale.”
    The opportunity is equally appealing for Informatica customers. In Yuhanna’s view, this acquisition unlocks a faster path to agentic AI workloads, backed by the reach and power of the Salesforce ecosystem. As data management evolves, intelligent agents will automate core functions, turning traditionally time-consuming processes like data ingestion, integration, and pipeline orchestration into self-operating data workflows. Tasks that once took days or weeks will be executed with zero to little human intervention. 
    “With a unified data, AI, and analytics platform, Informatica customers will benefit from accelerated innovation, greater operational agility, and significantly enhanced returns on their data investments,” he said.

  • Beyond single-model AI: How architectural design drives reliable multi-agent orchestration

    We’re seeing AI evolve fast. It’s no longer just about building a single, super-smart model. The real power, and the exciting frontier, lies in getting multiple specialized AI agents to work together. Think of them as a team of expert colleagues, each with their own skills — one analyzes data, another interacts with customers, a third manages logistics, and so on. Getting this team to collaborate seamlessly, as envisioned by various industry discussions and enabled by modern platforms, is where the magic happens.
    But let’s be real: Coordinating a bunch of independent, sometimes quirky, AI agents is hard. It’s not just building cool individual agents; it’s the messy middle bit — the orchestration — that can make or break the system. When you have agents that are relying on each other, acting asynchronously and potentially failing independently, you’re not just building software; you’re conducting a complex orchestra. This is where solid architectural blueprints come in. We need patterns designed for reliability and scale right from the start.
    The knotty problem of agent collaboration
    Why is orchestrating multi-agent systems such a challenge? Well, for starters:

    They’re independent: Unlike functions being called in a program, agents often have their own internal loops, goals and states. They don’t just wait patiently for instructions.
    Communication gets complicated: It’s not just Agent A talking to Agent B. Agent A might broadcast info that Agents C and D care about, while Agent B is waiting for a signal from E before telling F something.
    They need to have a shared brain (state): How do they all agree on the “truth” of what’s happening? If Agent A updates a record, how does Agent B know about it reliably and quickly? Stale or conflicting information is a killer.
    Failure is inevitable: An agent crashes. A message gets lost. An external service call times out. When one part of the system falls over, you don’t want the whole thing grinding to a halt or, worse, doing the wrong thing.
    Consistency can be difficult: How do you ensure that a complex, multi-step process involving several agents actually reaches a valid final state? This isn’t easy when operations are distributed and asynchronous.

    Simply put, the combinatorial complexity explodes as you add more agents and interactions. Without a solid plan, debugging becomes a nightmare, and the system feels fragile.
    Picking your orchestration playbook
    How you decide agents coordinate their work is perhaps the most fundamental architectural choice. Here are a few frameworks:

    The conductor (hierarchical): This is like a traditional symphony orchestra. You have a main orchestrator (the conductor) that dictates the flow, tells specific agents (musicians) when to perform their piece, and brings it all together.

    This allows for: Clear workflows, execution that is easy to trace, straightforward control; it is simpler for smaller or less dynamic systems.
    Watch out for: The conductor can become a bottleneck or a single point of failure. This scenario is less flexible if you need agents to react dynamically or work without constant oversight.

    The jazz ensemble (federated/decentralized): Here, agents coordinate more directly with each other based on shared signals or rules, much like musicians in a jazz band improvising based on cues from each other and a common theme. There might be shared resources or event streams, but no central boss micro-managing every note.

    This allows for: Resilience (if one musician stops, the others can often continue), scalability, adaptability to changing conditions, more emergent behaviors.
    What to consider: It can be harder to understand the overall flow, debugging is tricky (“Why did that agent do that then?”) and ensuring global consistency requires careful design.

    Many real-world multi-agent systems (MAS) end up being a hybrid — perhaps a high-level orchestrator sets the stage; then groups of agents within that structure coordinate decentrally.
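    To make the conductor concrete, here is a minimal, illustrative Python sketch of the hierarchical pattern. It is not from any particular framework: the Agent protocol, the agent classes and the order_id field are hypothetical stand-ins for whatever your platform provides.

    ```python
    from typing import Protocol


    class Agent(Protocol):
        """Anything the conductor can direct: takes a task dict, returns an updated one."""
        def run(self, task: dict) -> dict: ...


    class DataAgent:
        def run(self, task: dict) -> dict:
            # Pretend analysis step: enrich the task with results.
            return {**task, "analysis": f"stats for {task['order_id']}"}


    class CustomerAgent:
        def run(self, task: dict) -> dict:
            # Pretend customer-facing step.
            return {**task, "reply_drafted": True}


    class Conductor:
        """Hierarchical orchestrator: owns the workflow and calls each agent in turn."""
        def __init__(self, steps: list[tuple[str, Agent]]):
            self.steps = steps

        def execute(self, task: dict) -> dict:
            state = dict(task)
            for name, agent in self.steps:
                state = agent.run(state)                 # centralized control...
                print(f"conductor: step {name!r} done")  # ...and a single trace point
            return state


    conductor = Conductor([("analyze", DataAgent()), ("respond", CustomerAgent())])
    print(conductor.execute({"order_id": "A-42"}))
    ```

    Every step funnels through Conductor.execute, which gives you the easy tracing described above; it is also, visibly, the bottleneck and single point of failure that the pattern trades for that control.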
    For agents to collaborate effectively, they often need a shared view of the world, or at least the parts relevant to their task. This could be the current status of a customer order, a shared knowledge base of product information or the collective progress towards a goal. Keeping this “collective brain” consistent and accessible across distributed agents is tough.
    Architectural patterns we lean on:

    The central library (centralized knowledge base): A single, authoritative place (like a database or a dedicated knowledge service) where all shared information lives. Agents check books out (read) and return them (write).

    Pro: Single source of truth, easier to enforce consistency.
    Con: Can get hammered with requests, potentially slowing things down or becoming a choke point. Must be seriously robust and scalable.

    Distributed notes (distributed cache): Agents keep local copies of frequently needed info for speed, backed by the central library.

    Pro: Faster reads.
    Con: How do you know if your copy is up-to-date? Cache invalidation and consistency become significant architectural puzzles.

    Shouting updates (message passing): Instead of agents constantly asking the library, the library (or other agents) shouts out “Hey, this piece of info changed!” via messages. Agents listen for updates they care about and update their own notes. (A toy version follows this list.)

    Pro: Agents are decoupled, which is good for event-driven patterns.
    Con: Ensuring everyone gets the message and handles it correctly adds complexity. What if a message is lost?

    The right choice depends on how critical up-to-the-second consistency is, versus how much performance you need.
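    As a rough sketch of combining “distributed notes” with “shouting updates,” the code below uses an in-process stand-in for what would really be a database plus a message broker; Library, CachingAgent and the order:42:status key are invented for the example.

    ```python
    from collections import defaultdict
    from typing import Callable


    class Library:
        """Central store ("the library") that shouts out changes to subscribers."""
        def __init__(self) -> None:
            self._data: dict[str, str] = {}
            self._subscribers: dict[str, list[Callable[[str, str], None]]] = defaultdict(list)

        def subscribe(self, key: str, callback: Callable[[str, str], None]) -> None:
            self._subscribers[key].append(callback)

        def write(self, key: str, value: str) -> None:
            self._data[key] = value
            for cb in self._subscribers[key]:  # "Hey, this piece of info changed!"
                cb(key, value)


    class CachingAgent:
        """Keeps distributed notes (a local cache), refreshed by update messages."""
        def __init__(self, library: Library, keys: list[str]) -> None:
            self._cache: dict[str, str] = {}
            for key in keys:
                library.subscribe(key, self._on_update)

        def _on_update(self, key: str, value: str) -> None:
            self._cache[key] = value  # stays fresh without polling the library

        def lookup(self, key: str) -> str | None:
            return self._cache.get(key)


    lib = Library()
    agent = CachingAgent(lib, ["order:42:status"])
    lib.write("order:42:status", "shipped")
    print(agent.lookup("order:42:status"))  # -> shipped
    ```

    The synchronous callbacks hide the hard part: with a real broker the update messages arrive asynchronously and can be lost, which is exactly the “what if a message is lost?” concern above.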
    Building for when stuff goes wrong (error handling and recovery)
    It’s not if an agent fails, it’s when. Your architecture needs to anticipate this.
    Think about:

    Watchdogs (supervision): This means having components whose job it is to simply watch other agents. If an agent goes quiet or starts acting weird, the watchdog can try restarting it or alerting the system.
    Try again, but be smart (retries and idempotency): If an agent’s action fails, it should often just try again. But, this only works if the action is idempotent. That means doing it five times has the exact same result as doing it once (like setting a value, not incrementing it). If actions aren’t idempotent, retries can cause chaos. (See the retry sketch just after this list.)
    Cleaning up messes (compensation): If Agent A did something successfully, but Agent B (a later step in the process) failed, you might need to “undo” Agent A’s work. Patterns like Sagas help coordinate these multi-step, compensable workflows.
    Knowing where you were (workflow state): Keeping a persistent log of the overall process helps. If the system goes down mid-workflow, it can pick up from the last known good step rather than starting over.
    Building firewalls (circuit breakers and bulkheads): These patterns prevent a failure in one agent or service from overloading or crashing others, containing the damage.
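    Here is a rough sketch of the retry-plus-idempotency idea; flaky_set_status and the in-memory db dict are invented for illustration.

    ```python
    import time


    def retry(action, attempts: int = 3, delay: float = 0.1):
        """Re-run a failed action -- safe only because the action is idempotent."""
        for attempt in range(1, attempts + 1):
            try:
                return action()
            except Exception:
                if attempt == attempts:
                    raise  # out of retries; let a supervisor or saga take over
                time.sleep(delay * 2 ** (attempt - 1))  # exponential backoff


    db = {"order:42": "pending"}
    calls = {"n": 0}


    def flaky_set_status():
        calls["n"] += 1
        if calls["n"] < 3:
            raise TimeoutError("simulated network blip")
        # Idempotent: it sets a value rather than incrementing one, so running
        # it once or five times leaves the record in the same state.
        db["order:42"] = "shipped"
        return db["order:42"]


    print(retry(flaky_set_status))  # -> shipped, after two simulated failures
    ```

    Because the action sets a value rather than incrementing one, the two failed attempts plus the successful one leave the record exactly as a single success would.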

    Making sure the job gets done right (consistent task execution)
    Even with individual agent reliability, you need confidence that the entire collaborative task finishes correctly.
    Consider:

    Atomic-ish operations: While true ACID transactions are hard with distributed agents, you can design workflows to behave as close to atomically as possible using patterns like Sagas.
    The unchanging logbook (event sourcing): Record every significant action and state change as an event in an immutable log. This gives you a perfect history, makes state reconstruction easy, and is great for auditing and debugging. (A minimal log sketch follows this list.)
    Agreeing on reality (consensus): For critical decisions, you might need agents to agree before proceeding. This can involve simple voting mechanisms or more complex distributed consensus algorithms if trust or coordination is particularly challenging.
    Checking the work (validation): Build steps into your workflow to validate the output or state after an agent completes its task. If something looks wrong, trigger a reconciliation or correction process.
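    A minimal sketch of the unchanging logbook, with an in-memory list standing in for a real event store; the Event fields and the status_changed event kind are hypothetical.

    ```python
    from dataclasses import dataclass, field


    @dataclass(frozen=True)
    class Event:
        agent: str
        kind: str
        payload: dict


    @dataclass
    class EventLog:
        """The unchanging logbook: events are appended, never updated in place."""
        _events: list[Event] = field(default_factory=list)

        def append(self, event: Event) -> None:
            self._events.append(event)

        def replay(self) -> dict:
            """Reconstruct current state from history -- e.g. after a crash."""
            state: dict = {}
            for e in self._events:
                if e.kind == "status_changed":
                    state[e.payload["order"]] = e.payload["status"]
            return state


    log = EventLog()
    log.append(Event("intake", "status_changed", {"order": "A-42", "status": "received"}))
    log.append(Event("shipping", "status_changed", {"order": "A-42", "status": "shipped"}))
    print(log.replay())  # -> {'A-42': 'shipped'}, with the full history kept for audits
    ```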

    The best architecture needs the right foundation:

    The post office (message queues/brokers like Kafka or RabbitMQ): This is absolutely essential for decoupling agents. They send messages to the queue; agents interested in those messages pick them up. This enables asynchronous communication, handles traffic spikes and is key for resilient distributed systems.
    The shared filing cabinet (knowledge stores/databases): This is where your shared state lives. Choose the right type (relational, NoSQL, graph) based on your data structure and access patterns. This must be performant and highly available.
    The X-ray machine (observability platforms): Logs, metrics, tracing – you need these. Debugging distributed systems is notoriously hard. Being able to see exactly what every agent was doing, when and how they were interacting is non-negotiable. (A trace-id sketch follows this list.)
    The directory (agent registry): How do agents find each other or discover the services they need? A central registry helps manage this complexity.
    The playground (containerization and orchestration like Kubernetes): This is how you actually deploy, manage and scale all those individual agent instances reliably.
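    One low-tech but essential piece of that X-ray machine is propagating a correlation (trace) ID through every hop, so a single workflow can be followed across every agent that touched it. A sketch, with invented agent names:

    ```python
    import logging
    import uuid

    logging.basicConfig(format="%(message)s", level=logging.INFO)
    log = logging.getLogger("agents")


    def new_trace_id() -> str:
        return uuid.uuid4().hex[:8]


    def handle(agent: str, trace_id: str, msg: dict) -> dict:
        # Every log line carries the trace id, so one workflow can be
        # followed across every agent that touched it.
        log.info("trace=%s agent=%s received=%s", trace_id, agent, msg)
        return {**msg, f"{agent}_done": True}


    trace = new_trace_id()               # minted once, at the workflow's entry point
    msg = {"order": "A-42"}
    msg = handle("intake", trace, msg)   # each hop propagates the same id
    msg = handle("billing", trace, msg)
    ```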

    How do agents chat? (Communication protocol choices)
    The way agents talk impacts everything from performance to how tightly coupled they are.

    Your standard phone call (REST/HTTP): This is simple, works everywhere and good for basic request/response. But it can feel a bit chatty and can be less efficient for high volume or complex data structures.
    The structured conference call (gRPC): This uses efficient data formats, supports different call types including streaming and is type-safe. It is great for performance but requires defining service contracts.
    The bulletin board (message queues — protocols like AMQP, MQTT): Agents post messages to topics; other agents subscribe to topics they care about. This is asynchronous, highly scalable and completely decouples senders from receivers. (A toy bus appears after this section.)
    Direct line (RPC — less common): Agents call functions directly on other agents. This is fast, but creates very tight coupling — agents need to know exactly who they’re calling and where they are.

    Choose the protocol that fits the interaction pattern. Is it a direct request? A broadcast event? A stream of data?
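    To see why the bulletin board decouples so well, here is a toy in-process bus; a real deployment would use a broker speaking a protocol like AMQP or MQTT, and the topic name is invented.

    ```python
    from collections import defaultdict
    from typing import Callable


    class Bus:
        """In-process stand-in for a broker: topics decouple senders from receivers."""
        def __init__(self) -> None:
            self._topics: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

        def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
            self._topics[topic].append(handler)

        def publish(self, topic: str, message: dict) -> None:
            # The publisher never learns who (if anyone) is listening.
            for handler in self._topics[topic]:
                handler(message)


    bus = Bus()
    bus.subscribe("orders.shipped", lambda m: print("billing saw:", m))
    bus.subscribe("orders.shipped", lambda m: print("emailer saw:", m))
    bus.publish("orders.shipped", {"order": "A-42"})
    ```

    The publisher never references its consumers: adding the second subscriber required no change on the publishing side, which is precisely the decoupling the pattern buys.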
    Putting it all together
    Building reliable, scalable multi-agent systems isn’t about finding a magic bullet; it’s about making smart architectural choices based on your specific needs. Will you lean more hierarchical for control or federated for resilience? How will you manage that crucial shared state? What’s your plan for when an agent goes down? What infrastructure pieces are non-negotiable?
    It’s complex, yes, but by focusing on these architectural blueprints — orchestrating interactions, managing shared knowledge, planning for failure, ensuring consistency and building on a solid infrastructure foundation — you can tame the complexity and build the robust, intelligent systems that will drive the next wave of enterprise AI.
    Nikhil Gupta is the AI product management leader/staff product manager at Atlassian.

    Daily insights on business use cases with VB Daily
    If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.
    Read our Privacy Policy

    Thanks for subscribing. Check out more VB newsletters here.

    An error occured.
    #beyond #singlemodel #how #architectural #design
    Beyond single-model AI: How architectural design drives reliable multi-agent orchestration
    Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More We’re seeing AI evolve fast. It’s no longer just about building a single, super-smart model. The real power, and the exciting frontier, lies in getting multiple specialized AI agents to work together. Think of them as a team of expert colleagues, each with their own skills — one analyzes data, another interacts with customers, a third manages logistics, and so on. Getting this team to collaborate seamlessly, as envisioned by various industry discussions and enabled by modern platforms, is where the magic happens. But let’s be real: Coordinating a bunch of independent, sometimes quirky, AI agents is hard. It’s not just building cool individual agents; it’s the messy middle bit — the orchestration — that can make or break the system. When you have agents that are relying on each other, acting asynchronously and potentially failing independently, you’re not just building software; you’re conducting a complex orchestra. This is where solid architectural blueprints come in. We need patterns designed for reliability and scale right from the start. The knotty problem of agent collaboration Why is orchestrating multi-agent systems such a challenge? Well, for starters: They’re independent: Unlike functions being called in a program, agents often have their own internal loops, goals and states. They don’t just wait patiently for instructions. Communication gets complicated: It’s not just Agent A talking to Agent B. Agent A might broadcast info Agent C and D care about, while Agent B is waiting for a signal from E before telling F something. They need to have a shared brain: How do they all agree on the “truth” of what’s happening? If Agent A updates a record, how does Agent B know about it reliably and quickly? Stale or conflicting information is a killer. Failure is inevitable: An agent crashes. A message gets lost. An external service call times out. When one part of the system falls over, you don’t want the whole thing grinding to a halt or, worse, doing the wrong thing. Consistency can be difficult: How do you ensure that a complex, multi-step process involving several agents actually reaches a valid final state? This isn’t easy when operations are distributed and asynchronous. Simply put, the combinatorial complexity explodes as you add more agents and interactions. Without a solid plan, debugging becomes a nightmare, and the system feels fragile. Picking your orchestration playbook How you decide agents coordinate their work is perhaps the most fundamental architectural choice. Here are a few frameworks: The conductor: This is like a traditional symphony orchestra. You have a main orchestratorthat dictates the flow, tells specific agentswhen to perform their piece, and brings it all together. This allows for: Clear workflows, execution that is easy to trace, straightforward control; it is simpler for smaller or less dynamic systems. Watch out for: The conductor can become a bottleneck or a single point of failure. This scenario is less flexible if you need agents to react dynamically or work without constant oversight. The jazz ensemble: Here, agents coordinate more directly with each other based on shared signals or rules, much like musicians in a jazz band improvising based on cues from each other and a common theme. There might be shared resources or event streams, but no central boss micro-managing every note. 
This allows for: Resilience, scalability, adaptability to changing conditions, more emergent behaviors. What to consider: It can be harder to understand the overall flow, debugging is trickyand ensuring global consistency requires careful design. Many real-world multi-agent systemsend up being a hybrid — perhaps a high-level orchestrator sets the stage; then groups of agents within that structure coordinate decentrally. For agents to collaborate effectively, they often need a shared view of the world, or at least the parts relevant to their task. This could be the current status of a customer order, a shared knowledge base of product information or the collective progress towards a goal. Keeping this “collective brain” consistent and accessible across distributed agents is tough. Architectural patterns we lean on: The central library: A single, authoritative placewhere all shared information lives. Agents check books outand return them. Pro: Single source of truth, easier to enforce consistency. Con: Can get hammered with requests, potentially slowing things down or becoming a choke point. Must be seriously robust and scalable. Distributed notes: Agents keep local copies of frequently needed info for speed, backed by the central library. Pro: Faster reads. Con: How do you know if your copy is up-to-date? Cache invalidation and consistency become significant architectural puzzles. Shouting updates: Instead of agents constantly asking the library, the libraryshouts out “Hey, this piece of info changed!” via messages. Agents listen for updates they care about and update their own notes. Pro: Agents are decoupled, which is good for event-driven patterns. Con: Ensuring everyone gets the message and handles it correctly adds complexity. What if a message is lost? The right choice depends on how critical up-to-the-second consistency is, versus how much performance you need. Building for when stuff goes wrongIt’s not if an agent fails, it’s when. Your architecture needs to anticipate this. Think about: Watchdogs: This means having components whose job it is to simply watch other agents. If an agent goes quiet or starts acting weird, the watchdog can try restarting it or alerting the system. Try again, but be smart: If an agent’s action fails, it should often just try again. But, this only works if the action is idempotent. That means doing it five times has the exact same result as doing it once. If actions aren’t idempotent, retries can cause chaos. Cleaning up messes: If Agent A did something successfully, but Agent Bfailed, you might need to “undo” Agent A’s work. Patterns like Sagas help coordinate these multi-step, compensable workflows. Knowing where you were: Keeping a persistent log of the overall process helps. If the system goes down mid-workflow, it can pick up from the last known good step rather than starting over. Building firewalls: These patterns prevent a failure in one agent or service from overloading or crashing others, containing the damage. Making sure the job gets done rightEven with individual agent reliability, you need confidence that the entire collaborative task finishes correctly. Consider: Atomic-ish operations: While true ACID transactions are hard with distributed agents, you can design workflows to behave as close to atomically as possible using patterns like Sagas. The unchanging logbook: Record every significant action and state change as an event in an immutable log. 
This gives you a perfect history, makes state reconstruction easy, and is great for auditing and debugging. Agreeing on reality: For critical decisions, you might need agents to agree before proceeding. This can involve simple voting mechanisms or more complex distributed consensus algorithms if trust or coordination is particularly challenging. Checking the work: Build steps into your workflow to validate the output or state after an agent completes its task. If something looks wrong, trigger a reconciliation or correction process. The best architecture needs the right foundation. The post office: This is absolutely essential for decoupling agents. They send messages to the queue; agents interested in those messages pick them up. This enables asynchronous communication, handles traffic spikes and is key for resilient distributed systems. The shared filing cabinet: This is where your shared state lives. Choose the right typebased on your data structure and access patterns. This must be performant and highly available. The X-ray machine: Logs, metrics, tracing – you need these. Debugging distributed systems is notoriously hard. Being able to see exactly what every agent was doing, when and how they were interacting is non-negotiable. The directory: How do agents find each other or discover the services they need? A central registry helps manage this complexity. The playground: This is how you actually deploy, manage and scale all those individual agent instances reliably. How do agents chat?The way agents talk impacts everything from performance to how tightly coupled they are. Your standard phone call: This is simple, works everywhere and good for basic request/response. But it can feel a bit chatty and can be less efficient for high volume or complex data structures. The structured conference call: This uses efficient data formats, supports different call types including streaming and is type-safe. It is great for performance but requires defining service contracts. The bulletin board: Agents post messages to topics; other agents subscribe to topics they care about. This is asynchronous, highly scalable and completely decouples senders from receivers. Direct line: Agents call functions directly on other agents. This is fast, but creates very tight coupling — agent need to know exactly who they’re calling and where they are. Choose the protocol that fits the interaction pattern. Is it a direct request? A broadcast event? A stream of data? Putting it all together Building reliable, scalable multi-agent systems isn’t about finding a magic bullet; it’s about making smart architectural choices based on your specific needs. Will you lean more hierarchical for control or federated for resilience? How will you manage that crucial shared state? What’s your plan for whenan agent goes down? What infrastructure pieces are non-negotiable? It’s complex, yes, but by focusing on these architectural blueprints — orchestrating interactions, managing shared knowledge, planning for failure, ensuring consistency and building on a solid infrastructure foundation — you can tame the complexity and build the robust, intelligent systems that will drive the next wave of enterprise AI. Nikhil Gupta is the AI product management leader/staff product manager at Atlassian. Daily insights on business use cases with VB Daily If you want to impress your boss, VB Daily has you covered. 
We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI. Read our Privacy Policy Thanks for subscribing. Check out more VB newsletters here. An error occured. #beyond #singlemodel #how #architectural #design
    VENTUREBEAT.COM
    Beyond single-model AI: How architectural design drives reliable multi-agent orchestration
    Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More We’re seeing AI evolve fast. It’s no longer just about building a single, super-smart model. The real power, and the exciting frontier, lies in getting multiple specialized AI agents to work together. Think of them as a team of expert colleagues, each with their own skills — one analyzes data, another interacts with customers, a third manages logistics, and so on. Getting this team to collaborate seamlessly, as envisioned by various industry discussions and enabled by modern platforms, is where the magic happens. But let’s be real: Coordinating a bunch of independent, sometimes quirky, AI agents is hard. It’s not just building cool individual agents; it’s the messy middle bit — the orchestration — that can make or break the system. When you have agents that are relying on each other, acting asynchronously and potentially failing independently, you’re not just building software; you’re conducting a complex orchestra. This is where solid architectural blueprints come in. We need patterns designed for reliability and scale right from the start. The knotty problem of agent collaboration Why is orchestrating multi-agent systems such a challenge? Well, for starters: They’re independent: Unlike functions being called in a program, agents often have their own internal loops, goals and states. They don’t just wait patiently for instructions. Communication gets complicated: It’s not just Agent A talking to Agent B. Agent A might broadcast info Agent C and D care about, while Agent B is waiting for a signal from E before telling F something. They need to have a shared brain (state): How do they all agree on the “truth” of what’s happening? If Agent A updates a record, how does Agent B know about it reliably and quickly? Stale or conflicting information is a killer. Failure is inevitable: An agent crashes. A message gets lost. An external service call times out. When one part of the system falls over, you don’t want the whole thing grinding to a halt or, worse, doing the wrong thing. Consistency can be difficult: How do you ensure that a complex, multi-step process involving several agents actually reaches a valid final state? This isn’t easy when operations are distributed and asynchronous. Simply put, the combinatorial complexity explodes as you add more agents and interactions. Without a solid plan, debugging becomes a nightmare, and the system feels fragile. Picking your orchestration playbook How you decide agents coordinate their work is perhaps the most fundamental architectural choice. Here are a few frameworks: The conductor (hierarchical): This is like a traditional symphony orchestra. You have a main orchestrator (the conductor) that dictates the flow, tells specific agents (musicians) when to perform their piece, and brings it all together. This allows for: Clear workflows, execution that is easy to trace, straightforward control; it is simpler for smaller or less dynamic systems. Watch out for: The conductor can become a bottleneck or a single point of failure. This scenario is less flexible if you need agents to react dynamically or work without constant oversight. The jazz ensemble (federated/decentralized): Here, agents coordinate more directly with each other based on shared signals or rules, much like musicians in a jazz band improvising based on cues from each other and a common theme. 
Many real-world multi-agent systems (MAS) end up being a hybrid — perhaps a high-level orchestrator sets the stage, then groups of agents within that structure coordinate decentrally.

For agents to collaborate effectively, they often need a shared view of the world, or at least the parts relevant to their task. This could be the current status of a customer order, a shared knowledge base of product information or the collective progress towards a goal. Keeping this “collective brain” consistent and accessible across distributed agents is tough. Architectural patterns we lean on:

- The central library (centralized knowledge base): A single, authoritative place (like a database or a dedicated knowledge service) where all shared information lives. Agents check books out (read) and return them (write). Pro: a single source of truth, with consistency that is easier to enforce. Con: it can get hammered with requests, potentially slowing things down or becoming a choke point, so it must be seriously robust and scalable.
- Distributed notes (distributed cache): Agents keep local copies of frequently needed info for speed, backed by the central library. Pro: faster reads. Con: how do you know your copy is up to date? Cache invalidation and consistency become significant architectural puzzles.
- Shouting updates (message passing): Instead of agents constantly asking the library, the library (or other agents) shouts out “Hey, this piece of info changed!” via messages. Agents listen for updates they care about and update their own notes. Pro: agents are decoupled, which is good for event-driven patterns. Con: ensuring everyone gets the message and handles it correctly adds complexity. What if a message is lost?

The right choice depends on how critical up-to-the-second consistency is versus how much performance you need. The sketch below illustrates the message-passing option.
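Here is a minimal sketch of the “shouting updates” approach, assuming a toy in-process bus (the Bus class and the order.updated topic are invented for illustration; a production system would use a durable broker). Each agent holds a local copy of the shared state and refreshes it when a change event arrives, rather than polling a central store.

```python
from collections import defaultdict
from typing import Callable

class Bus:
    """Toy in-process pub/sub bus standing in for Kafka, RabbitMQ or similar."""
    def __init__(self) -> None:
        self.subscribers: defaultdict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        for handler in self.subscribers[topic]:
            handler(payload)

class Agent:
    """Keeps a local cache of order status, updated by events instead of polling."""
    def __init__(self, name: str, bus: Bus) -> None:
        self.name = name
        self.local_state: dict[str, str] = {}
        bus.subscribe("order.updated", self.on_order_updated)

    def on_order_updated(self, event: dict) -> None:
        self.local_state[event["order_id"]] = event["status"]

bus = Bus()
billing, shipping = Agent("billing", bus), Agent("shipping", bus)
bus.publish("order.updated", {"order_id": "42", "status": "paid"})
print(billing.local_state, shipping.local_state)  # both local caches now agree
```

The lost-message risk mentioned above is exactly what this toy version ignores: if publish ran over a real network and dropped an event, the two caches would silently diverge, which is why durable brokers and delivery acknowledgements matter.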
Building for when stuff goes wrong (error handling and recovery)

It’s not if an agent fails, it’s when. Your architecture needs to anticipate this. Think about:

- Watchdogs (supervision): Components whose job is simply to watch other agents. If an agent goes quiet or starts acting weird, the watchdog can try restarting it or alert the system.
- Try again, but be smart (retries and idempotency): If an agent’s action fails, it should often just try again. But this only works if the action is idempotent, meaning that doing it five times has the exact same result as doing it once (like setting a value, not incrementing it). If actions aren’t idempotent, retries can cause chaos.
- Cleaning up messes (compensation): If Agent A did something successfully but Agent B (a later step in the process) failed, you might need to “undo” Agent A’s work. Patterns like sagas help coordinate these multi-step, compensable workflows; see the sketch after this section.
- Knowing where you were (workflow state): Keeping a persistent log of the overall process helps. If the system goes down mid-workflow, it can pick up from the last known good step rather than starting over.
- Building firewalls (circuit breakers and bulkheads): These patterns prevent a failure in one agent or service from overloading or crashing others, containing the damage.

Making sure the job gets done right (consistent task execution)

Even with individual agent reliability, you need confidence that the entire collaborative task finishes correctly. Consider:

- Atomic-ish operations: While true ACID transactions are hard with distributed agents, you can design workflows to behave as close to atomically as possible using patterns like sagas.
- The unchanging logbook (event sourcing): Record every significant action and state change as an event in an immutable log. This gives you a perfect history, makes state reconstruction easy, and is great for auditing and debugging.
- Agreeing on reality (consensus): For critical decisions, you might need agents to agree before proceeding. This can involve simple voting mechanisms or more complex distributed consensus algorithms if trust or coordination is particularly challenging.
- Checking the work (validation): Build steps into your workflow to validate the output or state after an agent completes its task. If something looks wrong, trigger a reconciliation or correction process.
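To illustrate the saga idea, here is a sketch in which every step pairs a forward action with a compensating undo; when a later step fails, the completed steps are compensated in reverse order. The SagaStep structure and the toy order workflow are hypothetical, invented for this example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SagaStep:
    name: str
    action: Callable[[], None]      # the forward operation
    compensate: Callable[[], None]  # how to undo it if a later step fails

def run_saga(steps: list[SagaStep]) -> bool:
    completed: list[SagaStep] = []
    for step in steps:
        try:
            step.action()
            completed.append(step)
        except Exception as exc:
            print(f"step '{step.name}' failed ({exc}); compensating")
            for done in reversed(completed):  # undo in reverse order
                done.compensate()
            return False
    return True

def reserve_inventory() -> None:
    print("inventory reserved")

def release_inventory() -> None:
    print("inventory released")

def charge_card() -> None:
    raise RuntimeError("card declined")  # simulate a downstream failure

def refund_card() -> None:
    print("refund issued")

ok = run_saga([
    SagaStep("reserve", reserve_inventory, release_inventory),
    SagaStep("charge", charge_card, refund_card),
])
print("saga committed" if ok else "saga rolled back")
```

Note that each compensating action needs the same idempotency care as the forward action, since the coordinator may itself retry after a crash.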
The best architecture needs the right foundation:

- The post office (message queues/brokers like Kafka or RabbitMQ): Absolutely essential for decoupling agents. Senders put messages on the queue; agents interested in those messages pick them up. This enables asynchronous communication, handles traffic spikes and is key for resilient distributed systems.
- The shared filing cabinet (knowledge stores/databases): This is where your shared state lives. Choose the right type (relational, NoSQL, graph) based on your data structure and access patterns. It must be performant and highly available.
- The X-ray machine (observability platforms): Logs, metrics, tracing — you need these. Debugging distributed systems is notoriously hard. Being able to see exactly what every agent was doing, when and how they were interacting is non-negotiable.
- The directory (agent registry): How do agents find each other or discover the services they need? A central registry helps manage this complexity.
- The playground (containerization and orchestration like Kubernetes): This is how you actually deploy, manage and scale all those individual agent instances reliably.

How do agents chat? (Communication protocol choices)

The way agents talk impacts everything from performance to how tightly coupled they are:

- Your standard phone call (REST/HTTP): Simple, works everywhere and good for basic request/response, but it can feel a bit chatty and be less efficient for high volume or complex data structures.
- The structured conference call (gRPC): Uses efficient data formats, supports different call types including streaming and is type-safe. It is great for performance but requires defining service contracts.
- The bulletin board (message queues — protocols like AMQP, MQTT): Agents post messages to topics; other agents subscribe to the topics they care about. This is asynchronous, highly scalable and completely decouples senders from receivers.
- The direct line (RPC — less common): Agents call functions directly on other agents. This is fast but creates very tight coupling — agents need to know exactly who they’re calling and where they are.

Choose the protocol that fits the interaction pattern. Is it a direct request? A broadcast event? A stream of data?

Putting it all together

Building reliable, scalable multi-agent systems isn’t about finding a magic bullet; it’s about making smart architectural choices based on your specific needs. Will you lean more hierarchical for control or federated for resilience? How will you manage that crucial shared state? What’s your plan for when (not if) an agent goes down? What infrastructure pieces are non-negotiable?

It’s complex, yes, but by focusing on these architectural blueprints — orchestrating interactions, managing shared knowledge, planning for failure, ensuring consistency and building on a solid infrastructure foundation — you can tame the complexity and build the robust, intelligent systems that will drive the next wave of enterprise AI.

Nikhil Gupta is the AI product management leader/staff product manager at Atlassian.
  • The battle to AI-enable the web: NLweb and what enterprises need to know

    Microsoft's NLWeb protocol transforms websites into AI-powered apps with conversational interfaces.
  • OpenAI updates Operator to o3, making its $200 monthly ChatGPT Pro subscription more enticing

    Operator remains a research preview and is accessible only to ChatGPT Pro users. The Responses API version will continue to use GPT-4o.
  • The 3 biggest bombshells from this week’s AI extravaganza

    Basketball has March Madness. Tech has the Consumer Electronics Show. AI has been waiting for its big moment—and this week may finally be it.
    With Microsoft’s Build and Google’s I/O developer conferences happening back-to-back, it was already primed to be a big week. Microsoft announced 50 new AI tools alone, and Google followed up with its own slate just a day later. Then, out of the blue, Anthropic dogpiled with Claude 4, the latest version of its large language model (LLM), on Thursday.
    While the maelstrom of announcements included some gee-whiz trinkets (we’re looking at you, Google Virtual Try-On), anyone looking to build a business with AI should find plenty to look forward to and even some new tools to use immediately.
    Struggling to keep up? Here are the biggest announcements from every company, and how they’ll reshape the AI landscape in the coming months.
    Microsoft wants AI agents to talk to one another
    By giving AI the power to perform work like a human rather than simply talking like one, agents represent an obvious next step for LLMs. But there’s been one major caveat holding them back: They can’t easily interact with one another. An agentic AI that books plane tickets for business travel and another that books hotels sounds great, until you land in London with a hotel room in Madrid.
    Microsoft took a major step in resolving this impasse by adopting Model Context Protocol (MCP), a standard way for different agents – even those using different LLMs – to communicate. Anthropic actually created the standard in Nov. 2024. Still, Microsoft’s adoption means it’s well on its way to becoming a fixture of future agentic architecture, as HTML was for the open web. Microsoft also added MCP to Azure AI Foundry, its tool for creating AI apps, so users can begin building agents that interact with one another immediately.
    So what? Agentic AI remains in its infancy, but a widely adopted standard will pave the way for the next generation of agentic tools. Standardization between competitors means you’ll have your pick of the litter on future LLMs when automating processes, rather than getting locked into a single company’s ecosystem.
    Claude 4 makes coders swoon
    With just a 3.3% share of the generative AI market, Claude is often overshadowed by ChatGPT and Gemini. However, developers won’t want to sleep on Opus 4 and Sonnet 4, which arrived unexpectedly on Thursday with some major coding bragging rights.
    Perhaps most impressively, Claude 4 boasts marathon runtimes of up to seven hours in its “extended thinking” mode, which allows it to take thousands of steps and use tools like web search. Anthropic claims it will also explore more approaches, catch more errors, and break down its reasoning for more complex problems.
    With these improvements, Claude Opus 4 shot to the top of the popular SWE-bench software engineering benchmark with a score of 72.5%, besting both OpenAI o3 (69.1%) and Gemini 2.5 Pro (63.2%).
    So what? While benchmarks don’t always tell the whole story, Claude has already earned a reputation as the LLM of choice for developers. Claude 4 further cements that reputation with improvements for the software engineering community, which will help differentiate it from its more general-purpose peers.
    Google AI Mode upends search
    Google debuted plenty of consumer AI at I/O 2025, from the aforementioned virtual try-ons to Google Beam, which turns 2D video streams into live, hologram-like models with the help of six different camera angles and a lot of AI. However, the most consequential change for enterprises may well be AI Mode for search.
    Like AI Overviews before it, AI Mode integrates Gemini into the search experience much more thoroughly. When you activate an AI Mode search, Google executes a “query fan-out technique,” which breaks your query into multiple searches and executes them simultaneously, then stitches together the results. While this mode was previously available for Google Labs users, this week, it’s going mainstream.
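    Google hasn’t published the implementation details, but the shape of a query fan-out is straightforward to sketch: decompose the query, run the sub-searches concurrently, then stitch the results back together. The snippet below is a generic Python illustration; the search placeholder and the hard-coded sub-query split are assumptions, not Google’s actual system.

```python
import asyncio

async def search(query: str) -> list[str]:
    """Placeholder for a real search backend call."""
    await asyncio.sleep(0.1)  # simulate network latency
    return [f"result for '{query}'"]

async def fan_out(user_query: str) -> list[str]:
    # A real system would use a model to decompose the query; we fake it here.
    sub_queries = [f"{user_query} reviews", f"{user_query} pricing",
                   f"{user_query} alternatives"]
    # Run all sub-searches concurrently, then stitch the results together.
    results = await asyncio.gather(*(search(q) for q in sub_queries))
    return [hit for hits in results for hit in hits]

print(asyncio.run(fan_out("standing desk")))
```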
    So what? Even if you’re not using AI Mode personally, any change to Google Search sends ripples into the pond for the millions of businesses that depend on Google Search to draw eyeballs. AI Overviews upended the search engine optimization (SEO) industry, and AI Mode may be even more dramatic. The way most people find information online is changing, and fast.

  • Call of Duty sees boost on Twitch thanks to Verdansk map | StreamElements

    The streaming space saw a relatively quiet April, according to StreamElements’ latest State of the Stream report. Twitch saw fewer hours watched than in March, which the report attributes at least in part to a lack of “major” game releases — games like South of Midnight and Oblivion Remastered weren’t enough to move the needle on viewership. The most notable thing about the month was the dramatic spike in viewership for Call of Duty: Warzone, which saw 45 million hours watched in April, a 146% increase.
    According to StreamElements’ report, which was done in partnership with Rainmaker.gg, Twitch saw a total of 1.573 billion hours watched in April, less than even February despite having two more days. Rainmaker’s data shows viewers watched about 52 million hours per day in April, a lower daily average than in any other recent month.

    Or Perry, StreamElements CEO, attributed the viewership spike to the re-launch of the much-beloved Verdansk map. “The highlight of April was Call of Duty: Warzone experiencing a whopping 146% surge in viewership on Twitch, driven by the return of an updated version of the fan favorite Verdansk map. Nostalgia is a powerful tool because it makes things feel instantly familiar, with Warzone’s latest release standing out by reestablishing that emotional connection while upgrading rather than simply recycling old content.”
    The rest of the report covers the top 10 streamers on Twitch. Caedrel remains at the top of the list, thanks to his League of Legends commentary streams — he covered the LCK Spring Split in April, and also received the EMEA Masters Trophy his team, Los Ratones, won at the winter event. It’s his third month at the top.

  • Omeda Studios announces Predecessor esports summer tournaments | The DeanBeat

    Omeda Studios announced a full competitive summer calendar for their free-to-play third-person action MOBA Predecessor.
  • How Saudi Arabia and Savvy’s long-term push into gaming is proceeding | Jesse Meschuk interview

    Savvy Games Group has made a lot of news as it has built the newest financial empire in games with acquisitions of companies.
  • Why enterprise RAG systems fail: Google study introduces ‘sufficient context’ solution

    Google's "sufficient context" helps refine RAG systems, reduce LLM hallucinations, and boost AI reliability for business applications.