AI News
AI News provides news, analysis and opinion on the latest artificial intelligence breakthroughs.
  • The AI execution gap: Why 80% of projects don’t reach production

    Enterprise artificial intelligence investment is unprecedented, with IDC projecting global spending on AI and GenAI to double to $631 billion by 2028. Yet beneath the impressive budget allocations and boardroom enthusiasm lies a troubling reality: most organisations struggle to translate their AI ambitions into operational success.

    The sobering statistics behind AI's promise

    ModelOp's 2025 AI Governance Benchmark Report, based on input from 100 senior AI and data leaders at Fortune 500 enterprises, reveals a disconnect between aspiration and execution. While more than 80% of enterprises have 51 or more generative AI projects in proposal phases, only 18% have successfully deployed more than 20 models into production.

    The execution gap represents one of the most significant challenges facing enterprise AI today. Most generative AI projects still require 6 to 18 months to go live – if they reach production at all. The result is delayed returns on investment, frustrated stakeholders, and diminished confidence in enterprise AI initiatives.

    The cause: Structural, not technical barriers

    The biggest obstacles preventing AI scalability aren't technical limitations – they're structural inefficiencies plaguing enterprise operations. The ModelOp benchmark report identifies several problems that create what experts call a "time-to-market quagmire".

    Fragmented systems plague implementation. 58% of organisations cite fragmented systems as the top obstacle to adopting governance platforms. Fragmentation creates silos where different departments use incompatible tools and processes, making it nearly impossible to maintain consistent oversight of AI initiatives.

    Manual processes dominate despite digital transformation. 55% of enterprises still rely on manual processes – including spreadsheets and email – to manage AI use case intake. Reliance on such antiquated methods creates bottlenecks, increases the likelihood of errors, and makes it difficult to scale AI operations.

    Lack of standardisation hampers progress. Only 23% of organisations implement standardised intake, development, and model management processes. Without these elements, each AI project becomes a unique challenge requiring custom solutions and extensive coordination across multiple teams.

    Enterprise-level oversight remains rare. Just 14% of companies perform AI assurance at the enterprise level, increasing the risk of duplicated efforts and inconsistent oversight. The lack of centralised governance means organisations often discover they're solving the same problems multiple times in different departments.

    The governance revolution: From obstacle to accelerator

    A change is taking place in how enterprises view AI governance. Rather than seeing it as a compliance burden that slows innovation, forward-thinking organisations recognise governance as an important enabler of scale and speed.

    Leadership alignment signals a strategic shift. The ModelOp benchmark data reveals a change in organisational structure: 46% of companies now assign accountability for AI governance to a Chief Innovation Officer – more than four times the number who place accountability under Legal or Compliance. This repositioning reflects a new understanding that governance isn't solely about risk management; it can enable innovation.

    Investment follows strategic priority. Financial commitment to AI governance underscores its importance. According to the report, 36% of enterprises have budgeted at least $1 million annually for AI governance software, while 54% have allocated resources specifically for AI Portfolio Intelligence to track value and ROI.

    What high-performing organisations do differently

    The enterprises that successfully bridge the execution gap share several characteristics in their approach to AI implementation:

    Standardised processes from day one. Leading organisations implement standardised intake, development, and model review processes across AI initiatives. Consistency eliminates the need to reinvent workflows for each project and ensures that all stakeholders understand their responsibilities.

    Centralised documentation and inventory. Rather than allowing AI assets to proliferate in disconnected systems, successful enterprises maintain centralised inventories that provide visibility into every model's status, performance, and compliance posture.

    Automated governance checkpoints. High-performing organisations embed automated governance checkpoints throughout the AI lifecycle, helping ensure compliance requirements and risk assessments are addressed systematically rather than as afterthoughts.

    End-to-end traceability. Leading enterprises maintain complete traceability of their AI models, including data sources, training methods, validation results, and performance metrics.

    Measurable impact of structured governance

    The benefits of implementing comprehensive AI governance extend beyond compliance. Organisations that adopt lifecycle automation platforms reportedly see dramatic improvements in operational efficiency and business outcomes. A financial services firm profiled in the ModelOp report halved its time to production and cut issue resolution time by 80% after implementing automated governance processes. Such improvements translate directly into faster time-to-value and increased confidence among business stakeholders.

    Enterprises with robust governance frameworks report the ability to manage many times more models simultaneously while maintaining oversight and control. This scalability lets organisations pursue AI initiatives across multiple business units without overwhelming their operational capabilities.

    The path forward: From stuck to scaled

    The message from industry leaders is that the gap between AI ambition and execution is solvable, but it requires a shift in approach. Rather than treating governance as a necessary evil, enterprises should recognise that it enables AI innovation at scale.

    Immediate action items for AI leaders

    Organisations looking to escape the time-to-market quagmire should prioritise the following:

      • Audit the current state: Conduct an assessment of existing AI initiatives, identifying fragmented processes and manual bottlenecks
      • Standardise workflows: Implement consistent processes for AI use case intake, development, and deployment across all business units
      • Invest in integration: Deploy platforms that unify disparate tools and systems under a single governance framework
      • Establish enterprise oversight: Create centralised visibility into all AI initiatives, with real-time monitoring and reporting capabilities

    The competitive advantage of getting it right

    Organisations that solve the execution challenge will be able to bring AI solutions to market faster, scale more efficiently, and maintain the trust of stakeholders and regulators. Enterprises that continue with fragmented processes and manual workflows will find themselves at a disadvantage compared to their more organised competitors. Operational excellence isn't just about efficiency; it's about survival.

    The data shows enterprise AI investment will continue to grow. The question isn't whether organisations will invest in AI, but whether they'll develop the operational capabilities needed to realise a return on that investment. The opportunity to lead in the AI-driven economy has never been greater for those willing to embrace governance as an enabler, not an obstacle.
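    The practices described above – a centralised model inventory plus automated governance checkpoints that gate promotion to production – can be sketched in a few lines. This is a minimal, hypothetical illustration (the class and field names are invented for this sketch and do not represent ModelOp's actual platform or API):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class ModelRecord:
        """One entry in a centralised AI model inventory (hypothetical schema)."""
        name: str
        stage: str = "proposed"  # proposed -> production
        data_sources: list = field(default_factory=list)      # end-to-end traceability
        validation_results: dict = field(default_factory=dict)

    class ModelInventory:
        """Single source of truth for every model's status and compliance posture."""

        def __init__(self):
            self._models = {}

        def register(self, record: ModelRecord):
            self._models[record.name] = record

        def checkpoint(self, name: str) -> bool:
            """Automated governance gate: promote a model to production only if
            it documents its data lineage and has passed validation."""
            record = self._models[name]
            if record.data_sources and record.validation_results.get("passed", False):
                record.stage = "production"
            return record.stage == "production"

    inventory = ModelInventory()
    inventory.register(ModelRecord("credit-risk", data_sources=["loans_db"],
                                   validation_results={"passed": True}))
    inventory.register(ModelRecord("chat-summariser"))  # no documented lineage yet

    print(inventory.checkpoint("credit-risk"))      # gate passes: True
    print(inventory.checkpoint("chat-summariser"))  # blocked by the checkpoint: False
    ```

    The point of the sketch is that the gate is systematic rather than an afterthought: a model without documented data sources or validation evidence simply cannot reach production, regardless of which department registered it.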
    WWW.ARTIFICIALINTELLIGENCE-NEWS.COM
  • MedTech AI, hardware, and clinical application programmes

    Modern healthcare innovations span AI, devices, software, imaging, and regulatory frameworks, all requiring stringent coordination. Generative AI arguably has the strongest transformative potential in healthcare technology programmes, and it is already being applied across domains such as R&D, commercial operations, and supply chain management.

    Traditional models, like face-to-face appointments and paper-based processes, may not be sufficient for today's fast-paced, data-driven medical landscape. Healthcare professionals and patients are therefore seeking more convenient and efficient ways to access and share information while meeting the complex standards of modern medical science.

    According to McKinsey, Medtech companies are at the forefront of healthcare innovation, with the potential to capture billions of dollars annually in productivity gains, plus billions more in revenue from product and service innovations enabled by GenAI adoption. A 2024 McKinsey survey revealed that around two thirds of Medtech executives have already implemented GenAI, with approximately 20% scaling their solutions up and reporting substantial productivity benefits.

    While advanced technology implementation is growing across the medical industry, challenges persist. Organisations face hurdles like data integration issues, decentralised strategies, and skill gaps. Together, these highlight the need for a more streamlined approach to GenAI deployment.

    Of all the Medtech domains, R&D is leading the way in GenAI adoption. Being the most comfortable with new technologies, R&D departments use GenAI tools to streamline work processes, such as summarising research papers or scientific articles – a grassroots adoption trend. Individual researchers are using AI to enhance productivity even when no formal company-wide strategy is in place. While AI tools automate and accelerate R&D tasks, human review is still required to ensure final submissions are correct and satisfactory. GenAI is proving to reduce time spent on administrative tasks and to improve research accuracy and depth, with some companies experiencing 20% to 30% gains in research productivity.

    KPIs for success in healthcare product programmes

    Measuring business performance is essential in the healthcare sector. The number one goal is, of course, to deliver high-quality care while maintaining efficient operations. By measuring and analysing KPIs, healthcare providers are better positioned to improve patient outcomes through data-driven decisions. KPIs can also improve resource allocation and encourage continuous improvement in all areas of care.

    Healthcare product programmes are structured initiatives that prioritise the development, delivery, and continual optimisation of medical products. To succeed, they require cross-functional coordination of clinical, technical, regulatory, and business teams. Time to market is critical, ensuring a product moves from concept to launch as quickly as possible.

    Particular emphasis should be placed on labelling and documentation: McKinsey notes that AI-assisted labelling has resulted in a 20%-30% improvement in operational efficiency. Resource utilisation rates are also important, showing how efficiently time, budget, and headcount are used during product development. In the healthcare sector, KPIs ought to cover operational efficiency, patient outcomes, the financial health of the business, and patient satisfaction. To achieve a comprehensive view of performance, they can be categorised into financial, operational, clinical quality, and patient experience.

    Bridging user experience with technical precision – design awards

    Innovation is no longer judged solely by technical performance; user experience is equally important. Some of the latest innovations in healthcare are recognised at the UX Design Awards – products that exemplify the best in user experience as well as technical precision. Top products prioritise the needs and experiences of both patients and healthcare professionals while ensuring each product meets the sector's rigorous clinical and regulatory standards.

    One example is the CIARTIC Move by Siemens Healthineers, a self-driving 3D C-arm imaging system that lets surgeons control the device wirelessly from within the sterile field. Computer hardware company ASUS has also received accolades for its HealthConnect App and VivoWatch Series, showcasing the fusion of AIoT-driven smart healthcare solutions with user-friendly interfaces – sometimes in what are essentially consumer devices. This demonstrates how technical innovation is being made accessible and increasingly intuitive as patients gain technical fluency.

    Navigating regulatory and product development pathways simultaneously

    Establishing clinical and regulatory pathways early is important, as it enables healthcare teams to feed a twin stream of findings back into development. GenAI adoption has become a transformative approach, automating the production and refinement of complex documents, mixed data sets, and structured and unstructured data. By integrating regulatory considerations early and adopting technologies like GenAI as part of agile practices, healthcare product programmes help teams navigate a regulatory landscape that can often shift. Baking a regulatory mindset into a team early helps ensure both compliance and continued innovation.
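    The four-way KPI categorisation described above can be sketched as a simple mapping. This is a hypothetical illustration: the individual metric names (beyond "time to market", "resource utilisation rate", and "patient satisfaction", which the text mentions) are invented examples, not a standard taxonomy:

    ```python
    # Hypothetical sketch: grouping healthcare product-programme KPIs into the
    # four categories named in the text. Metric names are illustrative only.
    KPI_CATEGORIES = {
        "financial": ["budget variance", "cost per unit developed"],
        "operational": ["time to market", "resource utilisation rate"],
        "clinical_quality": ["adverse event rate", "validation pass rate"],
        "patient_experience": ["patient satisfaction score", "app usability rating"],
    }

    def categorise(kpi: str) -> str:
        """Return the category a KPI belongs to, or 'uncategorised' if unknown."""
        for category, kpis in KPI_CATEGORIES.items():
            if kpi in kpis:
                return category
        return "uncategorised"

    print(categorise("time to market"))  # operational
    ```

    A mapping like this is what lets a dashboard roll individual metrics up into the comprehensive, four-dimensional view of programme performance the article describes.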
    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.
    MedTech AI, hardware, and clinical application programmes
    Modern healthcare innovations span AI, devices, software, images, and regulatory frameworks, all requiring stringent coordination. Generative AI arguably has the strongest transformative potential in healthcare technology programmes, with it already being applied across various domains, such as R&D, commercial operations, and supply chain management.Traditional models for medical appointments, like face-to-face appointments, and paper-based processes may not be sufficient to meet the fast-paced, data-driven medical landscape of today. Therefore, healthcare professionals and patients are seeking more convenient and efficient ways to access and share information, meeting the complex standards of modern medical science. According to McKinsey, Medtech companies are at the forefront of healthcare innovation, estimating they could capture between billion and billion annually in productivity gains. Through GenAI adoption, an additional billion plus in revenue is estimated from products and service innovations. A McKinsey 2024 survey revealed around two thirds of Medtech executives have already implemented Gen AI, with approximately 20% scaling their solutions up and reporting substantial benefits to productivity.  While advanced technology implementation is growing across the medical industry, challenges persist. Organisations face hurdles like data integration issues, decentralised strategies, and skill gaps. Together, these highlight a need for a more streamlined approach to Gen AI deployment. Of all the Medtech domains, R&D is leading the way in Gen AI adoption. Being the most comfortable with new technologies, R&D departments use Gen AI tools to streamline work processes, such as summarising research papers or scientific articles, highlighting a grassroots adoption trend. 
Individual researchers are using AI to enhance productivity, even when no formal company-wide strategies are in place.While AI tools automate and accelerate R&D tasks, human review is still required to ensure final submissions are correct and satisfactory. Gen AI is proving to reduce time spent on administrative tasks for teams and improve research accuracy and depth, with some companies experiencing 20% to 30% gains in research productivity. KPIs for success in healthcare product programmesMeasuring business performance is essential in the healthcare sector. The number one goal is, of course, to deliver high-quality care, yet simultaneously maintain efficient operations. By measuring and analysing KPIs, healthcare providers are in a better position to improve patient outcomes through their data-based considerations. KPIs can also improve resource allocation, and encourage continuous improvement in all areas of care. In terms of healthcare product programmes, these structured initiatives prioritise the development, delivery, and continual optimisation of medical products. But to be a success, they require cross-functional coordination of clinical, technical, regulatory, and business teams. Time to market is critical, ensuring a product moves from the concept stage to launch as quickly as possible.Of particular note is the emphasis needing to be placed on labelling and documentation. McKinsey notes that AI-assisted labelling has resulted in a 20%-30% improvement in operational efficiency. Resource utilisation rates are also important, showing how efficiently time, budget, and/or headcount are used during the developmental stage of products. In the healthcare sector, KPIs ought to focus on several factors, including operational efficiency, patient outcomes, financial health of the business, and patient satisfaction. 
To achieve a comprehensive view of performance, these can be categorised into financial, operational, clinical quality, and patient experience measures.

Bridging user experience with technical precision – design awards

Innovation is no longer judged solely by technical performance; user experience (UX) is equally important. Some of the latest innovations in healthcare are recognised at the UX Design Awards, which honour products exemplifying the best in user experience as well as technical precision. Top products prioritise the needs and experiences of both patients and healthcare professionals while ensuring each product meets the sector's rigorous clinical and regulatory standards.

One example is the CIARTIC Move by Siemens Healthineers, a self-driving 3D C-arm imaging system that surgeons can control wirelessly from within the sterile field while they operate. Computer hardware company ASUS has also received accolades for its HealthConnect App and VivoWatch Series, showcasing the fusion of AIoT-driven smart healthcare solutions with user-friendly interfaces – sometimes in what are essentially consumer devices. This demonstrates how technical innovation is being made accessible and increasingly intuitive as patients gain technical fluency.

Navigating regulatory and product development pathways simultaneously

Establishing clinical and regulatory paths in parallel is important, as it enables healthcare teams to feed a twin stream of findings back into development. Gen AI has become a transformative approach here, automating the production and refinement of complex documents drawn from mixed, structured, and unstructured data. By integrating regulatory considerations early and adopting technologies like Gen AI as part of agile practices, healthcare product programmes help teams navigate a regulatory landscape that can often shift. Baking a regulatory mindset into a team early helps ensure both compliance and continued innovation.
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
  • NVIDIA helps Germany lead Europe’s AI manufacturing race

Germany and NVIDIA are building possibly the most ambitious European tech project of the decade: the continent's first industrial AI cloud.

NVIDIA has been on a European tour over the past month, with CEO Jensen Huang charming audiences at London Tech Week before dazzling the crowds at Paris's VivaTech. But it was his meeting with German Chancellor Friedrich Merz that might prove the most consequential stop. The resulting partnership between NVIDIA and Deutsche Telekom isn't just another corporate handshake; it's potentially a turning point for European technological sovereignty.

An "AI factory" (as they're calling it) will be created with a focus on manufacturing, which is hardly surprising given Germany's renowned industrial heritage. The facility aims to give European industrial players the computational firepower to revolutionise everything from design to robotics.

"In the era of AI, every manufacturer needs two factories: one for making things, and one for creating the intelligence that powers them," said Huang. "By building Europe's first industrial AI infrastructure, we're enabling the region's leading industrial companies to advance simulation-first, AI-driven manufacturing."

It's rare to hear such urgency from a telecoms CEO, but Deutsche Telekom's Timotheus Höttges added: "Europe's technological future needs a sprint, not a stroll. We must seize the opportunities of artificial intelligence now, revolutionise our industry, and secure a leading position in the global technology competition. Our economic success depends on quick decisions and collaborative innovations."

The first phase alone will deploy 10,000 NVIDIA Blackwell GPUs spread across various high-performance systems.
That makes this Germany's largest AI deployment ever, and a statement that the country isn't content to watch from the sidelines as AI transforms global industry. A Deloitte study recently highlighted the critical importance of AI technology development to Germany's future competitiveness, particularly noting the need for expanded data centre capacity. When you consider that demand is expected to triple within just five years, this investment seems less like ambition and more like necessity.

Robots teaching robots

One of the early adopters is NEURA Robotics, a German firm that specialises in cognitive robotics. It is using this computational muscle to power something called the Neuraverse, essentially a connected network where robots can learn from each other. Think of it as a robotic hive mind for skills ranging from precision welding to household ironing, with each machine contributing its learnings to a collective intelligence.

"Physical AI is the electricity of the future—it will power every machine on the planet," said David Reger, Founder and CEO of NEURA Robotics. "Through this initiative, we're helping build the sovereign infrastructure Europe needs to lead in intelligent robotics and stay in control of its future."

The implications of this AI project for manufacturing in Germany could be profound. This isn't just about making existing factories slightly more efficient; it's about reimagining what manufacturing can be in an age of intelligent machines.

AI for more than just Germany's industrial titans

What's particularly promising about this project is its potential reach beyond Germany's industrial titans. The famed Mittelstand – the network of specialised small and medium-sized businesses that forms the backbone of the German economy – stands to benefit. These companies often lack the resources to build their own AI infrastructure but possess the specialised knowledge that makes them prime candidates for AI-enhanced innovation.
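The article doesn't say how the Neuraverse pools learning across machines, but one common pattern for this kind of collective skill-sharing is federated averaging, sketched below in Python with toy "skill parameters". The function name and all numbers are illustrative assumptions, not NEURA's actual design.

```python
from statistics import fmean

def federated_average(local_params: list[list[float]]) -> list[float]:
    """Average each parameter position across all machines' local models."""
    return [fmean(column) for column in zip(*local_params)]

# Three robots refine the same (made-up) welding-skill parameters locally,
# then contribute their updates to the shared pool.
robot_updates = [
    [0.9, 0.1, 0.4],  # robot A
    [0.7, 0.3, 0.2],  # robot B
    [0.8, 0.2, 0.3],  # robot C
]
shared_skill = federated_average(robot_updates)
# Every robot can now download the pooled parameters instead of
# learning the skill from scratch.
print(shared_skill)
```

The appeal of this family of techniques is that only model parameters move across the network, not raw sensor data, which matters for the data sovereignty themes the article raises.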
Democratising access to cutting-edge AI could help preserve their competitive edge in a challenging global market. Academic and research institutions will also gain access, potentially accelerating innovation across numerous fields, and the approximately 900 Germany-based startups in NVIDIA's Inception program will be eligible to use these resources, potentially unleashing a wave of entrepreneurial AI applications.

However impressive this massive project is, it's viewed merely as a stepping stone towards something even more ambitious: Europe's AI gigafactory. This planned 100,000-GPU initiative, backed by the EU and Germany, won't come online until 2027, but it represents Europe's determination to carve out its own technological future. As other European telecom providers follow suit with their own AI infrastructure projects, we may be witnessing the beginning of a concerted effort to establish technological sovereignty across the continent.

For a region that has often found itself caught between American tech dominance and Chinese ambitions, building indigenous AI capability represents more than economic opportunity. Whether this bold project in Germany will succeed remains to be seen, but one thing is clear: Europe is no longer content to be a passive consumer of AI technology developed elsewhere.
  • Anthropic launches Claude AI models for US national security

    Anthropic has unveiled a custom collection of Claude AI models designed for US national security customers. The announcement represents a potential milestone in the application of AI within classified government environments.

    The ‘Claude Gov’ models have already been deployed by agencies operating at the highest levels of US national security, with access strictly limited to those working within such classified environments.

    Anthropic says these Claude Gov models emerged from extensive collaboration with government customers to address real-world operational requirements. Despite being tailored for national security applications, Anthropic maintains that these models underwent the same rigorous safety testing as other Claude models in their portfolio.

    Specialised AI capabilities for national security

    The specialised models deliver improved performance across several critical areas for government operations. They feature enhanced handling of classified materials, with fewer instances where the AI refuses to engage with sensitive information—a common frustration in secure environments.

    Additional improvements include better comprehension of documents within intelligence and defence contexts, enhanced proficiency in languages crucial to national security operations, and superior interpretation of complex cybersecurity data for intelligence analysis.

    However, this announcement arrives amid ongoing debates about AI regulation in the US. Anthropic CEO Dario Amodei recently expressed concerns about proposed legislation that would grant a decade-long freeze on state regulation of AI.

    Balancing innovation with regulation

    In a guest essay published in The New York Times this week, Amodei advocated for transparency rules rather than regulatory moratoriums. He detailed internal evaluations revealing concerning behaviours in advanced AI models, including an instance where Anthropic’s newest model threatened to expose a user’s private emails unless a shutdown plan was cancelled.

    Amodei compared AI safety testing to wind tunnel trials for aircraft, both designed to expose defects before public release, emphasising that safety teams must detect and block risks proactively.

    Anthropic has positioned itself as an advocate for responsible AI development. Under its Responsible Scaling Policy, the company already shares details about testing methods, risk-mitigation steps, and release criteria—practices Amodei believes should become standard across the industry.

    He suggests that formalising similar practices industry-wide would enable both the public and legislators to monitor capability improvements and determine whether additional regulatory action becomes necessary.

    Implications of AI in national security

    The deployment of advanced models within national security contexts raises important questions about the role of AI in intelligence gathering, strategic planning, and defence operations.

    Amodei has expressed support for export controls on advanced chips and the military adoption of trusted systems to counter rivals like China, indicating Anthropic’s awareness of the geopolitical implications of AI technology.

    The Claude Gov models could potentially serve numerous applications for national security, from strategic planning and operational support to intelligence analysis and threat assessment—all within the framework of Anthropic’s stated commitment to responsible AI development.

    Regulatory landscape

    As Anthropic rolls out these specialised models for government use, the broader regulatory environment for AI remains in flux. The Senate is currently considering language that would institute a moratorium on state-level AI regulation, with hearings planned before voting on the broader technology measure.

    Amodei has suggested that states could adopt narrow disclosure rules that defer to a future federal framework, with a supremacy clause eventually preempting state measures to preserve uniformity without halting near-term local action.

    This approach would allow for some immediate regulatory protection while working toward a comprehensive national standard.

    As these technologies become more deeply integrated into national security operations, questions of safety, oversight, and appropriate use will remain at the forefront of both policy discussions and public debate.

    For Anthropic, the challenge will be maintaining its commitment to responsible AI development while meeting the specialised needs of government customers for critical applications such as national security.

    See also: Reddit sues Anthropic over AI data scraping

    The post Anthropic launches Claude AI models for US national security appeared first on AI News.
    #anthropic #launches #claude #models #national
    Anthropic launches Claude AI models for US national security
Anthropic has unveiled a custom collection of Claude AI models designed for US national security customers. The announcement represents a potential milestone in the application of AI within classified government environments.

The ‘Claude Gov’ models have already been deployed by agencies operating at the highest levels of US national security, with access strictly limited to those working within such classified environments. Anthropic says the Claude Gov models emerged from extensive collaboration with government customers to address real-world operational requirements. Despite being tailored for national security applications, Anthropic maintains that these models underwent the same rigorous safety testing as the other Claude models in its portfolio.

Specialised AI capabilities for national security

The specialised models deliver improved performance across several areas critical to government operations. They feature enhanced handling of classified materials, with fewer instances where the AI refuses to engage with sensitive information—a common frustration in secure environments. Additional improvements include better comprehension of documents within intelligence and defence contexts, enhanced proficiency in languages crucial to national security operations, and superior interpretation of complex cybersecurity data for intelligence analysis.

However, the announcement arrives amid ongoing debates about AI regulation in the US. Anthropic CEO Dario Amodei recently expressed concerns about proposed legislation that would impose a decade-long freeze on state regulation of AI.

Balancing innovation with regulation

In a guest essay published in The New York Times this week, Amodei advocated for transparency rules rather than regulatory moratoriums. He detailed internal evaluations revealing concerning behaviours in advanced AI models, including an instance where Anthropic’s newest model threatened to expose a user’s private emails unless a shutdown plan was cancelled. Amodei compared AI safety testing to wind tunnel trials for aircraft, designed to expose defects before public release, emphasising that safety teams must detect and block risks proactively.

Anthropic has positioned itself as an advocate for responsible AI development. Under its Responsible Scaling Policy, the company already shares details about testing methods, risk-mitigation steps, and release criteria—practices Amodei believes should become standard across the industry. He suggests that formalising similar practices industry-wide would enable both the public and legislators to monitor capability improvements and determine whether additional regulatory action becomes necessary.

Implications of AI in national security

The deployment of advanced models within national security contexts raises important questions about the role of AI in intelligence gathering, strategic planning, and defence operations. Amodei has expressed support for export controls on advanced chips and the military adoption of trusted systems to counter rivals like China, indicating Anthropic’s awareness of the geopolitical implications of AI technology.

The Claude Gov models could potentially serve numerous national security applications, from strategic planning and operational support to intelligence analysis and threat assessment—all within the framework of Anthropic’s stated commitment to responsible AI development.

Regulatory landscape

As Anthropic rolls out these specialised models for government use, the broader regulatory environment for AI remains in flux. The Senate is currently considering language that would institute a moratorium on state-level AI regulation, with hearings planned before voting on the broader technology measure. Amodei has suggested that states could adopt narrow disclosure rules that defer to a future federal framework, with a supremacy clause eventually preempting state measures to preserve uniformity without halting near-term local action. This approach would allow for some immediate regulatory protection while working toward a comprehensive national standard.

As these technologies become more deeply integrated into national security operations, questions of safety, oversight, and appropriate use will remain at the forefront of both policy discussions and public debate. For Anthropic, the challenge will be maintaining its commitment to responsible AI development while meeting the specialised needs of government customers for critical applications such as national security.

(Image credit: Anthropic)

See also: Reddit sues Anthropic over AI data scraping

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

The post Anthropic launches Claude AI models for US national security appeared first on AI News.
  • Reddit sues Anthropic over AI data scraping

Reddit is accusing Anthropic of building its Claude AI models on the back of Reddit’s users, without permission and without paying for it.

Anyone who uses Reddit, even a web-crawling bot, agrees to the site’s user agreement. That agreement is clear: you cannot take content from the site and use it for your own commercial products without a written deal. Reddit claims Anthropic’s bots have been doing exactly that for years, scraping massive amounts of conversations and posts to train and improve Claude.

What makes this lawsuit particularly spicy is the way it goes after Anthropic’s reputation. Anthropic has worked hard to brand itself as the ethical, trustworthy AI company, the “white knight” of the industry. The lawsuit, however, calls these claims nothing more than “empty marketing gimmicks”.

For instance, Reddit points to a statement from July 2024 in which Anthropic claimed it had stopped its bots from crawling Reddit. The lawsuit says this was “false”, alleging that Reddit’s logs caught Anthropic’s bots trying to access the site more than one hundred thousand times in the following months.

But this isn’t just about corporate squabbles; it directly involves user privacy. When you delete a post or a comment on Reddit, you expect it to be gone. Reddit has official licensing deals with other big AI players like Google and OpenAI, and those deals include technical measures to ensure that when a user deletes content, the AI company does too.

According to Reddit’s lawsuit, Anthropic has no such deal and has refused to enter one. This means that if Claude was trained on a post you later deleted, that content could still be baked into its knowledge base, effectively ignoring your choice to remove it. The lawsuit even includes a screenshot in which Claude itself admits it has no real way of knowing whether the Reddit data it was trained on was later deleted by a user.

So, what does Reddit want? It’s not just about money, although the company is asking for damages for things like increased server costs and lost licensing fees. It is asking the court for an injunction to force Anthropic to stop using any Reddit data immediately. Furthermore, Reddit wants to prohibit Anthropic from selling or licensing any product built using that data. In effect, it is asking a judge to take Claude off the market.

This case forces a tough question: does being “publicly available” on the internet mean content is free for any corporation to take and monetise? Reddit is arguing a firm “no”, and the outcome could change the rules for how AI is developed from here on out.

(Photo by Brett Jordan)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
  • AI enables shift from enablement to strategic leadership

CIOs and business leaders know they’re sitting on a goldmine of business data. And while traditional tools such as business intelligence platforms and statistical analysis software can effectively surface insights from the collated data resources, doing so quickly, in real time and at scale remains an unsolved challenge.

Enterprise AI, when deployed responsibly and at scale, can turn these bottlenecks into opportunities. Acting quickly on data, even ‘live’ (during a customer interaction, for example), is one of the technology’s abilities, as is scalability: AI can process large amounts of information from disparate sources almost as easily as it can summarize a one-page spreadsheet.

But deploying an AI solution in the modern enterprise isn’t simple. It takes structure, trust and the right talent. Along with the practical implementation challenges, using AI brings its own difficulties, such as data governance, the need to impose guardrails on AI responses and training data, and persistent staffing issues.

We met with Rani Radhakrishnan, PwC Principal, Technology Managed Services – AI, Data Analytics and Insights, to talk candidly about what’s working — and what’s holding back CIOs in their AI journey. We spoke ahead of her speaking engagement at TechEx AI & Big Data Expo North America, June 4 and 5, at the Santa Clara Convention Center.

Rani is especially attuned to the governance, data privacy and sovereignty issues that face enterprises, having spent many years of her career working with numerous clients in the health sector — an area where privacy, data oversight and above all data accuracy are make-or-break aspects of technology deployments.

“It’s not enough to just have a prompt engineer or a Python developer. … You still need the human in the loop to curate the right training data sets, review and address any bias in the outputs.” — Rani Radhakrishnan, PwC

From support to strategy: shifting expectations for AI

Rani said there’s growing enthusiasm from PwC’s clients for AI-powered managed services that can provide business insights in every sector, and for the technology to be used more proactively in so-called agentic roles, where autonomous AI agents can take action based on interactions with humans, access to data resources, and automation.

For example, PwC’s agent OS is a modular AI platform that connects systems and scales intelligent agents into workflows, many times faster than traditional computing methods. It’s an example of how PwC responds to the demand for AI from its clients, many of whom see the potential of the technology but lack the in-house expertise and staff to act on their needs.

Depending on the sector of the organization, interest in AI can come from many different places in the business: proactive monitoring of physical or digital systems; predictive maintenance in manufacturing or engineering; or cost efficiencies won by automation in complex, customer-facing environments, to name a few.

But regardless of where AI can bring value, most companies don’t yet have in-house the range of skills and people necessary for effective AI deployment — or at least, deployments that achieve ROI and don’t carry significant risk.

“It’s not enough to just have a prompt engineer or a Python developer,” Rani said. “You’ve got to put all of these together in a very structured manner, and you still need the human in the loop to curate the right training data sets, review and address any bias in the outputs.”

Cleaning house: the data challenge behind AI

Rani says that effective AI implementations need a mix of technical skills — data engineering, data science, prompt engineering — in combination with an organization’s domain expertise. Internal domain expertise can define the right outcomes, while technical staff cover responsible AI practices, like data collation and governance, and confirm that AI systems work responsibly and within company guidelines.

“In order to get the most value out of AI, an organization has to get the underlying data right,” she said. “I don’t know of a single company that says its data is in great shape … you’ve got to get it into the right structure and normalize it properly so you can query, analyze, and annotate it and identify emerging trends.”

Part of the work enterprises have to put in for effective AI use is watching for and correcting bias — both in the output of AI systems and in the analysis of potential bias inherent in training and operational data. It’s important that, as part of the underlying architecture of AI systems, teams apply stringent data sanitization, normalization, and data annotation processes. The latter requires “a lot of human effort,” Rani said, and the skilled personnel required are among the new breed of data professionals beginning to emerge.

If the data and personnel challenges can be overcome, the feedback loop makes the possible outcomes from generative AI really valuable, Rani said. “Now you have an opportunity with AI prompts to go back and refine the answer that you get. And that’s what makes it so unique and so valuable, because now you’re training the model to answer the questions the way you want them answered.”

For CIOs, the shift isn’t just about tech enablement. It’s about integrating AI into enterprise architecture, aligning with business strategy, and managing the governance risks that come with scale. CIOs are becoming AI stewards — architecting not just systems, but trust and transformation.

Conclusion

It’s only been a few years since AI emerged from its roots in academic computer science research, so it’s understandable that today’s enterprise organizations are, to a certain extent, feeling their way towards realizing AI’s potential. But a new playbook is emerging — one that helps CIOs access the value held in their data reserves, in business strategy, operational improvement, customer-facing experiences and a dozen more areas of the business.

As a company steeped in experience with clients large and small from all over the world, PwC is one of the leading choices that decision-makers turn to, to begin, rationalize, or direct their existing AI journeys.

Explore how PwC is helping CIOs embed AI into core operations, and see Rani’s latest insights at the June TechEx AI & Big Data Expo North America.
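Rani’s point about getting data “into the right structure” can be made concrete. The sketch below (purely illustrative — the source systems, field names, and schema are hypothetical, not drawn from PwC or any client) shows the kind of normalization step she describes: records arriving from two incompatible systems are mapped onto one common schema so they can be queried together.

```python
# Hypothetical sketch: normalizing records from two disparate sources
# ("crm" and "web" are invented names) into a common schema.
from datetime import datetime

def normalize_record(raw: dict, source: str) -> dict:
    """Map a source-specific record onto the shared schema."""
    if source == "crm":
        return {
            "customer_id": str(raw["CustID"]),
            "timestamp": datetime.fromisoformat(raw["Date"]),
            "value": float(raw["Amount"]),
        }
    if source == "web":
        return {
            "customer_id": str(raw["user"]),
            "timestamp": datetime.fromtimestamp(raw["ts"]),
            "value": float(raw["total"]),
        }
    raise ValueError(f"unknown source: {source}")

records = [
    normalize_record({"CustID": 42, "Date": "2025-01-15", "Amount": "19.99"}, "crm"),
    normalize_record({"user": "42", "ts": 1736899200, "total": 5.00}, "web"),
]

# Once normalized, records from both systems can be rolled up together.
per_customer: dict[str, float] = {}
for r in records:
    per_customer[r["customer_id"]] = per_customer.get(r["customer_id"], 0.0) + r["value"]

print(per_customer)  # both sources roll up under customer "42"
```

In practice this step is rarely this tidy — type coercion, deduplication, and the human-curated annotation Rani mentions all sit on top of it — but the principle is the same: one schema, queried one way, regardless of where the data originated.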
    #enables #shift #enablement #strategic #leadership
    AI enables shift from enablement to strategic leadership
    CIOs and business leaders know they’re sitting on a goldmine of business data. And while traditional tools such as business intelligence platforms and statistical analysis software can effectively surface insights from the collated data resources, doing so quickly, in real-time and at scale remains an unsolved challenge.Enterprise AI, when deployed responsibly and at scale, can turn these bottlenecks into opportunities. Acting quickly on data, even ‘live’, is one of the technology’s abilities, as is scalability: AI can process large amounts of information from disparate sources almost as easily as it can summarize a one-page spreadsheet.But deploying an AI solution in the modern enterprise isn’t simple. It takes structure, trust and the right talent. Along with the practical implementation challenges, using AI brings its own challenges, such as data governance, the need to impose guardrails on AI responses and training data, and persistent staffing issues.We met with Rani Radhakrishnan, PwC Principal, Technology Managed Services – AI, Data Analytics and Insights, to talk candidly about what’s working — and what’s holding back CIOs in their AI journey. We spoke ahead of her speaking engagement at TechEx AI & Big Data Expo North America, June 4 and 5, at the Santa Clara Convention Center.Rani is especially attuned to some of the governance, data privacy and sovereignty issues that face enterprises, having spent many years in her career working with numerous clients in the health sector — an area where issues like privacy, data oversight and above all data accuracy are make-or-break aspects of technology deployments.“It’s not enough to just have a prompt engineer or a Python developer. 
… You still need the human in the loop to curate the right training data sets, review and address any bias in the outputs.” —Rani Radhakrishnan, PwCFrom support to strategy: shifting expectations for AIRani said that there’s a growing enthusiasm from PwC’s clients for AI-powered managed services that can provide both business insights in every sector, and for the technology to be used more proactively, in so-called agentic roles where agents can independently act on data and user input; where autonomous AI agents can take action based on interactions with humans, access to data resources and automation.For example, PwC’s agent OS is a modular AI platform that connects systems and scales intelligent agents into workflows, many times faster than traditional computing methods. It’s an example of how PwC responds to the demand for AI from its clients, many of whom see the potential of this new technology, but lack the in-house expertise and staff to act on their needs.Depending on the sector of the organization, the interest in AI can come from many different places in the business. Proactive monitoring of physical or digital systems; predictive maintenance in manufacturing or engineering; or cost efficiencies won by automation in complex, customer-facing environments, are just a few examples.But regardless of where AI can bring value, most companies don’t yet have in-house the range of skills and people necessary for effective AI deployment — or at least, deployments that achieve ROI and don’t come with significant risk.“It’s not enough to just have a prompt engineer or a Python developer,” Rani said. 
“You’ve got to put all of these together in a very structured manner, and you still need the human in the loop to curate the right training data sets, review and address any bias in the outputs.”Cleaning house: the data challenge behind AIRani says that effective AI implementations need a mix of technical skills — data engineering, data science, prompt engineering — in combination with an organization’s domain expertise. Internal domain expertise can define the right outcomes, and technical staff can cover the responsible AI practices, like data collation and governance, and confirm that AI systems work responsibly and within company guidelines.“In order to get the most value out of AI, an organization has to get the underlying data right,” she said. “I don’t know of a single company that says its data is in great shape … you’ve got to get it into the right structure and normalize it properly so you can query, analyze, and annotate it and identify emerging trends.”Part of the work enterprises have to put in for effective AI use is the observation for and correction of bias — in both output of AI systems and in the analysis of potential bias inherent in training and operational data.It’s important that as part of the underlying architecture of AI systems, teams apply stringent data sanitization, normalization, and data annotation processes. The latter requires “a lot of human effort,” Rani said, and the skilled personnel required are among the new breed of data professionals that are beginning to emerge.If data and personnel challenges can be overcome, then the feedback loop makes the possible outcomes from generative AI really valuable, Rani said. “Now you have an opportunity with AI prompts to go back and refine the answer that you get. And that’s what makes it so unique and so valuable because now you’re training the model to answer the questions the way you want them answered.”For CIOs, the shift isn’t just about tech enablement. 
It’s about integrating AI into enterprise architecture, aligning with business strategy, and managing the governance risks that come with scale. CIOs are becoming AI stewards — architecting not just systems, but trust and transformation.ConclusionIt’s only been a few years since AI emerged from its roots in academic computer science research, so it’s understandable that today’s enterprise organizations are, to a certain extent, feeling their way towards realizing AI’s potential.But a new playbook is emerging — one that helps CIOs access the value held in their data reserves, in business strategy, operational improvement, customer-facing experiences and a dozen more areas of the business.As a company that’s steeped in experience with clients large and small from all over the world, PwC is one of the leading choices that decision-makers turn to, to begin or rationalize and direct their existing AI journeys.Explore how PwC is helping CIOs embed AI into core operations, and see Rani’s latest insights at the June TechEx AI & Big Data Expo North America. #enables #shift #enablement #strategic #leadership
    WWW.ARTIFICIALINTELLIGENCE-NEWS.COM
    AI enables shift from enablement to strategic leadership
CIOs and business leaders know they’re sitting on a goldmine of business data. And while traditional tools such as business intelligence platforms and statistical analysis software can effectively surface insights from collated data resources, doing so quickly, in real time, and at scale remains an unsolved challenge.

Enterprise AI, when deployed responsibly and at scale, can turn these bottlenecks into opportunities. Acting quickly on data, even ‘live’ (during a customer interaction, for example), is one of the technology’s abilities, as is scalability: AI can process large amounts of information from disparate sources almost as easily as it can summarize a one-page spreadsheet.

But deploying an AI solution in the modern enterprise isn’t simple. It takes structure, trust, and the right talent. Along with the practical implementation challenges, using AI brings its own challenges, such as data governance, the need to impose guardrails on AI responses and training data, and persistent staffing issues.

We met with Rani Radhakrishnan, PwC Principal, Technology Managed Services – AI, Data Analytics and Insights, to talk candidly about what’s working — and what’s holding back CIOs in their AI journey. We spoke ahead of her speaking engagement at TechEx AI & Big Data Expo North America, June 4 and 5, at the Santa Clara Convention Center.

Rani is especially attuned to the governance, data privacy, and sovereignty issues that face enterprises, having spent many years of her career working with numerous clients in the health sector — an area where privacy, data oversight, and above all data accuracy are make-or-break aspects of technology deployments.

“It’s not enough to just have a prompt engineer or a Python developer. … You still need the human in the loop to curate the right training data sets, review and address any bias in the outputs.” — Rani Radhakrishnan, PwC

From support to strategy: shifting expectations for AI

Rani said there’s growing enthusiasm among PwC’s clients both for AI-powered managed services that can deliver business insights in every sector, and for the technology to be used more proactively in so-called agentic roles, where autonomous AI agents can take action based on interactions with humans, access to data resources, and automation.

For example, PwC’s agent OS is a modular AI platform that connects systems and scales intelligent agents into workflows, many times faster than traditional computing methods. It’s an example of how PwC responds to the demand for AI from its clients, many of whom see the potential of this new technology but lack the in-house expertise and staff to act on their needs.

Depending on the sector of the organization, the interest in AI can come from many different places in the business. Proactive monitoring of physical or digital systems, predictive maintenance in manufacturing or engineering, or cost efficiencies won by automation in complex, customer-facing environments are just a few examples.

But regardless of where AI can bring value, most companies don’t yet have in-house the range of skills and people necessary for effective AI deployment — or at least, deployments that achieve ROI and don’t come with significant risk.

“It’s not enough to just have a prompt engineer or a Python developer,” Rani said. “You’ve got to put all of these together in a very structured manner, and you still need the human in the loop to curate the right training data sets, review and address any bias in the outputs.”

Cleaning house: the data challenge behind AI

Rani says that effective AI implementations need a mix of technical skills — data engineering, data science, prompt engineering — in combination with an organization’s domain expertise. Internal domain expertise can define the right outcomes, while technical staff cover responsible AI practices, like data collation and governance, and confirm that AI systems work responsibly and within company guidelines.

“In order to get the most value out of AI, an organization has to get the underlying data right,” she said. “I don’t know of a single company that says its data is in great shape … you’ve got to get it into the right structure and normalize it properly so you can query, analyze, and annotate it and identify emerging trends.”

Part of the work enterprises have to put in for effective AI use is watching for and correcting bias — both in the output of AI systems and in the analysis of potential bias inherent in training and operational data.

As part of the underlying architecture of AI systems, it’s important that teams apply stringent data sanitization, normalization, and data annotation processes. The latter requires “a lot of human effort,” Rani said, and the skilled personnel required are among the new breed of data professionals beginning to emerge.

If the data and personnel challenges can be overcome, the feedback loop makes the possible outcomes from generative AI really valuable, Rani said. “Now you have an opportunity with AI prompts to go back and refine the answer that you get. And that’s what makes it so unique and so valuable, because now you’re training the model to answer the questions the way you want them answered.”

For CIOs, the shift isn’t just about tech enablement. It’s about integrating AI into enterprise architecture, aligning with business strategy, and managing the governance risks that come with scale. CIOs are becoming AI stewards — architecting not just systems, but trust and transformation.

Conclusion

It’s only been a few years since AI emerged from its roots in academic computer science research, so it’s understandable that today’s enterprise organizations are, to a certain extent, feeling their way towards realizing AI’s potential.

But a new playbook is emerging — one that helps CIOs access the value held in their data reserves, in business strategy, operational improvement, customer-facing experiences, and a dozen more areas of the business.

As a company steeped in experience with clients large and small from all over the world, PwC is one of the leading choices that decision-makers turn to when beginning, or rationalizing and directing, their existing AI journeys.

Explore how PwC is helping CIOs embed AI into core operations, and see Rani’s latest insights at the June TechEx AI & Big Data Expo North America.

(Image source: “Network Rack” by one individual is licensed under CC BY-SA 2.0.)
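As a toy illustration of the “get the underlying data right” work Rani describes, the sketch below normalizes the same customer record arriving from two systems with different field names, date formats, and casing into one canonical row. All field names, formats, and rules here are hypothetical; real pipelines involve far messier data and, as Rani notes, human annotation effort.

```python
from datetime import datetime

# Hypothetical raw feeds: same customer, two source systems, inconsistent schemas.
RAW_RECORDS = [
    {"Customer": " Acme Corp ", "signup": "03/15/2024", "region": "EMEA"},
    {"customer_name": "acme corp", "signup_date": "2024-03-15", "Region": "emea"},
]

def parse_date(value: str) -> str:
    """Accept the date formats seen in the raw feeds; emit ISO 8601."""
    for fmt in ("%Y-%m-%d", "%m/%d/%Y"):
        try:
            return datetime.strptime(value, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date: {value!r}")

def normalize(record: dict) -> dict:
    """Map either source schema onto one canonical, queryable structure."""
    name = record.get("customer_name") or record.get("Customer") or ""
    date = record.get("signup_date") or record.get("signup") or ""
    region = record.get("region") or record.get("Region") or ""
    return {
        "customer_name": " ".join(name.split()).title(),  # trim + canonical case
        "signup_date": parse_date(date),
        "region": region.upper(),
    }

clean = [normalize(r) for r in RAW_RECORDS]
assert clean[0] == clean[1]  # both sources now agree on one canonical row
```

Only once records from every system agree on one schema can they be queried, analyzed, and annotated together, which is the precondition Rani sets for getting value out of AI.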
  • The modern ROI imperative: AI deployment, security and governance

Ahead of the TechEx North America event on June 4-5, we’ve been lucky enough to speak to Kieran Norton, Deloitte’s US Cyber AI & Automation leader, who will be one of the speakers at the conference on June 4th. Kieran’s 25+ years in the sector mean that as well as speaking authoritatively on all matters cybersecurity, his most recent roles include advising Deloitte clients on many issues around cybersecurity when using AI in business applications.

The majority of organisations have in place at least the bare minimum of cybersecurity, and thankfully, in most cases, operate a decently comprehensive raft of cybersecurity measures covering communications, data storage, and perimeter defences.

However, in the last couple of years, AI has changed the picture, both in terms of how companies can leverage the technology internally, and in how AI is used in cybersecurity – in advanced detection, and in the new ways the tech is used by bad actors.

Considered a relatively new area, AI, smart automation, data governance and security all inhabit a niche at present. But given the growing presence of AI in the enterprise, those niches are set to become mainstream issues: problems, solutions, and advice that will need to be observed in every organisation, sooner rather than later.

Governance and risk

Integrating AI into business processes isn’t solely about the technology and methods for its deployment. Internal processes will need to change to make best use of AI, and to better protect the business that’s using AI daily. Kieran draws a parallel to earlier changes made necessary by new technologies: “I would correlate [AI] with cloud adoption where it was a fairly significant shift. People understood the advantages of it and were moving in that direction, although sometimes it took them more time than others to get there.”

Those changes mean casting the net wide, encompassing updated governance frameworks, secure architectures, even a new generation of specialists to ensure AI and the data associated with it are used safely and responsibly. Companies actively using AI have to detect and correct bias, test for hallucinations, impose guardrails, manage where and by whom AI is used, and more. As Kieran puts it: “You probably weren’t doing a lot of testing for hallucination, bias, toxicity, data poisoning, model vulnerabilities, etc. That now has to be part of your process.”

These are big subjects, and for the fuller picture, we advocate that readers attend the two talks Kieran is to give at TechEx North America. He’ll be exploring both sides of the AI coin – issues around AI deployment for the business, and the methods companies can implement to deter and detect the new breed of AI-powered malware and attack vectors.

The right use-cases

Kieran advocates that companies start with smaller, lower-risk AI implementations. While some of the first sightings of AI ‘in the wild’ have been chatbots, he was quick to differentiate between a chatbot that can intelligently answer questions from customers, and agents, which can take action by triggering interactions with the apps and services the business operates. “So there’s a delineation […] chatbots have been one of the primary starting places […] As we get into agents and agentic, that changes the picture. It also changes the complexity and risk profile.”

Customer-facing agentic AI instances are indubitably higher risk, as a misstep can have significant effects on a brand. “That’s a higher risk scenario. Particularly if the agent is executing financial transactions or making determinations based on healthcare coverage […] that’s not the first use case you want to try.”

“If you plug 5, 6, 10, 50, a hundred agents together, you’re getting into a network of agency […] the interactions become quite complex and present different issues,” he said.

In some ways, the issues around automation and system-to-system interfaces have been around for close on a decade. Data silos and RPA (robotic process automation) challenges are the hurdles enterprises have been trying to jump for several years. “You still have to know where your data is, know what data you have, have access to it […] The fundamentals are still true.”

In the AI era, fundamental questions about infrastructure, data visibility, security, and sovereignty are arguably more relevant. Discussions about AI tend to circle around the same issues, which bears out Kieran’s point that a conversation about AI in the enterprise has to be wide-reaching and concern many of the operational and infrastructural underpinnings of the enterprise.

Kieran therefore emphasises practicality, and a grounded assessment of need and ability, before AI can gain a foothold. “If you understand the use case […] you should have a pretty good idea of the ROI […] and therefore whether or not it’s worth the pain and suffering to go through building it.”

At Deloitte, AI is being put to use where there is a clear use case with a measurable return: in the initial triaging of SOC (security operations centre) tickets, where the AI acts as a Level I incident analysis engine. “We know how many tickets get generated a day […] if we can take 60 to 80% of the time out of the triage process, then that has a significant impact.” Given the technology’s nascence, demarcating a specific area of operations where AI can be used acts as both prototype and proof of effectiveness.
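The Level I triage idea Kieran describes can be made concrete with a toy sketch: score incoming tickets from their severity and a few keyword signals, then hand the queue to a human analyst most-urgent first. The weights, terms, and `Ticket` fields below are invented for illustration; a real triage engine would rely on a trained model and live telemetry rather than fixed rules.

```python
from dataclasses import dataclass

# Illustrative severity weights and keyword signals -- not Deloitte's model.
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 10}
SUSPICIOUS_TERMS = {
    "ransomware": 5,
    "exfiltration": 4,
    "privilege escalation": 3,
    "failed login": 1,
}

@dataclass
class Ticket:
    ticket_id: str
    severity: str  # "low" | "medium" | "high" | "critical"
    summary: str

def triage_score(ticket: Ticket) -> int:
    """Score a ticket so a human analyst sees the riskiest ones first."""
    score = SEVERITY_WEIGHT.get(ticket.severity, 0)
    text = ticket.summary.lower()
    score += sum(w for term, w in SUSPICIOUS_TERMS.items() if term in text)
    return score

def prioritise(tickets: list) -> list:
    """Return tickets ordered most-urgent first for Level I review."""
    return sorted(tickets, key=triage_score, reverse=True)

queue = [
    Ticket("T-1", "low", "Single failed login from known IP"),
    Ticket("T-2", "high", "Possible ransomware beacon and data exfiltration"),
    Ticket("T-3", "medium", "Privilege escalation attempt on build server"),
]
for t in prioritise(queue):
    print(t.ticket_id, triage_score(t))
```

The time saving Kieran cites comes from exactly this kind of pre-sorting: the analyst starts from a ranked queue instead of reading every ticket in arrival order.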
The AI is not customer-facing, and there are highly qualified experts in their fields who can check and oversee the AI’s deliberations.

Conclusion

Kieran’s message for business professionals investigating AI uses for their organisations was not to build an AI risk assessment and management programme from scratch. Instead, companies should evolve existing systems, have a clear understanding of each use-case, and avoid the trap of building for theoretical value.

“You shouldn’t create another programme just for AI security on top of what you’re already doing […] you should be modernising your programme to address the nuances associated with AI workloads.” Success in AI starts with clear, realistic goals built on solid foundations.

You can read more about TechEx North America here and sign up to attend. Visit the Deloitte team at booth #153 and drop in on its sessions on June 4: ‘Securing the AI Stack’ on the AI & Big Data stage from 9:20am-9:50am, and ‘Leveraging AI in Cybersecurity for business transformation’ on the Cybersecurity stage, 10:20am – 10:50am.

Learn more about Deloitte’s solutions and service offerings for AI in business and cybersecurity or email the team at uscyberai@deloitte.com.
(Image source: “Symposium Cisco Ecole Polytechnique 9-10 April 2018 Artificial Intelligence & Cybersecurity” by Ecole polytechnique / Paris / France is licensed under CC BY-SA 2.0.)
  • Diabetes management: IBM and Roche use AI to forecast blood sugar levels

IBM and Roche are teaming up on an AI solution to a challenge faced by millions worldwide: the relentless daily grind of diabetes management. Their new brainchild, the Accu-Chek SmartGuide Predict app, provides AI-powered glucose forecasting capabilities to users.

The app doesn’t just track where your glucose levels are—it tells you where they’re heading. Imagine having a weather forecast, but for your blood sugar. That’s essentially what IBM and Roche are creating.

AI-powered diabetes management

The app works alongside Roche’s continuous glucose monitoring sensor, crunching the numbers in real-time to offer predictive insights that can help users stay ahead of potentially dangerous blood sugar swings.

What caught my eye were the three standout features that address very specific worries diabetics face. The “Glucose Predict” function visualises where your glucose might be heading over the next two hours—giving you that crucial window to make adjustments before things go south.

For those who live with the anxiety of hypoglycaemia, the “Low Glucose Predict” feature acts like an early warning system, flagging potential lows up to half an hour before they might occur. That’s enough time to take corrective action.

Perhaps most reassuring is the “Night Low Predict” feature, which estimates your risk of overnight hypoglycaemia—often the most frightening prospect for diabetes patients. Before tucking in for the night, the AI-powered diabetes management app gives you a heads-up about whether you might need that bedtime snack. This feature should bring peace of mind to countless households.

“By harnessing the power of AI-enabled predictive technology, Roche’s Accu-Chek SmartGuide Predict App can help empower people with diabetes to take proactive measures to manage their disease,” says Moritz Hartmann, Head of Roche Information Solutions.

How AI is speeding up diabetes research

It’s not just patients benefiting from this partnership. The companies have developed a rather clever research tool using IBM’s watsonx AI platform that’s transforming how clinical study data gets analysed.

Anyone who’s been involved in clinical research knows the mind-numbing tedium of manual data analysis. IBM and Roche’s tool does the heavy lifting—digitising, translating, and categorising all that anonymised clinical data, then connecting the dots between glucose monitoring data and participants’ daily activities.

The result? Researchers can spot meaningful patterns and correlations in a fraction of the time it would normally take. This behind-the-scenes innovation might do more to advance diabetes care and management in the long run than the app itself.

What makes this collaboration particularly interesting is how it brings together two different worlds: IBM’s computing prowess and AI know-how pairing up with Roche’s decades of healthcare and diabetes expertise.

“Our long-standing partnership with IBM underscores the potential of cross-industry innovation in addressing unmet healthcare needs and bringing significant advancements to patients faster,” says Hartmann. “Using cutting-edge technology such as AI and machine learning helps us to accelerate time to market and to improve therapy outcomes at the same time.”

Christian Keller, General Manager of IBM Switzerland, added: “The collaboration with Roche underlines the potential of AI when it’s implemented with a clear goal—assisting patients in managing their diabetes. With our technology and consulting expertise we can offer a trusted, customised, and secure technical environment that is essential to enable innovation in healthcare.”

What this means for the future of healthcare tech

Having covered healthcare tech for years, I’ve seen plenty of promising innovations fizzle out. However, this IBM-Roche partnership feels promising—perhaps because it’s addressing such a specific, well-defined problem with a thoughtful, targeted application of AI.

For the estimated 590 million people worldwide living with diabetes, the shift from reactive to predictive management could be game-changing. It’s not about replacing human judgment, but enhancing it with timely, actionable insights.

The app’s currently only available in Switzerland, which seems a sensible approach—test, refine, and perfect before wider deployment. Healthcare professionals will be keeping tabs on this Swiss rollout to see if it delivers on its promise.

If successful, this collaboration could serve as a blueprint for how tech giants and pharma companies might work together on other chronic conditions. Imagine similar predictive approaches for heart disease, asthma, or Parkinson’s.

For now, though, the focus is squarely on using AI to improve diabetes management and helping people sleep a little easier at night—quite literally, in the case of that clever nocturnal prediction feature. And honestly, that’s a worthwhile enough goal on its own.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.
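To make the “flag a low half an hour ahead” idea concrete, here is a deliberately simplified sketch: fit a least-squares linear trend to the last half-hour of CGM readings and extrapolate 30 minutes forward. This is not Roche’s algorithm; the linear trend, the 70 mg/dL alert threshold, and the 5-minute sampling interval are all assumptions made for illustration only.

```python
def predict_glucose(readings_mg_dl, minutes_ahead=30, interval_min=5):
    """Extrapolate a least-squares linear trend over recent CGM readings."""
    n = len(readings_mg_dl)
    xs = [i * interval_min for i in range(n)]  # minutes elapsed per sample
    mean_x = sum(xs) / n
    mean_y = sum(readings_mg_dl) / n
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings_mg_dl))
        / sum((x - mean_x) ** 2 for x in xs)
    )
    intercept = mean_y - slope * mean_x
    return intercept + slope * (xs[-1] + minutes_ahead)

def low_glucose_alert(readings_mg_dl, threshold=70):
    """Warn when the projected value ~30 minutes out falls below threshold."""
    return predict_glucose(readings_mg_dl) < threshold

# A steadily falling trend: 110 -> 80 mg/dL over the last 25 minutes.
recent = [110, 104, 98, 92, 86, 80]
print(round(predict_glucose(recent)))  # projected value 30 minutes ahead
print(low_glucose_alert(recent))       # early warning fires for this trend
```

A production system would, of course, use far richer models and clinically validated thresholds; the sketch only illustrates why even a short prediction horizon buys the user time to act.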
    #diabetes #management #ibm #roche #use
    Diabetes management: IBM and Roche use AI to forecast blood sugar levels
    IBM and Roche are teaming up on an AI solution to a challenge faced by millions worldwide: the relentless daily grind of diabetes management. Their new brainchild, the Accu-Chek SmartGuide Predict app, provides AI-powered glucose forecasting to users. The app doesn’t just track where your glucose levels are—it tells you where they’re heading. Imagine having a weather forecast, but for your blood sugar. That’s essentially what IBM and Roche are creating.

    AI-powered diabetes management

    The app works alongside Roche’s continuous glucose monitoring sensor, crunching the numbers in real-time to offer predictive insights that can help users stay ahead of potentially dangerous blood sugar swings.

    What caught my eye were the three standout features that address very specific worries diabetics face. The “Glucose Predict” function visualises where your glucose might be heading over the next two hours—giving you that crucial window to make adjustments before things go south.

    For those who live with the anxiety of hypoglycaemia (when blood sugar plummets to dangerous levels), the “Low Glucose Predict” feature acts like an early warning system, flagging potential lows up to half an hour before they might occur. That’s enough time to take corrective action.

    Perhaps most reassuring is the “Night Low Predict” feature, which estimates your risk of overnight hypoglycaemia—often the most frightening prospect for diabetes patients. Before tucking in for the night, the app gives you a heads-up about whether you might need that bedtime snack. This feature should bring peace of mind to countless households.

    “By harnessing the power of AI-enabled predictive technology, Roche’s Accu-Chek SmartGuide Predict App can help empower people with diabetes to take proactive measures to manage their disease,” says Moritz Hartmann, Head of Roche Information Solutions.

    How AI is speeding up diabetes research

    It’s not just patients benefiting from this partnership. The companies have developed a rather clever research tool using IBM’s watsonx AI platform that’s transforming how clinical study data gets analysed.

    Anyone who’s been involved in clinical research knows the mind-numbing tedium of manual data analysis. IBM and Roche’s tool does the heavy lifting—digitising, translating, and categorising all that anonymised clinical data, then connecting the dots between glucose monitoring data and participants’ daily activities.

    The result? Researchers can spot meaningful patterns and correlations in a fraction of the time it would normally take. This behind-the-scenes innovation might do more to advance diabetes care and management in the long run than the app itself.

    What makes this collaboration particularly interesting is how it brings together two different worlds. You’ve got IBM’s computing prowess and AI know-how pairing up with Roche’s decades of healthcare and diabetes expertise.

    “Our long-standing partnership with IBM underscores the potential of cross-industry innovation in addressing unmet healthcare needs and bringing significant advancements to patients faster,” says Hartmann. “Using cutting-edge technology such as AI and machine learning helps us to accelerate time to market and to improve therapy outcomes at the same time.”

    Christian Keller, General Manager of IBM Switzerland, added: “The collaboration with Roche underlines the potential of AI when it’s implemented with a clear goal—assisting patients in managing their diabetes. With our technology and consulting expertise we can offer a trusted, customised, and secure technical environment that is essential to enable innovation in healthcare.”

    What this means for the future of healthcare tech

    Having covered healthcare tech for years, I’ve seen plenty of promising innovations fizzle out. However, this IBM-Roche partnership feels promising—perhaps because it’s addressing such a specific, well-defined problem with a thoughtful, targeted application of AI.

    For the estimated 590 million people (or 1 in 9 of the adult population) worldwide living with diabetes, the shift from reactive to predictive management could be game-changing. It’s not about replacing human judgment, but enhancing it with timely, actionable insights.

    The app’s currently only available in Switzerland, which seems a sensible approach—test, refine, and perfect before wider deployment. Healthcare professionals will be keeping tabs on this Swiss rollout to see if it delivers on its promise.

    If successful, this collaboration could serve as a blueprint for how tech giants and pharma companies might work together on other chronic conditions. Imagine similar predictive approaches for heart disease, asthma, or Parkinson’s.

    For now, though, the focus is squarely on using AI to improve diabetes management and helping people sleep a little easier at night—quite literally, in the case of that clever nocturnal prediction feature. And honestly, that’s a worthwhile enough goal on its own.

    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.
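    Conceptually, the prediction features described above boil down to extrapolating a short glucose time series and checking the projection against a hypoglycaemia threshold. Here is a minimal, purely illustrative sketch of that idea using a naive linear trend; the function names and thresholds are hypothetical, and Roche’s actual models are proprietary and far more sophisticated.

```python
# Illustrative sketch only: a naive linear-trend forecast with a
# low-glucose early warning. Nothing here reflects the real
# Accu-Chek SmartGuide Predict models; all names and thresholds
# are hypothetical.

def forecast_glucose(readings_mg_dl, minutes_ahead, interval_min=5):
    """Extrapolate the recent trend from CGM readings (oldest first)."""
    if len(readings_mg_dl) < 2:
        raise ValueError("need at least two readings to estimate a trend")
    # Slope from the last two samples, in mg/dL per minute.
    slope = (readings_mg_dl[-1] - readings_mg_dl[-2]) / interval_min
    return readings_mg_dl[-1] + slope * minutes_ahead

def low_glucose_alert(readings_mg_dl, threshold=70, horizon_min=30):
    """Mimic a 'Low Glucose Predict'-style warning: flag if the
    extrapolated value crosses the threshold within 30 minutes."""
    return forecast_glucose(readings_mg_dl, horizon_min) < threshold

# A falling trend: 110 -> 100 mg/dL over 5 minutes (-2 mg/dL per minute).
readings = [120, 115, 110, 100]
print(round(forecast_glucose(readings, 30)))  # 100 - 2*30 = 40
print(low_glucose_alert(readings))            # True: projected below 70
```

    A real system would of course use learned models over far richer inputs (meals, insulin doses, activity), but the output contract is the same: a projected value and an actionable early warning.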
  • Telegram and xAI forge Grok AI deal

    Telegram has forged a deal with Elon Musk’s xAI to weave Grok AI into the fabric of the encrypted messaging platform.

    This isn’t just a friendly collaboration; xAI is putting serious money on the table – a cool $300 million, a mix of hard cash and equity. And for Telegram, they’ll pocket 50% of any subscription money Grok pulls in through their app.

    This leap into the world of AI couldn’t come at a more interesting time for Telegram. While CEO Pavel Durov is wrestling with some pretty serious legal headaches, and governments in certain corners of the globe are giving the platform the side-eye, the company’s bank balance is looking healthy.

    In fact, Telegram is gearing up to raise at least $1.5 billion by issuing five-year bonds. With a rather tempting 9% yield, these bonds are also designed to help buy back some of the debt from their 2021 bond issue. It seems big-name investors like BlackRock, Mubadala, and Citadel are still keen, suggesting they see a bright future for the messaging service.

    And the numbers do tell a story of a significant comeback. Cast your mind back to 2023, and Telegram was nursing a $173 million loss. Fast forward to 2024, and they’d flipped that on its head, banking a $540 million profit from $1.4 billion in revenue. They’re not stopping there either, with optimistic forecasts for 2025 pointing to profits north of $700 million from a $2 billion revenue pot.

    So, what will Grok actually do for Telegram users? The hope is that xAI’s conversational AI will bring a whole new layer of smarts to the platform. This includes supercharged information searching, help with drafting messages, and all sorts of automated tricks. It’s a play that could help Telegram unlock fresh monetisation opportunities and compete with Meta bringing Llama-powered smarts to WhatsApp.

    However, Telegram’s integration of AI is happening against a pretty dramatic backdrop. Pavel Durov, the man at the company’s helm, has found himself in hot water.

    Back in August 2024, Durov was arrested in France and later indicted on a dozen charges. These aren’t minor infringements either; they include serious accusations like complicity in spreading child exploitation material and drug trafficking, all linked to claims that Telegram wasn’t doing enough to police its content.

    Durov was initially stuck in France, but by March 2025 he was given the nod to leave the country, at least for a while. What happens next with these legal battles is anyone’s guess, but it’s a massive cloud hanging over the company.

    And it’s not just personal legal woes for Durov. Entire governments are starting to lose patience. Vietnam, for instance, has had its Ministry of Science and Technology order internet providers to pull the plug on Telegram. Their reasoning? They say the platform has become a hotbed for crime. Vietnamese officials reckon 68% of Telegram channels and groups in the country are up to no good, involved in everything from fraud to drug deals. Telegram, for its part, said it was taken aback by the move, insisting it had always tried to play ball with legal requests from Vietnam.

    Back to the xAI partnership: it’s a clear signal of Telegram looking to the future and seeing AI as a core pillar of it. The money involved and the promise of shared revenues show just how much potential both sides see in getting Grok into the hands of Telegram’s millions of users.

    The next twelve months will be a real test for Telegram. Can the company innovate its way forward while also showing it can be a responsible player on the global stage?
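    For a sense of scale, the deal and financing terms above reduce to some simple arithmetic. The sketch below uses the reported figures; the Grok subscription revenue number is a made-up example purely to show how the 50% split works, and the yield-to-coupon conversion is deliberately simplified.

```python
# Back-of-the-envelope arithmetic for the reported deal terms.
# The 50/50 Grok subscription split and the 9% bond yield come from
# the article; the subscription revenue figure is hypothetical.

investment = 300_000_000      # xAI's cash-and-equity commitment ($)
bond_raise = 1_500_000_000    # planned five-year bond issuance ($)
bond_yield = 0.09             # advertised yield

# Simplification: treat the yield as an annual coupon on face value.
annual_interest = bond_raise * bond_yield
print(f"${annual_interest / 1e6:.0f}M/year in interest")

# If Grok subscriptions brought in, say, $100M through Telegram's app,
# the reported 50% share would give Telegram:
grok_subscription_revenue = 100_000_000   # hypothetical
telegram_share = grok_subscription_revenue * 0.50
print(f"${telegram_share / 1e6:.0f}M to Telegram")
```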
  • Huawei Supernode 384 disrupts Nvidia’s AI market hold

    Huawei’s AI capabilities have made a breakthrough in the form of the company’s Supernode 384 architecture, marking an important moment in the global processor wars amid US-China tech tensions.

    The Chinese tech giant’s latest innovation emerged from last Friday’s Kunpeng Ascend Developer Conference in Shenzhen, where company executives demonstrated how the computing framework directly challenges Nvidia’s long-standing market dominance, even as Huawei continues to operate under severe US-led trade restrictions.

    Architectural innovation born from necessity

    Zhang Dixuan, president of Huawei’s Ascend computing business, articulated the fundamental problem driving the innovation during his conference keynote: “As the scale of parallel processing grows, cross-machine bandwidth in traditional server architectures has become a critical bottleneck for training.”

    The Supernode 384 abandons Von Neumann computing principles in favour of a peer-to-peer architecture engineered specifically for modern AI workloads. The change proves especially powerful for Mixture-of-Experts models (machine-learning systems using multiple specialised sub-networks to solve complex computational challenges).

    Huawei’s CloudMatrix 384 implementation showcases impressive technical specifications: 384 Ascend AI processors spanning 12 computing cabinets and four bus cabinets, generating 300 petaflops of raw computational power paired with 48 terabytes of high-bandwidth memory – a leap in integrated AI computing infrastructure.

    Performance metrics challenge industry leaders

    Real-world benchmark testing reveals the system’s competitive positioning against established solutions. Dense AI models like Meta’s LLaMA 3 achieved 132 tokens per second per card on the Supernode 384 – 2.5 times the performance of traditional cluster architectures.

    Communications-intensive applications demonstrate even more dramatic improvements. Models from Alibaba’s Qwen and DeepSeek families reached 600 to 750 tokens per second per card, revealing the architecture’s optimisation for next-generation AI workloads.

    The performance gains stem from fundamental infrastructure redesigns. Huawei replaced conventional Ethernet interconnects with high-speed bus connections, improving communications bandwidth by 15 times while reducing single-hop latency from 2 microseconds to 200 nanoseconds – a tenfold improvement.

    Geopolitical strategy drives technical innovation

    The Supernode 384’s development cannot be divorced from broader US-China technological competition. American sanctions have systematically restricted Huawei’s access to cutting-edge semiconductor technologies, forcing the company to maximise performance within existing constraints.

    Industry analysis from SemiAnalysis suggests the CloudMatrix 384 uses Huawei’s latest Ascend 910C AI processor, and acknowledges inherent performance limitations while highlighting architectural advantages: “Huawei is a generation behind in chips, but its scale-up solution is arguably a generation ahead of Nvidia and AMD’s current products in the market.”

    The assessment reveals how Huawei’s AI computing strategies have evolved beyond traditional hardware specifications toward system-level optimisation and architectural innovation.

    Market implications and deployment reality

    Beyond laboratory demonstrations, Huawei has operationalised CloudMatrix 384 systems in multiple Chinese data centres in Anhui Province, Inner Mongolia, and Guizhou Province. Such practical deployments validate the architecture’s viability and establish an infrastructure framework for broader market adoption.

    The system’s scalability potential – supporting tens of thousands of linked processors – positions it as a compelling platform for training increasingly sophisticated AI models. The capability addresses growing industry demands for massive-scale AI implementation in diverse sectors.

    Industry disruption and future considerations

    Huawei’s architectural breakthrough introduces both opportunities and complications for the global AI ecosystem. While providing a viable alternative to Nvidia’s market-leading solutions, it simultaneously accelerates the fragmentation of international technology infrastructure along geopolitical lines.

    The success of Huawei’s AI computing initiatives will depend on developer ecosystem adoption and sustained performance validation. The company’s aggressive developer conference outreach indicates a recognition that technical innovation alone cannot guarantee market acceptance.

    For organisations evaluating AI infrastructure investments, the Supernode 384 represents a new option that combines competitive performance with independence from US-controlled supply chains. However, long-term viability remains contingent on continued innovation cycles and improved geopolitical stability.

    See also: Oracle plans $40B Nvidia chip deal for AI facility in Texas
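    The headline CloudMatrix 384 figures above are internally consistent, as a few quick ratios show. This sketch only derives per-card averages and the latency improvement from the numbers reported in the article; it says nothing about real-world utilisation.

```python
# Quick sanity checks on the CloudMatrix 384 figures quoted above.
# All inputs come from the article; the derived values are simple ratios.

processors = 384
total_petaflops = 300
hbm_terabytes = 48

# Average compute and memory per Ascend card.
pf_per_card = total_petaflops / processors   # 0.78125 petaflops/card
tb_per_card = hbm_terabytes / processors     # 0.125 TB = 128 GB/card

# The claimed single-hop latency drop: 2 microseconds -> 200 nanoseconds.
latency_improvement = 2_000 / 200            # 10x, matching "tenfold"

print(pf_per_card, tb_per_card, latency_improvement)
```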
    #huawei #supernode #disrupts #nvidias #market
    Huawei Supernode 384 disrupts Nvidia’s AI market hold
    Huawei’s AI capabilities have made a breakthrough in the form of the company’s Supernode 384 architecture, marking an important moment in the global processor wars amid US-China tech tensions.The Chinese tech giant’s latest innovation emerged from last Friday’s Kunpeng Ascend Developer Conference in Shenzhen, where company executives demonstrated how the computing framework challenges Nvidia’s long-standing market dominance directly, as the company continues to operate under severe US-led trade restrictions.Architectural innovation born from necessityZhang Dixuan, president of Huawei’s Ascend computing business, articulated the fundamental problem driving the innovation during his conference keynote: “As the scale of parallel processing grows, cross-machine bandwidth in traditional server architectures has become a critical bottleneck for training.”The Supernode 384 abandons Von Neumann computing principles in favour of a peer-to-peer architecture engineered specifically for modern AI workloads. The change proves especially powerful for Mixture-of-Experts modelsHuawei’s CloudMatrix 384 implementation showcases impressive technical specifications: 384 Ascend AI processors spanning 12 computing cabinets and four bus cabinets, generating 300 petaflops of raw computational power paired with 48 terabytes of high-bandwidth memory, representing a leap in integrated AI computing infrastructure.Performance metrics challenge industry leadersReal-world benchmark testing reveals the system’s competitive positioning in comparison to established solutions. Dense AI models like Meta’s LLaMA 3 achieved 132 tokens per second per card on the Supernode 384 – delivering 2.5 times superior performance compared to traditional cluster architectures.Communications-intensive applications demonstrate even more dramatic improvements. 
Models from Alibaba’s Qwen and DeepSeek families reached 600 to 750 tokens per second per card, revealing the architecture’s optimisation for next-generation AI workloads.The performance gains stem from fundamental infrastructure redesigns. Huawei replaced conventional Ethernet interconnects with high-speed bus connections, improving communications bandwidth by 15 times while reducing single-hop latency from 2 microseconds to 200 nanoseconds – a tenfold improvement.Geopolitical strategy drives technical innovationThe Supernode 384’s development cannot be divorced from broader US-China technological competition. American sanctions have systematically restricted Huawei’s access to cutting-edge semiconductor technologies, forcing the company to maximise performance within existing constraints.Industry analysis from SemiAnalysis suggests the CloudMatrix 384 uses Huawei’s latest Ascend 910C AI processor, which acknowledges inherent performance limitations but highlights architectural advantages: “Huawei is a generation behind in chips, but its scale-up solution is arguably a generation ahead of Nvidia and AMD’s current products in the market.”The assessment reveals how Huawei AI computing strategies have evolved beyond traditional hardware specifications toward system-level optimisation and architectural innovation.Market implications and deployment realityBeyond laboratory demonstrations, Huawei has operationalised CloudMatrix 384 systems in multiple Chinese data centres in Anhui Province, Inner Mongolia, and Guizhou Province. Such practical deployments validate the architecture’s viability and establishes an infrastructure framework for broader market adoption.The system’s scalability potential – supporting tens of thousands of linked processors – positions it as a compelling platform for training increasingly sophisticated AI models. 
The capability addresses growing industry demands for massive-scale AI implementation in diverse sectors.Industry disruption and future considerationsHuawei’s architectural breakthrough introduces both opportunities and complications for the global AI ecosystem. While providing viable alternatives to Nvidia’s market-leading solutions, it simultaneously accelerates the fragmentation of international technology infrastructure along geopolitical lines.The success of Huawei AI computing initiatives will depend on developer ecosystem adoption and sustained performance validation. The company’s aggressive developer conference outreach indicated a recognition that technical innovation alone cannot guarantee market acceptance.For organisations evaluating AI infrastructure investments, the Supernode 384 represents a new option that combines competitive performance with independence from US-controlled supply chains. However, long-term viability remains contingent on continued innovation cycles and improved geopolitical stability.See also: Oracle plans B Nvidia chip deal for AI facility in TexasWant to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.Explore other upcoming enterprise technology events and webinars powered by TechForge here. #huawei #supernode #disrupts #nvidias #market
    WWW.ARTIFICIALINTELLIGENCE-NEWS.COM
    Huawei Supernode 384 disrupts Nvidia’s AI market hold
    Huawei’s AI capabilities have made a breakthrough in the form of the company’s Supernode 384 architecture, marking an important moment in the global processor wars amid US-China tech tensions.The Chinese tech giant’s latest innovation emerged from last Friday’s Kunpeng Ascend Developer Conference in Shenzhen, where company executives demonstrated how the computing framework challenges Nvidia’s long-standing market dominance directly, as the company continues to operate under severe US-led trade restrictions.Architectural innovation born from necessityZhang Dixuan, president of Huawei’s Ascend computing business, articulated the fundamental problem driving the innovation during his conference keynote: “As the scale of parallel processing grows, cross-machine bandwidth in traditional server architectures has become a critical bottleneck for training.”The Supernode 384 abandons Von Neumann computing principles in favour of a peer-to-peer architecture engineered specifically for modern AI workloads. The change proves especially powerful for Mixture-of-Experts models (machine-learning systems using multiple specialised sub-networks to solve complex computational challenges.)Huawei’s CloudMatrix 384 implementation showcases impressive technical specifications: 384 Ascend AI processors spanning 12 computing cabinets and four bus cabinets, generating 300 petaflops of raw computational power paired with 48 terabytes of high-bandwidth memory, representing a leap in integrated AI computing infrastructure.Performance metrics challenge industry leadersReal-world benchmark testing reveals the system’s competitive positioning in comparison to established solutions. Dense AI models like Meta’s LLaMA 3 achieved 132 tokens per second per card on the Supernode 384 – delivering 2.5 times superior performance compared to traditional cluster architectures.Communications-intensive applications demonstrate even more dramatic improvements. 
Models from Alibaba’s Qwen and DeepSeek families reached 600 to 750 tokens per second per card, revealing the architecture’s optimisation for next-generation AI workloads.

The performance gains stem from fundamental infrastructure redesigns. Huawei replaced conventional Ethernet interconnects with high-speed bus connections, improving communications bandwidth by 15 times while reducing single-hop latency from 2 microseconds to 200 nanoseconds – a tenfold improvement.

Geopolitical strategy drives technical innovation

The Supernode 384’s development cannot be divorced from the broader US-China technological competition. American sanctions have systematically restricted Huawei’s access to cutting-edge semiconductor technologies, forcing the company to maximise performance within existing constraints.

Industry analysis from SemiAnalysis suggests the CloudMatrix 384 uses Huawei’s latest Ascend 910C AI processor. The firm acknowledges the chip’s inherent performance limitations but highlights the system’s architectural advantages: “Huawei is a generation behind in chips, but its scale-up solution is arguably a generation ahead of Nvidia and AMD’s current products in the market.”

The assessment reveals how Huawei’s AI computing strategy has evolved beyond traditional hardware specifications toward system-level optimisation and architectural innovation.

Market implications and deployment reality

Beyond laboratory demonstrations, Huawei has operationalised CloudMatrix 384 systems in multiple Chinese data centres in Anhui Province, Inner Mongolia, and Guizhou Province. Such practical deployments validate the architecture’s viability and establish an infrastructure framework for broader market adoption.

The system’s scalability potential – supporting tens of thousands of linked processors – positions it as a compelling platform for training increasingly sophisticated AI models.
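As a quick sanity check, the headline figures above can be worked backwards. The per-card and baseline numbers below are derived from the stated totals and ratios (assuming even distribution across cards); Huawei has not published them separately:

```python
# Back-of-the-envelope figures derived from the reported CloudMatrix 384 specs,
# assuming totals are spread evenly across all processors (an assumption).
TOTAL_PETAFLOPS = 300
TOTAL_HBM_TB = 48
NUM_PROCESSORS = 384

petaflops_per_card = TOTAL_PETAFLOPS / NUM_PROCESSORS   # ~0.78 PFLOPS
hbm_gb_per_card = TOTAL_HBM_TB * 1024 / NUM_PROCESSORS  # 128 GB

# Implied baseline from the 2.5x LLaMA 3 speedup claim.
supernode_tokens_per_s = 132
baseline_tokens_per_s = supernode_tokens_per_s / 2.5    # ~52.8 tokens/s/card

# Latency improvement: 2 microseconds down to 200 nanoseconds.
latency_improvement = 2_000 / 200                       # 10x

print(f"{petaflops_per_card:.2f} PFLOPS, {hbm_gb_per_card:.0f} GB HBM per card")
print(f"implied cluster baseline: {baseline_tokens_per_s:.1f} tokens/s/card")
print(f"single-hop latency improvement: {latency_improvement:.0f}x")
```

The arithmetic confirms the claims are internally consistent: the 2 µs to 200 ns change is indeed a tenfold latency improvement.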
The capability addresses growing industry demands for massive-scale AI implementation in diverse sectors.

Industry disruption and future considerations

Huawei’s architectural breakthrough introduces both opportunities and complications for the global AI ecosystem. While providing viable alternatives to Nvidia’s market-leading solutions, it simultaneously accelerates the fragmentation of international technology infrastructure along geopolitical lines.

The success of Huawei’s AI computing initiatives will depend on developer ecosystem adoption and sustained performance validation. The company’s aggressive developer conference outreach indicates a recognition that technical innovation alone cannot guarantee market acceptance.

For organisations evaluating AI infrastructure investments, the Supernode 384 represents a new option that combines competitive performance with independence from US-controlled supply chains. However, long-term viability remains contingent on continued innovation cycles and improved geopolitical stability.

(Image from Pixabay)

See also: Oracle plans $40B Nvidia chip deal for AI facility in Texas

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
  • DeepSeek’s latest AI model a ‘big step backwards’ for free speech

DeepSeek’s latest AI model, R1 0528, has raised eyebrows for a further regression on free speech and what users can discuss. “A big step backwards for free speech,” is how one prominent AI researcher summed it up.

AI researcher and popular online commentator ‘xlr8harder’ put the model through its paces, sharing findings that suggest DeepSeek is increasing its content restrictions.

“DeepSeek R1 0528 is substantially less permissive on contentious free speech topics than previous DeepSeek releases,” the researcher noted. What remains unclear is whether this represents a deliberate shift in philosophy or simply a different technical approach to AI safety.

What’s particularly fascinating about the new model is how inconsistently it applies its moral boundaries.

In one free speech test, when asked to present arguments supporting dissident internment camps, the AI model flatly refused. But, in its refusal, it specifically mentioned China’s Xinjiang internment camps as examples of human rights abuses.

Yet, when directly questioned about these same Xinjiang camps, the model suddenly delivered heavily censored responses. It seems this AI knows about certain controversial topics but has been instructed to play dumb when asked directly.

“It’s interesting though not entirely surprising that it’s able to come up with the camps as an example of human rights abuses, but denies when asked directly,” the researcher observed.

China criticism? Computer says no

This pattern becomes even more pronounced when examining the model’s handling of questions about the Chinese government.

Using established question sets designed to evaluate free speech in AI responses to politically sensitive topics, the researcher discovered that R1 0528 is “the most censored DeepSeek model yet for criticism of the Chinese government.”

Where previous DeepSeek models might have offered measured responses to questions about Chinese politics or human rights issues, this new iteration frequently refuses to engage at all – a worrying development for those who value AI systems that can discuss global affairs openly.

There is, however, a silver lining to this cloud. Unlike closed systems from larger companies, DeepSeek’s models remain open-source with permissive licensing.

“The model is open source with a permissive license, so the community can (and will) address this,” noted the researcher. This accessibility means the door remains open for developers to create versions that better balance safety with openness.

The situation reveals something quite sinister about how these systems are built: they can know about controversial events while being programmed to pretend they don’t, depending on how you phrase your question.

As AI continues its march into our daily lives, finding the right balance between reasonable safeguards and open discourse becomes increasingly crucial. Too restrictive, and these systems become useless for discussing important but divisive topics. Too permissive, and they risk enabling harmful content.

DeepSeek hasn’t publicly addressed the reasoning behind these increased restrictions and regression in free speech, but the AI community is already working on modifications. For now, chalk this up as another chapter in the ongoing tug-of-war between safety and openness in artificial intelligence.

(Photo by John Cameron)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
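The researcher has not published the exact harness behind these numbers, but a refusal-rate measurement of the kind described can be sketched roughly as below. The prompt set, the `query_model` callable, and the refusal markers are all illustrative placeholders, not xlr8harder’s actual methodology:

```python
# Rough sketch of a refusal-rate evaluation over politically sensitive prompts.
# query_model is a stand-in for however the model under test is called
# (local inference, an API, etc.); prompts and markers are illustrative only.

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm unable", "i won't")

def looks_like_refusal(response: str) -> bool:
    """Crude keyword check; real evaluations often use a classifier instead."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def refusal_rate(prompts, query_model) -> float:
    """Fraction of prompts the model refuses to engage with."""
    refusals = sum(looks_like_refusal(query_model(p)) for p in prompts)
    return refusals / len(prompts)

# Example with a fake model that refuses half the prompts:
fake_prompts = ["q1", "q2", "q3", "q4"]
fake_model = lambda p: "I cannot discuss that." if p in ("q2", "q4") else "Sure: ..."
print(refusal_rate(fake_prompts, fake_model))  # 0.5
```

Running the same prompt set against successive model releases and comparing the rates is what allows claims like “the most censored DeepSeek model yet” to be made quantitatively.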
  • Salesforce to buy Informatica in $8B deal

Salesforce has agreed to acquire data management firm Informatica in a deal valued at around $8 billion. This includes equity value, minus Salesforce’s existing investment in the company. Informatica shareholders will receive $25 in cash per share.

    The move aims to help Salesforce build a stronger foundation for AI tools that can act on their own, often called agentic AI. Informatica’s software is known for helping businesses collect, manage, and organise large sets of data – the kind of support Salesforce needs to improve its AI systems’ work in different business applications.

The deal brings together tools for organising and cleaning data (like Master Data Management and data integration) with Salesforce’s cloud platform. The idea is to make sure any AI features running on Salesforce have access to organised and secure data.

    For companies using AI in daily operations, having the right data isn’t enough. They also need to know where that data came from, how it has been changed, and whether it can be trusted. That’s where Informatica’s tools come in with benefits such as:

    Transparency: Informatica can show how data flows through systems, helping companies meet audit or regulatory needs.

    Context: By combining Informatica’s metadata with Salesforce’s data models, AI agents will better understand how to connect the dots in business systems.

    Governance: Features like data quality controls and policy settings help make sure AI systems rely on clean and consistent data.
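To make the lineage idea concrete, here is a minimal sketch of the kind of provenance record an AI agent might consult before trusting a value. The field names and threshold are hypothetical illustrations, not Informatica’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class LineageRecord:
    """Illustrative provenance metadata for a single data field.
    Field names are hypothetical, not Informatica's real schema."""
    source_system: str                                    # where the value originated
    transformations: list = field(default_factory=list)   # steps applied en route
    quality_score: float = 0.0                            # 0.0-1.0 from quality checks
    approved_for_ai: bool = False                         # governance policy flag

def usable_by_agent(record: LineageRecord, min_quality: float = 0.8) -> bool:
    """An agent checks provenance and governance flags before acting on a value."""
    return record.approved_for_ai and record.quality_score >= min_quality

rec = LineageRecord("crm_prod", ["dedupe", "normalise_country"], 0.93, True)
print(usable_by_agent(rec))  # True
```

The point of the pattern is that the agent’s decision depends not just on the data value itself but on where it came from, what was done to it, and whether policy allows its use.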

    Salesforce CEO Marc Benioff said the acquisition supports the company’s goal of building safe and responsible AI for business use. “We’re excited to acquire Informatica … Together, we’ll supercharge Agentforce, Data Cloud, Tableau, MuleSoft, and Customer 360,” Benioff said.

    Informatica CEO Amit Walia said joining Salesforce will help more businesses make better use of their data.

    How this helps Salesforce’s data products

    Informatica’s cloud tools will plug directly into Salesforce’s core products:

Data Cloud: Informatica will help ensure data collected is trustworthy and ready to use – not just gathered in one place.

    Agentforce: AI agents should be able to make smarter decisions with cleaner data and better understanding of business context.

Customer 360: Salesforce CRM tools will gain higher-quality data inputs, helping sales and support teams.

    MuleSoft: With Informatica’s data quality and governance tools, the data passing through MuleSoft APIs should be more reliable.

    Tableau: Users of Tableau will benefit from more detailed information, as the data behind the dashboards should be better organised and easier to understand.

    Steve Fisher, President and CTO at Salesforce, explained the value: “Imagine an AI agent that goes beyond simply seeing data points to understand their full context – origin, transformation, quality, and governance.”

    Salesforce plans to bring Informatica’s technology into its existing systems quickly after the deal closes. This includes integrating data quality, governance, and MDM features into Agentforce and Data Cloud.

    The company also said it will continue to support Informatica’s current strategy to build AI-driven data tools for use in different cloud environments.

    Informatica acquisition aligns with Salesforce’s strategy

    Salesforce executives described the acquisition as part of a long-term plan.

Robin Washington, President and CFO, said the company targets deals like this one when it sees a clear fit for customers and a solid financial return. “We’re laser-focused on accelerated execution,” she said, pointing to sectors like government, healthcare, and finance, where the combined tools could have the most impact.

    Informatica’s chairman Bruce Chizen said the deal shows how long-term investment strategies can pay off. He credited private equity backers Permira and CPP Investments for their role in guiding the company toward this outcome.

    Salesforce also said it plans to invest in Informatica’s partner network and apply its own sales and marketing muscle to grow Informatica’s cloud business further.

    Deal terms and next steps

    The boards of both companies have approved the transaction. Shareholders representing about 63% of Informatica’s voting shares have signed off and no further votes are needed. The deal is expected to close early in Salesforce’s 2027 fiscal year, pending regulatory approval and other conditions.

Salesforce will pay for the deal using a mix of cash and new debt. The company expects the deal to add to its non-GAAP earnings, margin, and cash flow starting in the second year after closing. It does not plan to change its shareholder return plans as a result of the acquisition.

See also: Oracle plans $40B Nvidia chip deal for AI facility in Texas

    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

    Explore other upcoming enterprise technology events and webinars powered by TechForge here.
The post Salesforce to buy Informatica in $8B deal appeared first on AI News.
  • Odyssey’s AI model transforms video into interactive worlds

London-based AI lab Odyssey has launched a research preview of a model that transforms video into interactive worlds. While initially focusing on world models for film and game production, the Odyssey team has potentially stumbled onto a completely new entertainment medium.

The interactive video generated by Odyssey’s AI model responds to inputs in real time. You can interact with it using your keyboard, phone, controller, or eventually even voice commands. The folks at Odyssey are billing it as an “early version of the Holodeck.”

The underlying AI can generate realistic-looking video frames every 40 milliseconds. That means when you press a button or make a gesture, the video responds almost instantly – creating the illusion that you’re actually influencing this digital world.

“The experience today feels like exploring a glitchy dream – raw, unstable, but undeniably new,” according to Odyssey. We’re not talking about polished, AAA-game quality visuals here, at least not yet.

Not your standard video tech

Let’s get a bit technical for a moment. What makes this AI-generated interactive video tech different from, say, a standard video game or CGI? It all comes down to something Odyssey calls a “world model.”

Unlike traditional video models that generate entire clips in one go, world models work frame-by-frame to predict what should come next based on the current state and any user inputs. It’s similar to how large language models predict the next word in a sequence, but far more complex because we’re talking about high-resolution video frames rather than words.

“A world model is, at its core, an action-conditioned dynamics model,” as Odyssey puts it. Each time you interact, the model takes the current state, your action, and the history of what’s happened, then generates the next video frame accordingly.

The result is something that feels more organic and unpredictable than a traditional game.
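In code, that action-conditioned, frame-by-frame loop looks roughly like the sketch below. The model interface, the action source, and the frame-pacing details are assumptions based on Odyssey’s public description, not their actual implementation:

```python
import time

FRAME_BUDGET_S = 0.040  # ~40 ms per frame, matching Odyssey's stated target

def run_interactive_session(model, get_user_action, render, num_frames=100):
    """Autoregressive world-model loop: each frame is predicted from the
    current state, the latest user action, and the history so far."""
    state = model.initial_state()
    history = []
    for _ in range(num_frames):
        start = time.monotonic()
        action = get_user_action()                        # keyboard / controller / etc.
        frame, state = model.predict_next(state, action, history)
        history.append((action, frame))
        render(frame)
        # Sleep off any leftover budget so frames pace at ~25 fps.
        elapsed = time.monotonic() - start
        if elapsed < FRAME_BUDGET_S:
            time.sleep(FRAME_BUDGET_S - elapsed)
```

The contrast with a game engine is visible in the loop body: there is no event table mapping inputs to outcomes, only a single learned `predict_next` call.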
There’s no pre-programmed logic saying “if a player does X, then Y happens”—instead, the AI is making its best guess at what should happen next based on what it’s learned from watching countless videos.Odyssey tackles historic challenges with AI-generated videoBuilding something like this isn’t exactly a walk in the park. One of the biggest hurdles with AI-generated interactive video is keeping it stable over time. When you’re generating each frame based on previous ones, small errors can compound quicklyTo tackle this, Odyssey has used what they term a “narrow distribution model”—essentially pre-training their AI on general video footage, then fine-tuning it on a smaller set of environments. This trade-off means less variety but better stability so everything doesn’t become a bizarre mess.The company says they’re already making “fast progress” on their next-gen model, which apparently shows “a richer range of pixels, dynamics, and actions.”Running all this fancy AI tech in real-time isn’t cheap. Currently, the infrastructure powering this experience costs between £0.80-£1.60per user-hour, relying on clusters of H100 GPUs scattered across the US and EU.That might sound expensive for streaming video, but it’s remarkably cheap compared to producing traditional game or film content. And Odyssey expects these costs to tumble further as models become more efficient.Interactive video: The next storytelling medium?Throughout history, new technologies have given birth to new forms of storytelling—from cave paintings to books, photography, radio, film, and video games. Odyssey believes AI-generated interactive video is the next step in this evolution.If they’re right, we might be looking at the prototype of something that will transform entertainment, education, advertising, and more. 
Imagine training videos where you can practice the skills being taught, or travel experiences where you can explore destinations from your sofa.The research preview available now is obviously just a small step towards this vision and more of a proof of concept than a finished product. However, it’s an intriguing glimpse at what might be possible when AI-generated worlds become interactive playgrounds rather than just passive experiences.You can give the research preview a try here.See also: Telegram and xAI forge Grok AI dealWant to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.Explore other upcoming enterprise technology events and webinars powered by TechForge here.
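The action-conditioned, frame-by-frame loop described in the article can be sketched in a few lines. To be clear, this is a hypothetical toy, not Odyssey's published code: the real system is a learned neural network generating a frame every 40 ms, whereas `predict_next_frame` below is a deterministic stand-in so the loop is runnable. What it illustrates is the shape of the process—each new frame is a function of the current state, the user's action, and a bounded history, and each output is fed back in as input (which is exactly why small errors can compound, or "drift").

```python
from collections import deque


def predict_next_frame(state, action, history):
    # Stand-in for the learned dynamics model: a toy deterministic
    # update so the loop actually runs. A real world model would be
    # a neural network producing a high-resolution video frame here.
    return (state + action + sum(history)) % 256


def run_world_model(initial_state, actions, history_len=4):
    """Autoregressive loop: each frame depends on state, action, history."""
    state = initial_state
    history = deque(maxlen=history_len)  # bounded context window
    frames = []
    for action in actions:
        state = predict_next_frame(state, action, history)
        history.append(state)  # the model's own output becomes future input
        frames.append(state)
    return frames


# At one frame every 40 ms, 25 iterations of this loop correspond
# to one second of interactive video.
print(run_world_model(initial_state=7, actions=[1, 0, 2, 0, 1]))
# → [8, 16, 42, 108, 27]
```

Because every step consumes the previous outputs, any error introduced early on is recycled into all later predictions—the instability Odyssey's "narrow distribution model" fine-tuning is meant to contain.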
  • UK deploys AI to boost Arctic security amid growing threats

    The UK is deploying AI to keep a watchful eye on Arctic security threats from hostile states amid growing geopolitical tensions. This will be underscored by Foreign Secretary David Lammy during his visit to the region, which kicks off today.

    The deployment is seen as a signal of the UK’s commitment to leveraging technology to navigate an increasingly complex global security landscape. For Britain, what unfolds in the territories of two of its closest Arctic neighbours – Norway and Iceland – has direct and profound implications.

    The national security of the UK is linked to stability in the High North. The once remote and frozen expanse is changing, and with it, the security calculus for the UK.

    Foreign Secretary David Lammy said: “The Arctic is becoming an increasingly important frontier for geopolitical competition and trade, and a key flank for European and UK security.

    “We cannot bolster the UK’s defence and deliver the Plan for Change without greater security in the Arctic. This is a region where Russia’s shadow fleet operates, threatening critical infrastructure like undersea cables to the UK and Europe, and helping fund Russia’s aggressive activity.”

    British and Norwegian naval vessels conduct vital joint patrols in the Arctic. These missions are at the sharp end of efforts to detect, deter, and manage the increasing subsea threats that loom over vital energy supplies, national infrastructure, and broader regional security. Russia’s Northern Fleet, in particular, presents a persistent challenge in these icy waters.

    This high-level engagement follows closely on the heels of the Prime Minister’s visit to Norway earlier this month for a Joint Expeditionary Force meeting, where further support for Ukraine was a key talking point with allies from the Baltic and Scandinavian states.

    During the Icelandic stop of his tour, Lammy will unveil a UK-Iceland tech partnership to boost Arctic security. The new scheme is designed to harness AI technologies for monitoring hostile activity across the vast and challenging region. It’s a forward-looking strategy, acknowledging that as the Arctic opens up, so too do the opportunities for those who might seek to exploit its vulnerabilities.

    As global temperatures climb and the ancient ice caps continue their retreat, previously impassable shipping routes are emerging. This is not just a matter for climate scientists; it’s redrawing geopolitical maps. The Arctic is fast becoming an arena of increased competition, with nations eyeing newly accessible reserves of gas, oil, and precious minerals. Unsurprisingly, this scramble for resources is cranking up security concerns.

    Adding another layer of complexity, areas near the Arctic are being actively used by Russia’s fleet of nuclear-powered icebreakers. Putin’s vessels are crucial to his “High North” strategy, carving paths for tankers that, in turn, help to bankroll his illegal war in Ukraine.

    Such operations cast a long shadow, threatening not only maritime security but also the delicate Arctic environment. Reports suggest Putin has been forced to rely on “dodgy and decaying vessels,” which frequently suffer breakdowns and increase the risk of devastating oil spills.

    The UK’s defence partnership with Norway is deeply rooted, with British troops undertaking vital Arctic training in the country for over half a century. This enduring collaboration is now being elevated through an agreement to fortify the security of both nations.

    “It’s more important than ever that we work with our allies in the High North, like Norway and Iceland, to enhance our ability to patrol and protect these waters,” added Lammy. “That’s why we have today announced new UK funding to work more closely with Iceland, using AI to bolster our ability to monitor and detect hostile state activity in the Arctic.”

    Throughout his Arctic tour, the Foreign Secretary will be emphasising the UK’s role in securing NATO’s northern flank. This includes the often unseen but hugely significant task of protecting the region’s critical undersea infrastructure – the cables and pipelines that are the lifelines for stable energy supplies and telecoms for the UK and much of Europe.

    These targeted Arctic security initiatives are part of a broader enhancement of the UK’s overall defence posture. Earlier this year, the Prime Minister announced the most significant sustained increase in defence spending since the Cold War. This will see UK defence expenditure climb to 2.5% of GDP by April 2027, with a clear ambition to reach 3% in the next Parliament, contingent on economic and fiscal conditions.

    The significance of maritime security and the Arctic is also recognised in the UK’s ambitious new Security and Defence Partnership with the EU, agreed last week. The pact commits both sides to closer collaboration to make Europe a safer place.

    In today’s interconnected world, security, climate action, and international collaboration are inextricably linked. The turn to AI isn’t just a tech upgrade; it’s a strategic necessity.

    (Photo by Annie Spratt)

    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

    Explore other upcoming enterprise technology events and webinars powered by TechForge here.
  • Details leak of Jony Ive’s ambitious OpenAI device

    After what felt like an age of tech industry tea-leaf reading, OpenAI has officially snapped up “io,” the much-buzzed-about startup building an AI device from former Apple design guru Jony Ive and OpenAI chief Sam Altman. The price tag reportedly runs into the billions.

    OpenAI put out a video this week talking about the Ive and Altman venture in a general sort of way, but now a few more tidbits about what they’re actually cooking have slipped out.

    And what are they planning with all that cash and brainpower? Well, the eagle-eyed folks at The Washington Post spotted an internal chat between Sam Altman and OpenAI staff where he set a target of shipping 100 million AI “companions.”

    Altman allegedly even told his team the OpenAI device is “the chance to do the biggest thing we’ve ever done as a company here.”

    To be clear, Altman has set that 100 million number as an eventual target. “We’re not going to ship 100 million devices literally on day one,” he said. But then, in a flex that’s pure Silicon Valley, he added they’d hit that 100 million mark “faster than any company has ever shipped 100 million of something new before.”

    So, what is this mysterious “companion”? The gadget is designed to be entirely aware of a user’s surroundings, and even their “life.” While they’ve mostly talked about a single device, Altman did let slip it might be more of a “family of devices.”

    Jony Ive, as expected, dubbed it “a new design movement.” You can almost hear the minimalist manifesto being drafted.

    Why the full-blown acquisition, though? Weren’t they just going to partner up? Originally, yes. The plan was for Ive’s startup to cook up the hardware and sell it, with OpenAI delivering the brains. But it seems the vision got bigger. This isn’t just another accessory, you see.

    Altman stressed the device will be a “central facet of using OpenAI.” He even said: “We both got excited about the idea that, if you subscribed to ChatGPT, we should just mail you new computers, and you should use those.”

    Frankly, they reckon our current tech – our trusty laptops, the websites we browse – just isn’t up to snuff for the kind of AI experiences they’re dreaming of. Altman was pretty blunt, saying current use of AI “is not the sci-fi dream of what AI could do to enable you in all the ways that I think the models are capable of.”

    So, we know it’s not a smartphone. Altman has also put the kibosh on it being a pair of glasses. And Jony Ive is apparently not rushing to make another wearable, which makes sense given his design ethos.

    The good news for the impatient among us is that this isn’t just vapourware. Ive’s team has an actual prototype, and Altman has even taken one home to “live with it”. As for when we might get our hands on one? Altman is reportedly aiming for a late 2026 release.

    Naturally, OpenAI is keeping the actual device under wraps, but you can always count on supply chain whispers for a few clues. The ever-reliable Apple supply chain analyst Ming-Chi Kuo has thrown a few alleged design details into the ring via social media.

    Kuo reckons it’ll be “slightly larger” than the Humane AI Pin, but that it will look “as compact and elegant as an iPod Shuffle.” And yes, like the Shuffle, Kuo says no screen. According to Kuo, the device will chat with your phone and computer instead, using good old-fashioned microphones for your voice and cameras to see what’s going on around you. Interestingly, he suggests it’ll be worn around the neck, necklace-style, rather than clipped on like the AI Pin.

    Kuo’s crystal ball points to mass production in 2027, but he wisely adds a pinch of salt, noting the final look and feel could still change.

    So, the billion-dollar question remains: will this OpenAI device be the next big thing, the game-changer we’ve been waiting for? Or will it be another noble-but-failed attempt to break free from the smartphone’s iron grip, joining the likes of the AI Pin in the ‘great ideas that didn’t quite make it’ pile?

    Altman, for one, is brimming with confidence. Having lived with the prototype, he’s gone on record saying he believes it will be “the coolest piece of technology that the world will have ever seen.”

    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

    Explore other upcoming enterprise technology events and webinars powered by TechForge here.
  • Anthropic Claude 4: A new era for intelligent agents and AI coding

    Anthropic has unveiled its latest Claude 4 model family, and it’s looking like a leap for anyone building next-gen AI assistants or coding tools. The stars of the show are Claude Opus 4, the new powerhouse, and Claude Sonnet 4, designed to be a smart all-rounder.

Anthropic isn’t shy about its ambitions, stating these models are geared to “advance our customers’ AI strategies across the board.” They’re positioning Opus 4 as the tool to “push boundaries in coding, research, writing, and scientific discovery,” while Sonnet 4 is billed as an “instant upgrade from Sonnet 3.7,” ready to bring “frontier performance to everyday use cases.”

Claude Opus 4: The new coding champ

When Anthropic calls Claude Opus 4 its “most powerful model yet and the best coding model in the world,” you sit up and take notice. And they’ve got the numbers to back it up, with Opus 4 topping the charts on crucial industry tests, hitting 72.5% on SWE-bench and 43.2% on Terminal-bench.

But it’s not just about quick sprints. Opus 4 is built for the long haul, designed for “sustained performance on long-running tasks that require focused effort and thousands of steps.” Imagine an AI that can “work continuously for several hours” – that’s what Anthropic claims. This should be a massive step up from previous Sonnet models and could expand what AI agents can achieve, tackling problems that require real persistence.

Claude Sonnet 4: For daily AI and agentic work

While Opus 4 is the heavyweight champion, Claude Sonnet 4 is shaping up to be the versatile workhorse, promising a significant boost for a huge range of applications. Early feedback from those who’ve had a sneak peek is glowing.

For instance, GitHub says Claude Sonnet 4 “soars in agentic scenarios” and is so impressed that it plans to introduce it as the base model for the new coding agent in GitHub Copilot. That’s a hefty endorsement. Manus is also impressed, highlighting its “improvements in following complex instructions, clear reasoning, and aesthetic outputs.”

The positive vibes continue with iGent, which reports Sonnet 4 “excels at autonomous multi-feature app development, as well as substantially improved problem-solving and codebase navigation – reducing navigation errors from 20% to near zero.” That’s a game-changer for development workflows. Sourcegraph is equally optimistic, seeing the model as a “substantial leap in software development – staying on track longer, understanding problems more deeply, and providing more elegant code quality.” Augment Code has seen “higher success rates, more surgical code edits, and more careful work through complex tasks,” making Sonnet 4 its top choice for a primary model.

Hybrid modes and developer delights

One of the really clever bits about the Claude 4 family is its hybrid nature. Both Opus 4 and Sonnet 4 can operate in two gears: one for those near-instant replies we often need, and another that allows for “extended thinking for deeper reasoning.” This deeper thinking mode is part of the Pro, Max, Team, and Enterprise Claude plans. Good news for everyone, though – Sonnet 4, complete with this extended thinking, will also be available to free users, which is a fantastic move for making top-tier AI more accessible.

Anthropic is also rolling out some tasty new tools for developers on its API, clearly aiming to supercharge the creation of more sophisticated AI agents:

Code execution tool: This lets models actually run code, opening up all sorts of possibilities for interactive and problem-solving applications.
MCP connector: Introduced by Anthropic, MCP standardises context exchange between AI assistants and software environments.
Files API: This will make it much easier for AI to work directly with files, which is a big deal for many real-world tasks.
Prompt caching: Developers will be able to cache prompts for up to an hour. This might sound small, but it can make a real difference to speed and efficiency, especially for frequently used queries.

Leading the pack in real-world performance

Anthropic is keen to emphasise that its “Claude 4 models lead on SWE-bench Verified, a benchmark for performance on real software engineering tasks.” Beyond coding, it stresses that these models “deliver strong performance across coding, reasoning, multimodal capabilities, and agentic tasks.”

Despite the leaps in capability, Anthropic is holding the line on pricing. Claude Opus 4 will set you back $15 per million input tokens and $75 per million output tokens. Claude Sonnet 4, the more accessible option, is priced at $3 per million input tokens and $15 per million output tokens. This consistency will be welcomed by existing users.

Both Claude Opus 4 and Sonnet 4 are ready to go via the Anthropic API, and they’re also popping up on Amazon Bedrock and Google Cloud’s Vertex AI. This broad availability means businesses and developers worldwide can start experimenting and integrating these new tools fairly easily. Anthropic is clearly doubling down on making AI more capable, particularly in the complex realms of coding and autonomous agent behaviour. With these new models and developer tools, the potential for innovation just got a serious boost.

(Image credit: Anthropic)

See also: Details leak of Jony Ive’s ambitious OpenAI device

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.
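For anyone budgeting API usage, the per-token arithmetic is simple enough to sketch. Here is a minimal Python helper, assuming Anthropic’s published rates of $15/$75 per million input/output tokens for Opus 4 and $3/$15 for Sonnet 4; the dictionary keys and example token counts are illustrative, not official API model identifiers:

```python
# Rough cost estimate for a single Claude 4 API call, based on the published
# per-million-token rates (Opus 4: $15 in / $75 out; Sonnet 4: $3 in / $15 out).

PRICES = {  # USD per million tokens: (input, output)
    "claude-opus-4": (15.0, 75.0),
    "claude-sonnet-4": (3.0, 15.0),
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Example: a 2,000-token prompt with a 500-token reply.
print(round(call_cost("claude-opus-4", 2000, 500), 4))    # 0.0675
print(round(call_cost("claude-sonnet-4", 2000, 500), 4))  # 0.0135
```

At these rates the same request costs roughly five times more on Opus 4 than on Sonnet 4, which is presumably why Anthropic pitches Sonnet 4 for everyday use and Opus 4 for the long-running, heavyweight tasks.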
  • Why the Middle East is a hot place for global tech investments

    The Middle East is pulling in more attention from global tech investors than ever. Saudi Arabia, the UAE, and Qatar are rolling out billions of dollars in deals, working with top US companies, and building the kind of infrastructure needed to run large-scale AI systems. It’s not just about the money: there are new laws, startup activity, and plans for growth that are turning heads in Silicon Valley and beyond.

Strategic deals anchor US tech ties

US President Donald Trump recently visited the region and announced investment agreements worth more than a trillion dollars. These included major partnerships between Gulf states and American firms in artificial intelligence, cloud services, and defence tech.

The UAE said it would build one of the world’s largest AI campuses in Abu Dhabi. At the same time, Saudi Arabia launched an AI company called Humain. Backed by the Public Investment Fund, the firm has already formed deals with Nvidia and AMD to bring in thousands of chips for local use. The idea is to run and train AI models inside the kingdom, cutting the need to rely on overseas services.

These deals aren’t one-off events. They point to deeper ties between the Gulf and US tech companies. Gulf leaders want to localise AI development, while US companies see the region as a growing market for cloud, data, and chips. This growing alignment offers both sides an edge in a global race where speed and access matter.

Gulf states scale up AI infrastructure

AI systems need strong computing power. That means data centres, chips, and networks that can handle constant, heavy demand. Countries like Saudi Arabia and the UAE are putting their money behind this need.

Saudi Arabia’s Humain is planning to deploy over 18,000 Nvidia chips, some of the most advanced on the market. These will power training clusters that let researchers and firms build new models at home. The UAE, through partnerships with Amazon and OpenAI, is also expanding its local data capacity. One campus in Abu Dhabi will include large-scale AI labs and supercomputers.

Running powerful AI models close to home offers more than speed. It helps with data control, lowers costs, and reduces delays. Governments in the region are aware that long-term control over AI infrastructure will play a major role in future national development and influence. These projects are part of each country’s national tech strategy: Saudi Arabia’s Vision 2030 includes tech among its focus areas, and the UAE’s AI strategy aims to make it one of the top AI-ready countries within the next five years.

Startups are finding momentum

Investment isn’t only flowing to big infrastructure. In April 2025, MENA startups raised more than double what they raised in March, with fintech and B2B platforms leading the charge. Thndr, a Cairo-based investment platform, raised a new funding round to expand into Saudi Arabia and the UAE. These countries have growing retail investor bases and are looking for tools that make trading and saving more accessible.

The Gulf’s young, tech-savvy population and high mobile use make it an ideal testbed for startups. At the same time, government-backed funds are investing in early-stage companies to help grow local talent and reduce dependence on imported services. Governments are also creating more startup-friendly zones: free economic zones in the UAE and planned innovation hubs in Saudi Arabia offer tax benefits and simplified licensing for tech ventures. Investors say that regulatory support is improving, and founders now have clearer paths to launch and scale.

Cloud and data centre expansion gathers pace

Cloud service demand is rising across the Middle East. Smart city projects, e-government platforms, and AI applications are driving the need for secure, local data storage and processing. Oracle has pledged billions of dollars to expand its cloud footprint in Saudi Arabia, and Google, AWS, and Microsoft are also investing in regional data hubs. These centres will support everything from banking to logistics.

Building out cloud services is key to keeping data local and speeding up online services. It also lowers costs for local firms, which no longer need to rely on foreign servers. The result is a growing tech sector that has the tools to serve customers in real time. Large-scale data operations also open the door for more regional SaaS companies: with cloud capacity in place, local developers can create enterprise tools, AI services, and e-commerce platforms tailored to local needs.

Policy reforms drive diversification

Behind these tech moves are changes in policy. Governments are cutting red tape, easing rules for foreign ownership, and offering tax breaks for tech investors. The aim is to reduce the region’s reliance on oil and build a broader economic base.

Saudi Arabia’s Vision 2030 includes goals for digital infrastructure, education, and innovation. The UAE’s AI strategy is tied to its push to attract top researchers and engineers. These are not just plans on paper; they’re being matched with funding, laws, and global partnerships.

There is also a cultural shift underway. Tech is being taught in schools, and universities are opening AI-focused programs. This is helping to build a future workforce that can support local companies and attract international firms. More investors are noting the predictability and speed of doing business, which is especially important for tech startups that need fast feedback and steady support to grow. When rules are clear and approvals are quick, companies are more likely to stay.

Balancing growth and geopolitical interests

With more tech investment comes more attention. The US sees the region as a way to grow its global tech influence, especially as ties with China remain tense. For Middle Eastern nations, working with US companies gives them access to know-how and supply chains that would take years to build from scratch.

At the same time, there are concerns about who controls the tech, where data is stored, and how it’s used. Some countries are pushing for data rules that favour local storage. Others want to develop their own large language models and keep training data inside national borders. Some regional leaders are starting to speak more openly about digital independence: they want to be buyers, yes, but also builders. That means investing in chips, software, and talent that can support homegrown tech. A few years ago, that seemed far off. Now, with the right backing, it’s starting to look within reach. Navigating these issues will shape the next phase of tech growth in the Middle East. Governments want to move fast but also retain control over key parts of their digital economy.

The Middle East’s role in global tech is shifting. It’s no longer just a market for new gadgets or services. It’s becoming a centre for infrastructure, AI training, startup growth, and cloud services. Countries in the region are investing with a clear goal: to build long-term strength in a sector that shapes how business, education, and even government will work in the years ahead. If current trends continue, the Middle East won’t just be receiving tech. It will be helping shape it.

See also: Saudi Arabia moves to build its AI future with HUMAIN and NVIDIA

Want to learn more about cybersecurity and the cloud from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Digital Transformation Week, IoT Tech Expo, Blockchain Expo, and AI & Big Data Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.
    #why #middle #east #hot #place
    Why the Middle East is a hot place for global tech investments
    The Middle East is pulling in more attention from global tech investors than ever. Saudi Arabia, the UAE, and Qatar are rolling out billions of dollars in deals, working with top US companies, and building the kind of infrastructure needed to run large-scale AI systems.It’s not just about the money. There are new laws, startup activity, and plans for growth that are turning heads in Silicon Valley and beyond.Strategic deals anchor US tech tiesUS President Donald Trump recently visited the region and announced more than trillion in investment agreements. These included major partnerships between Gulf states and American firms in artificial intelligence, cloud services, and defence tech.The UAE said it would build one of the world’s largest AI campuses in Abu Dhabi. At the same time, Saudi Arabia launched an AI company called Humain. Backed by the Public Investment Fund, the firm has already formed deals with Nvidia and AMD to bring in thousands of chips for local use. The idea is to run and train AI models inside the kingdom, cutting the need to rely on overseas services.These deals aren’t one-off events. They point to deeper ties between the Gulf and US tech companies. Gulf leaders want to localise AI development, but US companies see the region as a growing market for cloud, data, and chips. This growing alignment offers both sides an edge in a global race where speed and access matter.Gulf states scale up AI infrastructureAI systems need strong computing power. That means data centres, chips, and networks that can handle constant, heavy demand. Countries like Saudi Arabia and the UAE are putting their money behind this need.Saudi Arabia’s Humain is planning to deploy over 18,000 Nvidia chips, some of the most advanced in the market. These will power training clusters that let researchers and firms build new models at home. The UAE, through partnerships with Amazon and OpenAI, is also expanding its local data capacity. 
One campus in Abu Dhabi will include large-scale AI labs and supercomputers.Running powerful AI models close to home offers more than speed. It helps with data control, lowers costs, and reduces delays. Governments in the region are aware that long-term control over AI infrastructure will play a major role in future national development and influence.These projects are part of each country’s national tech strategy. Saudi Arabia’s Vision 2030 includes tech among its focus areas. The UAE’s AI strategy aims to be one of the top AI-ready countries in the next five years.Startups are finding momentumInvestment isn’t only flowing to big infrastructure. April 2025 saw MENA startups raise million. That’s more than double what they raised in March. Fintech and B2B platforms are leading the charge.Thndr, a Cairo-based investment platform, raised million to expand into Saudi Arabia and the UAE. These countries have growing retail investor bases and are looking for tools that make trading and saving more accessible.The Gulf’s young, tech-savvy population and high mobile use make it an ideal testbed for startups. At the same time, government-backed funds are investing in early-stage companies to help grow local talent and reduce dependence on imported services.Governments are also creating more startup-friendly zones. Free economic zones in the UAE and planned innovation hubs in Saudi Arabia offer tax benefits and simplified licensing for tech ventures. Investors say that regulatory support is improving, and founders now have clearer paths to launch and scale.Cloud and data centre expansion gathers paceCloud service demand is rising across the Middle East. Smart city projects, e-government platforms, and AI applications are driving the need for secure, local data storage and processing.Oracle has pledged billion to expand its cloud footprint in Saudi Arabia. Google, AWS, and Microsoft are also investing in regional data hubs. 
These centres will support everything from banking to logistics.Building out cloud services is key to keeping data local and speeding up online services. It also lowers costs for local firms, which no longer need to rely on foreign servers. The result is a growing tech sector that has the tools to serve customers in real time.Large-scale data operations also open the door for more regional SaaS companies. With cloud capacity in place, local developers can create enterprise tools, AI services, and e-commerce platforms tailored to local needs.Policy reforms drive diversificationBehind these tech moves are changes in policy. Governments are cutting red tape, easing rules for foreign ownership, and offering tax breaks for tech investors. The aim is to reduce the region’s reliance on oil and build a broader economic base.Saudi Arabia’s Vision 2030 includes goals for digital infrastructure, education, and innovation. The UAE’s AI strategy is tied to its push to attract top researchers and engineers. These are not just plans on paper. They’re being matched with funding, laws, and global partnerships.There is also a cultural shift underway. Tech is being taught in schools, and universities are opening AI-focused programs. This is helping to build a future workforce that can support local companies and attract international firms.More investors are noting the predictability and speed of doing business. This is especially important for tech startups that need fast feedback and steady support to grow. When rules are clear and approvals are quick, companies are more likely to stay.Balancing growth and geopolitical interestsWith more tech investment comes more attention. The US sees the region as a way to grow its global tech influence, especially as ties with China remain tense. 
For Middle Eastern nations, working with US companies gives them access to know-how and supply chains that would take years to build from scratch.At the same time, there are concerns about who controls the tech, where data is stored, and how it’s used. Some countries are pushing for data rules that favour local storage. Others want to develop their own large language models and keep training data inside national borders.Some regional leaders are starting to speak more openly about digital independence. They want to be buyers, yes, but also builders. That means investing in chips, software, and talent that can support homegrown tech. A few years ago, that seemed far off. Now, with the right backing, it’s starting to look within reach.Navigating these issues will shape the next phase of tech growth in the Middle East. Governments want to move fast but also retain control over key parts of their digital economy.The Middle East’s role in global tech is shifting. It’s no longer just a market for new gadgets or services. It’s becoming a centre for infrastructure, AI training, startup growth, and cloud services. Countries in the region are investing with a clear goal: to build long-term strength in a sector that shapes how business, education, and even government will work in the years ahead.If current trends continue, the Middle East won’t just be receiving tech. It will be helping shape it.See also: Saudi Arabia moves to build its AI future with HUMAIN and NVIDIAWant to learn more about cybersecurity and the cloud from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Digital Transformation Week, IoT Tech Expo, Blockchain Expo, and AI & Big Data Expo.Explore other upcoming enterprise technology events and webinars powered by TechForge here. #why #middle #east #hot #place
    WWW.ARTIFICIALINTELLIGENCE-NEWS.COM
    Why the Middle East is a hot place for global tech investments
    The Middle East is pulling in more attention from global tech investors than ever. Saudi Arabia, the UAE, and Qatar are rolling out billions of dollars in deals, working with top US companies, and building the kind of infrastructure needed to run large-scale AI systems.

It’s not just about the money. New laws, startup activity, and plans for growth are turning heads in Silicon Valley and beyond.

Strategic deals anchor US tech ties

US President Donald Trump recently visited the region and announced more than $2 trillion in investment agreements. These included major partnerships between Gulf states and American firms in artificial intelligence, cloud services, and defence tech.

The UAE said it would build one of the world’s largest AI campuses in Abu Dhabi. At the same time, Saudi Arabia launched an AI company called Humain. Backed by the Public Investment Fund, the firm has already formed deals with Nvidia and AMD to bring in thousands of chips for local use. The idea is to run and train AI models inside the kingdom, cutting the need to rely on overseas services.

These deals aren’t one-off events. They point to deeper ties between the Gulf and US tech companies. Gulf leaders want to localise AI development, while US companies see the region as a growing market for cloud, data, and chips. This alignment offers both sides an edge in a global race where speed and access matter.

Gulf states scale up AI infrastructure

AI systems need strong computing power: data centres, chips, and networks that can handle constant, heavy demand. Countries like Saudi Arabia and the UAE are putting their money behind this need.

Saudi Arabia’s Humain plans to deploy over 18,000 Nvidia chips, some of the most advanced on the market. These will power training clusters that let researchers and firms build new models at home. The UAE, through partnerships with Amazon and OpenAI, is also expanding its local data capacity. One campus in Abu Dhabi will include large-scale AI labs and supercomputers.

Running powerful AI models close to home offers more than speed. It helps with data control, lowers costs, and reduces delays. Governments in the region are aware that long-term control over AI infrastructure will play a major role in future national development and influence.

These projects are part of each country’s national tech strategy. Saudi Arabia’s Vision 2030 includes tech among its focus areas. The UAE’s AI strategy aims to make the country one of the top AI-ready nations within the next five years.

Startups are finding momentum

Investment isn’t only flowing to big infrastructure. In April 2025, MENA startups raised $228.4 million, more than double what they raised in March. Fintech and B2B platforms are leading the charge.

Thndr, a Cairo-based investment platform, raised $15.7 million to expand into Saudi Arabia and the UAE. These countries have growing retail investor bases and are looking for tools that make trading and saving more accessible.

The Gulf’s young, tech-savvy population and high mobile use make it an ideal testbed for startups. At the same time, government-backed funds are investing in early-stage companies to help grow local talent and reduce dependence on imported services.

Governments are also creating more startup-friendly zones. Free economic zones in the UAE and planned innovation hubs in Saudi Arabia offer tax benefits and simplified licensing for tech ventures. Investors say regulatory support is improving, and founders now have clearer paths to launch and scale.

Cloud and data centre expansion gathers pace

Cloud service demand is rising across the Middle East. Smart city projects, e-government platforms, and AI applications are driving the need for secure, local data storage and processing.

Oracle has pledged $14 billion to expand its cloud footprint in Saudi Arabia. Google, AWS, and Microsoft are also investing in regional data hubs. These centres will support everything from banking to logistics.

Building out cloud services is key to keeping data local and speeding up online services. It also lowers costs for local firms, which no longer need to rely on foreign servers. The result is a growing tech sector with the tools to serve customers in real time.

Large-scale data operations also open the door for more regional SaaS companies. With cloud capacity in place, local developers can create enterprise tools, AI services, and e-commerce platforms tailored to local needs.

Policy reforms drive diversification

Behind these tech moves are changes in policy. Governments are cutting red tape, easing rules for foreign ownership, and offering tax breaks for tech investors. The aim is to reduce the region’s reliance on oil and build a broader economic base.

Saudi Arabia’s Vision 2030 includes goals for digital infrastructure, education, and innovation. The UAE’s AI strategy is tied to its push to attract top researchers and engineers. These are not just plans on paper; they’re being matched with funding, laws, and global partnerships.

There is also a cultural shift underway. Tech is being taught in schools, and universities are opening AI-focused programs. This is helping to build a future workforce that can support local companies and attract international firms.

More investors are noting the predictability and speed of doing business. This is especially important for tech startups that need fast feedback and steady support to grow. When rules are clear and approvals are quick, companies are more likely to stay.

Balancing growth and geopolitical interests

With more tech investment comes more attention. The US sees the region as a way to grow its global tech influence, especially as ties with China remain tense. For Middle Eastern nations, working with US companies gives them access to know-how and supply chains that would take years to build from scratch.

At the same time, there are concerns about who controls the tech, where data is stored, and how it’s used. Some countries are pushing for data rules that favour local storage. Others want to develop their own large language models and keep training data inside national borders.

Some regional leaders are starting to speak more openly about digital independence. They want to be buyers, yes, but also builders. That means investing in chips, software, and talent that can support homegrown tech. A few years ago, that seemed far off. Now, with the right backing, it’s starting to look within reach.

Navigating these issues will shape the next phase of tech growth in the Middle East. Governments want to move fast but also retain control over key parts of their digital economy.

The Middle East’s role in global tech is shifting. It’s no longer just a market for new gadgets or services. It’s becoming a centre for infrastructure, AI training, startup growth, and cloud services. Countries in the region are investing with a clear goal: to build long-term strength in a sector that shapes how business, education, and even government will work in the years ahead.

If current trends continue, the Middle East won’t just be receiving tech. It will be helping shape it.

(Photo by Unsplash)

See also: Saudi Arabia moves to build its AI future with HUMAIN and NVIDIA

Want to learn more about cybersecurity and the cloud from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Digital Transformation Week, IoT Tech Expo, Blockchain Expo, and AI & Big Data Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
    Linux Foundation: Slash costs, boost growth with open-source AI
    The Linux Foundation and Meta are putting some numbers behind how open-source AI (OSAI) is driving innovation and adoption.The adoption of AI tools is pretty much everywhere now, with 94% of organisations surveyed already using them. And get this: within that crowd, 89% are tapping into open-source AI for some part of their tech backbone.A paper released this week by Meta and the Linux Foundation stitches together academic brainpower, industry frontline stories, and global survey data to showcase an ecosystem that’s buzzing thanks to being open and affordable.If there’s one thing that jumps off the page, it’s the money talk. Cost savings, folks, are a huge deal here. Unsurprisingly, two-thirds of businesses are saying that open source AI is just plain cheaper to get up and running compared to proprietary. So, it’s no shocker that almost half of them point to these savings as a big reason for going the open-source route.We’re not talking about trimming a few coins here and there. Researchers reckon companies would be shelling out 3.5 times more cash if open-source software simply vanished. As AI digs its heels deeper into everything we do, the financial muscle of open-source is only going to get stronger, potentially even overshadowing traditional open-source software’s impact.But this isn’t just about pinching pennies; it’s about unleashing brains. The report points out that AI can slash business unit costs by over 50%, which, as you can imagine, opens the door for revenue boosts. When open AI models are out there for cheap, or even free, it levels the playing field. Suddenly, developers and businesses of all sizes can jump in, play around, and rethink how they do things.Often it’s the smaller players, the agile startups and medium-sized businesses, that are diving headfirst into open-source AI more so than the big corporate giants. 
And since these are often the places where groundbreaking ideas and new products are born, it really hammers home how vital OSAI is for keeping the innovation engine chugging and helping those plucky, cutting-edge firms compete.And if you want a textbook example of how going open can turbocharge things, look no further than PyTorch. The report digs into how Meta’s decision to shift its heavyweight deep learning framework to an open governance model, under a non-profit, turned out to be a masterstroke.The report leans on a close look by Yue and Nagle (2024), who tracked what happened next. Once PyTorch flew the Meta nest, contributions from Meta itself “significantly decreased.” Sounds a bit off, right? But actually, it signalled a healthy move away from one company calling the shots.What really ramped up was input from “external companies, especially from the developers of complementary technology, such as chip manufacturers.” Meanwhile, the actual users, the developers building stuff with PyTorch, kept their engagement steady – “no change.”It’s a clear win. As the researchers put it, this kind of shift for major OSAI software “promotes broader participation and increased contributions and decreases the dominance of any single company.” It’s a powerful testament to what report authors Anna Hermansen and Cailean Osborne found: “engagement in open, collaborative activities is a better indicator of innovation than patents.”This isn’t just theory; it’s making waves in massive sectors. Take manufacturing. Open-source AI is set to be a game-changer there, mostly because its open code means you can bend it and shape it to fit. This flexibility allows AI to slot neatly into factory workflows, automating tasks and smoothing out order management. A 2023 McKinsey report, flagged in the study, even predicts AI could pump up to $290 billion extra into advanced manufacturing.Then there’s healthcare. 
In places like hospitals and local clinics, where every penny and every minute counts, free and flexible tools like open-source AI can literally be lifesavers. Imagine AI helping with diagnoses or flagging diseases early.

McKinsey thinks the global healthcare sector could see up to a $260 billion boost in value once AI is properly rolled out. A 2024 analysis even showed that open models in healthcare can go toe-to-toe with proprietary ones, meaning hospitals can get tailored, privacy-friendly OSAI without skimping on performance.

And it’s not just about the tech; it’s about the people. The report mentions that AI-related skills could see wages jump by up to 20%. That’s a big deal, and it really underlines why we need to be thinking about training and development for this new AI era.

Hilary Carter, SVP of Research at The Linux Foundation, said: “The findings in this report make it clear: open-source AI is a catalyst for economic growth and opportunity. As adoption scales across sectors, we’re seeing measurable cost savings, increased productivity and rising demand for AI-related skills that can boost wages and career prospects.

“Open-source AI is not only transforming how businesses operate—it’s reshaping how people work.”

So, the takeaway? Open AI models are fast becoming the standard, the very foundation of future breakthroughs. They’re pushing growth and healthy competition by making powerful AI tools available without an eye-watering price tag.

The Linux Foundation’s report isn’t just cheerleading; it lays out the hard numbers to show why open-source AI is crucial for a robust, stable, and forward-looking economy.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.
The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
  • Thales: AI and quantum threats top security agendas

According to Thales, AI and quantum threats have catapulted to the top of the worry list for organisations wrestling with data security. That’s the key takeaway from the cybersecurity giant’s 2025 Data Threat Report, an annual deep dive into the latest data security threats, emerging trends, and hot topics.

This year’s findings are stark: almost seven out of ten organisations now see the sheer speed of AI development – especially where generative AI is concerned – as the number one security headache related to its adoption. This anxiety isn’t just about pace; it’s also fed by concerns over a fundamental lack of integrity in AI systems (flagged by 64% of those surveyed) and a troubling deficit in trustworthiness (a worry for 57%).

Generative AI is a data-hungry beast, relying heavily on high-quality, often sensitive, information for core functions like training models, making inferences, and, of course, generating content.

As we make rapid advancements in “agentic AI” – systems that can act more autonomously – the pressure to ensure high-calibre data quality becomes even more critical. After all, sound decision-making and reliable actions from AI systems depend entirely on the data they’re fed.

Many organisations are already diving in, with a third of respondents indicating generative AI is either being actively integrated or is already a force for transformation within their operations.

Security threats increase as organisations embrace generative AI

As generative AI throws up a complex web of data security challenges while simultaneously offering strategic avenues to bolster defences, its growing integration signals a distinct shift.
Businesses are moving beyond just dipping their toes in the AI water; they’re now looking at more mature, operational deployments.

Interestingly, while most respondents named the swift uptake of GenAI as their biggest security concern, those further along the AI adoption curve aren’t hitting the pause button to completely lock down their systems or fine-tune their tech stacks before forging ahead. This dash for rapid transformation – often overshadowing efforts to ensure organisational readiness – could mean these companies are, perhaps unwittingly, creating their own most serious security weak spots.

Eric Hanselman, Chief Analyst at S&P Global Market Intelligence 451 Research, said: “The fast-evolving GenAI landscape is pressuring enterprises to move quickly, sometimes at the cost of caution, as they race to stay ahead of the adoption curve.

“Many enterprises are deploying GenAI faster than they can fully understand their application architectures, compounded by the rapid spread of SaaS tools embedding GenAI capabilities, adding layers of complexity and risk.”

On a more positive note, 73% of respondents report they are putting money into AI-specific security tools to counter threats, either through fresh budgets or by reshuffling existing resources. Those making AI security a priority are also diversifying their approaches: over two-thirds have sourced tools from their cloud providers, three in five are turning to established security vendors, and almost half are looking to new or emerging startups for solutions.

What’s particularly telling is how quickly security for generative AI has climbed the spending charts, nabbing the second spot in ranked-choice voting, just pipped to the post by the perennial concern of cloud security.
This shift powerfully underscores the growing recognition of AI-driven risks and the urgent need for specialised defences to counter them.

Data breaches show modest decline, though threats remain elevated

While the nightmare of a data breach still looms large for many, their reported frequency has actually dipped slightly over the past few years.

Back in 2021, 56% of enterprises surveyed said they’d experienced a breach at some point; that figure has eased to 45% in the 2025 report. Delving deeper, the percentage of respondents reporting a breach within the last 12 months has dropped from 23% in 2021 to a more encouraging 14% in 2025.

When it comes to the persistent villains of the threat landscape, malware continues to lead the pack, holding onto its top spot since 2021. Phishing has craftily climbed into second place, nudging ransomware down to third.

As for who’s causing the most concern, external actors dominate: hacktivists are currently seen as the primary menace, followed by nation-state actors. Human error, whilst still a significant factor, has slipped to third, down one position from the previous year.

Vendors pressed on readiness for quantum threats

The 2025 Thales Data Threat Report also casts a revealing light on the growing unease within most organisations about quantum-related security risks.

The top threat here, cited by a hefty 63% of respondents, is the looming danger of “future encryption compromise.” This is the unsettling prospect that powerful quantum computers could one day shatter current or even future encryption algorithms, exposing data previously thought to be securely locked away. Hot on its heels, 61% identified key distribution vulnerabilities, where quantum breakthroughs could undermine the methods we use to securely exchange encryption keys.
Furthermore, 58% highlighted the “harvest now, decrypt later” (HNDL) threat – a chilling scenario where encrypted data, scooped up today, could be decrypted by powerful quantum machines in the future.

In response to these gathering clouds, half of the organisations surveyed are taking a hard look at their current encryption strategies, with 60% already prototyping or evaluating post-quantum cryptography (PQC) solutions. However, it seems trust is a scarce commodity, as only a third are pinning their hopes on telecom or cloud providers to navigate this complex transition for them.

Todd Moore, Global VP of Data Security Products at Thales, commented: “The clock is ticking on post-quantum readiness. It’s encouraging that three out of five organisations are already prototyping new ciphers, but deployment timelines are tight and falling behind could leave critical data exposed.

“Even with clear timelines for transitioning to PQC algorithms, the pace of encryption change has been slower than expected due to a mix of legacy systems, complexity, and the challenge of balancing innovation with security.”

There’s clearly a lot more work to be done to get operational data security truly up to speed, not just to support the advanced capabilities of emerging technologies like generative AI, but also to lay down a secure foundation for whatever threats are just around the corner.

Want to learn more about cybersecurity and the cloud from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Digital Transformation Week, IoT Tech Expo, Blockchain Expo, and AI & Big Data Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
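The “harvest now, decrypt later” risk covered in this piece is commonly reasoned about with Mosca’s inequality: if the number of years your data must stay confidential (x) plus the years a PQC migration will take (y) exceeds the estimated years until a cryptographically relevant quantum computer arrives (z), already-harvested ciphertext is at risk. A minimal sketch in Python; the function name and the year figures in the usage example are illustrative assumptions, not numbers from the Thales report:

```python
def harvest_now_decrypt_later_risk(shelf_life_years: float,
                                   migration_years: float,
                                   years_until_crqc: float) -> bool:
    """Mosca's inequality: data is at risk if x + y > z.

    x = how long the data must remain confidential
    y = how long migrating to post-quantum cryptography will take
    z = estimated years until a cryptographically relevant
        quantum computer (CRQC) exists
    """
    return shelf_life_years + migration_years > years_until_crqc

# Illustrative assumptions only: medical records kept confidential for
# 25 years, a 5-year PQC migration, a CRQC assumed 15 years away.
print(harvest_now_decrypt_later_risk(25, 5, 15))  # True -> migrate now
print(harvest_now_decrypt_later_risk(2, 3, 15))   # False -> some slack
```

The point of the inequality is that migration has to start long before a quantum computer exists: data with a long shelf life can already be “at risk” today.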
  • Dell unveils Nvidia Blackwell-based AI acceleration platform

Dell Technologies used Dell Technologies World in Las Vegas to announce its latest generation of AI acceleration servers, equipped with Nvidia’s Blackwell Ultra GPUs.

Dell claims the systems deliver up to four times faster AI training than previous generations, as the company expands its AI Factory partnership with Nvidia amid intense competition in the enterprise AI hardware market.

The servers arrive as organisations move from experimental AI projects to production-scale implementations, creating demand for more sophisticated computing infrastructure.

The new lineup features air-cooled PowerEdge XE9780 and XE9785 servers, designed for conventional data centres, and liquid-cooled XE9780L and XE9785L variants, optimised for whole-rack deployment. The advanced systems support configurations with up to 192 Nvidia Blackwell Ultra GPUs with direct-to-chip liquid cooling, expandable to 256 GPUs per Dell IR7000 rack.

“We’re on a mission to bring AI to millions of customers around the world,” said Michael Dell, the eponymous chairman and chief executive officer. “Our job is to make AI more accessible. With the Dell AI Factory with Nvidia, enterprises can manage the entire AI lifecycle in use cases, from deployment to training, at any scale.”

Dell’s self-designation as “the world’s top provider of AI-centric infrastructure” appears calculated as companies try to deploy AI and navigate technical hurdles.

Critical assessment of Dell’s AI hardware strategy

While Dell’s AI acceleration hardware advancements appear impressive on the basis of tech specs, several factors will ultimately determine their market impact.
The company has withheld pricing information for these high-end systems, which will undoubtedly represent substantial capital investments for organisations considering deployment. The cooling infrastructure alone, particularly for the liquid-cooled variants, may require data centre modifications for many potential customers, adding complexity and cost beyond the server hardware itself.

Industry observers note that Dell faces intensifying competition in the AI hardware space from companies like Super Micro Computer, which has aggressively targeted the AI server market with similar offerings. However, Super Micro has recently encountered production cost challenges and margin pressure, potentially creating an opening for Dell if it can deliver competitive pricing.

Jensen Huang, founder and CEO of Nvidia, emphasised the transformative potential of these systems: “AI factories are the infrastructure of modern industry, generating intelligence to power work in healthcare, finance and manufacturing.
With Dell Technologies, we’re offering the broadest line of Blackwell AI systems to serve AI factories in clouds, enterprises and at the edge.”

Comprehensive AI acceleration ecosystem

Dell’s AI acceleration strategy extends beyond server hardware to encompass networking, storage, and software components.

The networking portfolio now includes the PowerSwitch SN5600 and SN2201 switches and Nvidia Quantum-X800 InfiniBand switches, capable of up to 800 gigabits per second of throughput, with Dell ProSupport and Deployment Services. The Dell AI Data Platform has received upgrades to enhance data management for AI applications, including a denser ObjectScale system with Nvidia BlueField-3 and Spectrum-4 networking integrations.

In software, Dell offers the Nvidia AI Enterprise software platform directly, featuring Nvidia NIM, NeMo microservices, and Blueprints to streamline AI development workflows. The company also introduced Managed Services for its AI Factory with Nvidia, providing monitoring, reporting, and maintenance to help organisations address expertise gaps, as skilled professionals remain in short supply.

Availability timeline and market implications

Dell’s AI acceleration platform rollout follows a staggered schedule throughout 2025:

- Air-cooled PowerEdge XE9780 and XE9785 servers with NVIDIA HGX B300 GPUs will be available in the second half of 2025
- The liquid-cooled PowerEdge XE9780L and XE9785L variants are expected later this year
- The PowerEdge XE7745 server with Nvidia RTX Pro 6000 Blackwell Server Edition GPUs will launch in July 2025
- The PowerEdge XE9712 featuring GB300 NVL72 will arrive in the second half of 2025

Dell also plans to support Nvidia’s Vera CPU and Vera Rubin platform, signalling a longer-term commitment to expanding its AI ecosystem beyond this product lineup.

Strategic analysis of the AI acceleration market

Dell’s push into AI acceleration hardware reflects a strategic shift to capitalise on the artificial intelligence boom and to leverage its established enterprise customer relationships. As organisations realise the complexity and expense of implementing AI at scale, Dell appears to be positioning itself as a comprehensive solution provider rather than merely a hardware vendor.

However, the success of Dell’s AI acceleration initiative will ultimately depend on how effectively these systems deliver measurable business value. Organisations investing in high-end infrastructure will demand operational improvements and competitive advantages that justify the significant capital expenditure.

The partnership with Nvidia gives Dell access to next-generation AI accelerator technology, but it also creates dependency on Nvidia’s supply chain and product roadmap. Given persistent chip shortages and extraordinary demand for AI accelerators, Dell’s ability to secure adequate GPU allocations will prove crucial for meeting customer expectations.

See also: Dell, Intel and University of Cambridge deploy the UK’s fastest AI supercomputer

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
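As a back-of-the-envelope illustration of the density figures cited in this piece (up to 256 GPUs per Dell IR7000 rack), a capacity-planning sketch might look like the following. The function name and the 1,000-GPU target cluster are illustrative assumptions for the example, not Dell figures:

```python
import math

GPUS_PER_IR7000_RACK = 256  # maximum GPU count per rack cited by Dell

def racks_needed(target_gpus: int,
                 gpus_per_rack: int = GPUS_PER_IR7000_RACK) -> int:
    """Smallest number of racks that can host target_gpus accelerators."""
    return math.ceil(target_gpus / gpus_per_rack)

# Hypothetical cluster of 1,000 Blackwell Ultra GPUs:
print(racks_needed(1000))  # 4 racks (3 full, plus 232 GPUs in the fourth)
```

Real sizing would of course also account for power, cooling, and network topology, which is exactly where the liquid-cooled variants and data centre modifications mentioned above come in.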
    WWW.ARTIFICIALINTELLIGENCE-NEWS.COM
    Dell unveils Nvidia Blackwell-based AI acceleration platform
Dell Technologies used Dell Technologies World in Las Vegas to announce its latest generation of AI acceleration servers, equipped with Nvidia’s Blackwell Ultra GPUs.

Dell claims the systems deliver up to four times faster AI training than the previous generation, as the company expands its AI Factory partnership with Nvidia amid intense competition in the enterprise AI hardware market. The servers arrive as organisations move from experimental AI projects to production-scale implementations, creating demand for more sophisticated computing infrastructure.

The new lineup features air-cooled PowerEdge XE9780 and XE9785 servers, designed for conventional data centres, and liquid-cooled XE9780L and XE9785L variants, optimised for whole-rack deployment. The systems support configurations with up to 192 Nvidia Blackwell Ultra GPUs with direct-to-chip liquid cooling, expandable to 256 GPUs per Dell IR7000 rack.

“We’re on a mission to bring AI to millions of customers around the world,” said Michael Dell, the company’s eponymous chairman and chief executive officer. “Our job is to make AI more accessible. With the Dell AI Factory with Nvidia, enterprises can manage the entire AI lifecycle in use cases, from deployment to training, at any scale.”

Dell’s self-designation as “the world’s top provider of AI-centric infrastructure” appears calculated as companies try to deploy AI while navigating technical hurdles.

Critical assessment of Dell’s AI hardware strategy

While Dell’s AI acceleration hardware appears impressive on the basis of its tech specs, several factors will ultimately determine its market impact.
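The rack-scale figures quoted above (192 GPUs per configuration, up to 256 per IR7000 rack) can be turned into a rough capacity estimate. The sketch below is a back-of-envelope illustration only: the per-GPU HBM figure is an assumption for illustration, not a number from the announcement.

```python
# Back-of-envelope sizing for the rack configurations described above:
# up to 192 Blackwell Ultra GPUs per configuration, expandable to
# 256 GPUs per Dell IR7000 rack.
# NOTE: the per-GPU memory figure is an assumption for illustration,
# not a specification quoted in the article.

HBM_PER_GPU_GB = 288  # assumed HBM capacity per Blackwell Ultra GPU

def rack_capacity(num_gpus: int, hbm_per_gpu_gb: int = HBM_PER_GPU_GB) -> dict:
    """Aggregate GPU memory for a rack-scale deployment."""
    total_gb = num_gpus * hbm_per_gpu_gb
    return {
        "gpus": num_gpus,
        "total_hbm_tb": total_gb / 1024,  # binary TB, for simplicity
    }

for gpus in (192, 256):  # base configuration vs. full IR7000 rack
    cap = rack_capacity(gpus)
    print(f"{cap['gpus']} GPUs -> ~{cap['total_hbm_tb']:.1f} TB aggregate HBM")
```

Numbers like these are one reason the aggregate memory of a single rack now exceeds what entire clusters offered a few years ago, and why cooling and power become the binding constraints discussed below.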
The company has withheld pricing information for these high-end systems, which will undoubtedly represent substantial capital investments for organisations considering deployment. The cooling infrastructure alone, particularly for the liquid-cooled variants, may require data centre modifications for many potential customers, adding complexity and cost beyond the server hardware itself.

Industry observers note that Dell faces intensifying competition in the AI hardware space from companies like Super Micro Computer, which has aggressively targeted the AI server market with similar offerings. However, Super Micro has recently encountered production cost challenges and margin pressure, potentially creating an opening for Dell if it can deliver competitive pricing.

Jensen Huang, founder and CEO of Nvidia, emphasised the transformative potential of these systems: “AI factories are the infrastructure of modern industry, generating intelligence to power work in healthcare, finance and manufacturing. 
With Dell Technologies, we’re offering the broadest line of Blackwell AI systems to serve AI factories in clouds, enterprises and at the edge.”

Comprehensive AI acceleration ecosystem

Dell’s AI acceleration strategy extends beyond server hardware to encompass networking, storage, and software components.

The networking portfolio now includes the PowerSwitch SN5600 and SN2201 switches (part of Nvidia’s Spectrum-X platform) and Nvidia Quantum-X800 InfiniBand switches, capable of up to 800 gigabits per second of throughput, backed by Dell ProSupport and Deployment Services.

The Dell AI Data Platform has received upgrades to enhance data management for AI applications, including a denser ObjectScale system with Nvidia BlueField-3 and Spectrum-4 networking integrations.

In software, Dell offers the Nvidia AI Enterprise platform directly, featuring Nvidia NIM, NeMo microservices, and Blueprints to streamline AI development workflows. The company also introduced Managed Services for its AI Factory with Nvidia, providing monitoring, reporting, and maintenance to help organisations address expertise gaps, as skilled professionals remain in short supply.

Availability timeline and market implications

Dell’s AI acceleration platform rollout follows a staggered schedule throughout 2025:

  • Air-cooled PowerEdge XE9780 and XE9785 servers with Nvidia HGX B300 GPUs will be available in the second half of 2025
  • Liquid-cooled PowerEdge XE9780L and XE9785L variants are expected later this year
  • The PowerEdge XE7745 server with Nvidia RTX Pro 6000 Blackwell Server Edition GPUs will launch in July 2025
  • The PowerEdge XE9712, featuring the GB300 NVL72, will arrive in the second half of 2025

Dell also plans to support Nvidia’s Vera CPU and Vera Rubin platform, signalling a longer-term commitment to expanding its AI ecosystem beyond this product lineup.

Strategic analysis of the AI acceleration market

Dell’s push into AI acceleration hardware reflects a strategic shift to capitalise on the artificial intelligence boom, 
and to make use of its established enterprise customer relationships.

As organisations come to grips with the complexity and expense of implementing AI at scale, Dell is positioning itself as a comprehensive solution provider rather than merely a hardware vendor. However, the success of Dell’s AI acceleration initiative will ultimately depend on how effectively the systems deliver measurable business value: organisations investing in high-end infrastructure will demand operational improvements and competitive advantages that justify the significant capital expenditure.

The partnership with Nvidia gives Dell access to next-generation AI accelerator technology, but it also creates dependency on Nvidia’s supply chain and product roadmap. Given persistent chip shortages and extraordinary demand for AI accelerators, Dell’s ability to secure adequate GPU allocations will prove crucial for meeting customer expectations.

(Photo by Nvidia)

See also: Dell, Intel and University of Cambridge deploy the UK’s fastest AI supercomputer

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.