• Calling on LLMs: New NVIDIA AI Blueprint Helps Automate Telco Network Configuration

    Telecom companies last year spent nearly $295 billion in capital expenditures and over $1 trillion in operating expenditures.
    These large expenses are due in part to laborious manual processes that telcos face when operating networks that require continuous optimizations.
    For example, telcos must constantly tune network parameters for tasks — such as transferring calls from one network to another or distributing network traffic across multiple servers — based on the time of day, user behavior, mobility and traffic type.
    These factors directly affect network performance, user experience and energy consumption.
    To automate these optimization processes and save costs for telcos across the globe, NVIDIA today unveiled at GTC Paris its first AI Blueprint for telco network configuration.
    At the blueprint’s core are customized large language models trained specifically on telco network data — as well as the full technical and operational architecture for turning the LLMs into an autonomous, goal-driven AI agent for telcos.
    Automate Network Configuration With the AI Blueprint
    NVIDIA AI Blueprints — available on build.nvidia.com — are customizable AI workflow examples. They include reference code, documentation and deployment tools that show enterprise developers how to deliver business value with NVIDIA NIM microservices.
    The AI Blueprint for telco network configuration — built with BubbleRAN 5G solutions and datasets — enables developers, network engineers and telecom providers to automatically optimize the configuration of network parameters using agentic AI.
    This can streamline operations, reduce costs and significantly improve service quality by embedding continuous learning and adaptability directly into network infrastructures.
    Traditionally, network configurations required manual intervention or followed rigid rules to adapt to dynamic network conditions. These approaches limited adaptability and increased operational complexities, costs and inefficiencies.
    The new blueprint helps shift telco operations from relying on static, rules-based systems to operations based on dynamic, AI-driven automation. It enables developers to build advanced, telco-specific AI agents that make real-time, intelligent decisions and autonomously balance trade-offs — such as network speed versus interference, or energy savings versus utilization — without human input.
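    A goal-driven agent of this kind can be pictured as a small control loop: observe the network, score candidate parameter settings against a goal that encodes the trade-off, and keep the best one. The sketch below is purely illustrative; the objective, parameter names and toy environment are assumptions for this example and are not taken from the NVIDIA blueprint.

    ```python
    # Hypothetical sketch: a goal-driven loop that nudges one network
    # parameter (transmit power) to balance throughput against
    # interference. All names and thresholds are illustrative, not
    # drawn from the NVIDIA blueprint.

    def score(throughput_mbps: float, interference_db: float) -> float:
        """Fold two competing objectives into a single goal value."""
        return throughput_mbps - 2.0 * max(0.0, interference_db - 10.0)

    def step(tx_power: float, observe) -> float:
        """Try a small decrease and increase; keep whichever scores best."""
        candidates = [tx_power - 1.0, tx_power, tx_power + 1.0]
        return max(candidates, key=lambda p: score(*observe(p)))

    # Toy environment: more power raises throughput but also interference.
    def observe(tx_power: float):
        throughput = 10.0 * tx_power - 0.4 * tx_power ** 2
        interference = 1.5 * tx_power
        return throughput, interference

    power = 5.0
    for _ in range(20):
        power = step(power, observe)
    ```

    The loop settles where raising power further would cost more in the interference penalty than it gains in throughput, which is the kind of trade-off the blueprint's agents are described as resolving autonomously.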
    Powered and Deployed by Industry Leaders
    Trained on 5G data generated by BubbleRAN, and deployed on the BubbleRAN 5G O-RAN platform, the blueprint provides telcos with insight into how to set various parameters to reach performance goals, like achieving a certain bitrate while choosing an acceptable signal-to-noise ratio — a measure that impacts voice quality and thus user experience.
    With the new AI Blueprint, network engineers can confidently set initial parameter values and update them as demanded by continuous network changes.
    Norway-based Telenor Group, which serves over 200 million customers globally, is the first telco to integrate the AI Blueprint for telco network configuration as part of its initiative to deploy intelligent, autonomous networks that meet the performance and agility demands of 5G and beyond.
    “The blueprint is helping us address configuration challenges and enhance quality of service during network installation,” said Knut Fjellheim, chief technology innovation officer at Telenor Maritime. “Implementing it is part of our push toward network automation and follows the successful deployment of agentic AI for real-time network slicing in a private 5G maritime use case.”
    Industry Partners Deploy Other NVIDIA-Powered Autonomous Network Technologies
    The AI Blueprint for telco network configuration is just one of many announcements at NVIDIA GTC Paris showcasing how the telecom industry is using agentic AI to make autonomous networks a reality.
    Beyond the blueprint, leading telecom companies and solutions providers are tapping into NVIDIA accelerated computing, software and microservices to provide breakthrough innovations poised to vastly improve networks and communications services — accelerating the progress to autonomous networks and improving customer experiences.
    NTT DATA is powering its agentic platform for telcos with NVIDIA accelerated compute and the NVIDIA AI Enterprise software platform. Its first agentic use case is focused on network alarms management, where NVIDIA NIM microservices help automate and power observability, troubleshooting, anomaly detection and resolution with closed-loop ticketing.
    Tata Consultancy Services is delivering agentic AI solutions for telcos built on NVIDIA DGX Cloud and using NVIDIA AI Enterprise to develop, fine-tune and integrate large telco models into AI agent workflows. These range from billing and revenue assurance and autonomous network management to hybrid edge-cloud distributed inference.
    For example, the company’s anomaly management agentic AI model includes real-time detection and resolution of network anomalies and service performance optimization. This increases business agility and improves operational efficiencies by up to 40% by eliminating human-intensive toil, overhead and cross-departmental silos.
    Prodapt has introduced an autonomous operations workflow for networks, powered by NVIDIA AI Enterprise, that offers agentic AI capabilities to support autonomous telecom networks. AI agents can autonomously monitor networks, detect anomalies in real time, initiate diagnostics, analyze root causes of issues using historical data and correlation techniques, automatically execute corrective actions, and generate, enrich and assign incident tickets through integrated ticketing systems.
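    The monitor, detect, diagnose and ticket pipeline described above can be sketched in a few lines. The z-score detector and ticket fields below are illustrative stand-ins; a real deployment would call actual telemetry and ticketing APIs rather than these hypothetical helpers.

    ```python
    # Hypothetical sketch of a monitor -> detect -> ticket loop using a
    # simple z-score anomaly detector over a KPI time series. Names and
    # fields are illustrative, not a vendor API.
    from statistics import mean, stdev

    def detect_anomaly(history: list, latest: float, threshold: float = 3.0) -> bool:
        """Flag the latest KPI sample if it deviates > threshold sigmas."""
        if len(history) < 2:
            return False
        mu, sigma = mean(history), stdev(history)
        return sigma > 0 and abs(latest - mu) / sigma > threshold

    def open_ticket(metric: str, value: float) -> dict:
        """Generate an enriched incident ticket (stand-in for a real ITSM call)."""
        return {"metric": metric, "value": value, "severity": "major",
                "suggested_action": "run diagnostics on affected cell"}

    latency_ms = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2]  # recent healthy samples
    sample = 45.0                                      # incoming measurement
    tickets = []
    if detect_anomaly(latency_ms, sample):
        tickets.append(open_ticket("latency_ms", sample))
    ```

    In production such a loop would also feed root-cause analysis with historical correlation data before executing corrective actions, as the workflows above describe.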
    Accenture announced its new portfolio of agentic AI solutions for telecommunications through its AI Refinery platform, built on NVIDIA AI Enterprise software and accelerated computing.
    The first available solution, the NOC Agentic App, boosts network operations center tasks by using a generative AI-driven, nonlinear agentic framework to automate processes such as incident and fault management, root cause analysis and configuration planning. Using the Llama 3.1 70B NVIDIA NIM microservice and the AI Refinery Distiller Framework, the NOC Agentic App orchestrates networks of intelligent agents for faster, more efficient decision-making.
    Infosys is announcing its agentic autonomous operations platform, called Infosys Smart Network Assurance (ISNA), designed to accelerate telecom operators’ journeys toward fully autonomous network operations.
    ISNA helps address long-standing operational challenges for telcos — such as limited automation and high average time to repair — with an integrated, AI-driven platform that reduces operational costs by up to 40% and shortens fault resolution times by up to 30%. NVIDIA NIM and NeMo microservices enhance the platform’s reasoning and hallucination-detection capabilities, reduce latency and increase accuracy.
    Get started with the new blueprint today.
    Learn more about the latest AI advancements for telecom and other industries at NVIDIA GTC Paris, running through Thursday, June 12, at VivaTech, including a keynote from NVIDIA founder and CEO Jensen Huang and a special address from Ronnie Vasishta, senior vice president of telecom at NVIDIA. Plus, hear from industry leaders in a panel session with Orange, Swisscom, Telenor and NVIDIA.
  • European Broadcasting Union and NVIDIA Partner on Sovereign AI to Support Public Broadcasters

    In a new effort to advance sovereign AI for European public service media, NVIDIA and the European Broadcasting Union (EBU) are working together to give the media industry access to high-quality and trusted cloud and AI technologies.
    Announced at NVIDIA GTC Paris at VivaTech, NVIDIA’s collaboration with the EBU — the world’s leading alliance of public service media with more than 110 member organizations in 50+ countries, reaching an audience of over 1 billion — focuses on helping build sovereign AI and cloud frameworks, driving workforce development and cultivating an AI ecosystem to create a more equitable, accessible and resilient European media landscape.
    The work will create better foundations for public service media to benefit from European cloud infrastructure and AI services that are exclusively governed by European policy, comply with European data protection and privacy rules, and embody European values.
    Sovereign AI ensures nations can develop and deploy artificial intelligence using local infrastructure, datasets and expertise. By investing in it, European countries can preserve their cultural identity, enhance public trust and support innovation specific to their needs.
    “We are proud to collaborate with NVIDIA to drive the development of sovereign AI and cloud services,” said Michael Eberhard, chief technology officer of public broadcaster ARD/SWR, and chair of the EBU Technical Committee. “By advancing these capabilities together, we’re helping ensure that powerful, compliant and accessible media services are made available to all EBU members — powering innovation, resilience and strategic autonomy across the board.”

    Empowering Media Innovation in Europe
    To support the development of sovereign AI technologies, NVIDIA and the EBU will establish frameworks that prioritize independence and public trust, helping ensure that AI serves the interests of Europeans while preserving the autonomy of media organizations.
    Through this collaboration, NVIDIA and the EBU will develop hybrid cloud architectures designed to meet the highest standards of European public service media. The EBU will contribute its Dynamic Media Facility and Media eXchange Layer architecture, aiming to enable interoperability and scalability for workflows, as well as cost- and energy-efficient AI training and inference. Following open-source principles, this work aims to create an accessible, dynamic technology ecosystem.
    The collaboration will also provide public service media companies with the tools to deliver personalized, contextually relevant services and content recommendation systems, with a focus on transparency, accountability and cultural identity. This will be realized through investment in sovereign cloud and AI infrastructure and software platforms such as NVIDIA AI Enterprise, custom foundation models, large language models trained with local data, and retrieval-augmented generation technologies.
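    At the heart of the retrieval-augmented generation technologies mentioned above is a retrieval step: find the stored passage most relevant to a query before handing it to a language model. The toy vectors and labels below are assumptions for illustration only, not an EBU or NVIDIA dataset.

    ```python
    # Minimal illustration of the retrieval step in retrieval-augmented
    # generation (RAG): rank stored passages by cosine similarity to a
    # query embedding. Vectors and labels are toy stand-ins.
    import math

    def cosine(a, b):
        """Cosine similarity between two 2-D embedding vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))

    # Tiny "index" of pre-embedded passages (hypothetical labels).
    corpus = {
        "broadcast schedule": [0.9, 0.1],
        "news archive":       [0.2, 0.8],
    }

    def retrieve(query_vec):
        """Return the passage label whose embedding best matches the query."""
        return max(corpus, key=lambda k: cosine(query_vec, corpus[k]))
    ```

    The retrieved passage is then inserted into the model's prompt, which is what lets a recommendation or news service ground its answers in local, trusted data.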
    As part of the collaboration, NVIDIA is also making available resources from its Deep Learning Institute, offering European media organizations comprehensive training programs to create an AI-ready workforce. This will support the EBU’s efforts to help ensure news integrity in the age of AI.
    In addition, the EBU and its partners are investing in local data centers and cloud platforms that support sovereign technologies, such as NVIDIA GB200 Grace Blackwell Superchip, NVIDIA RTX PRO Servers, NVIDIA DGX Cloud and NVIDIA Holoscan for Media — helping members of the union achieve secure and cost- and energy-efficient AI training, while promoting AI research and development.
    Partnering With Public Service Media for Sovereign Cloud and AI
    Collaboration within the media sector is essential for the development and application of comprehensive standards and best practices that ensure the creation and deployment of sovereign European cloud and AI.
    By engaging with independent software vendors, data center providers, cloud service providers and original equipment manufacturers, NVIDIA and the EBU aim to create a unified approach to sovereign cloud and AI.
    This work will also facilitate discussions between the cloud and AI industry and European regulators, helping ensure the development of practical solutions that benefit both the general public and media organizations.
    “Building sovereign cloud and AI capabilities based on EBU’s Dynamic Media Facility and Media eXchange Layer architecture requires strong cross-industry collaboration,” said Antonio Arcidiacono, chief technology and innovation officer at the EBU. “By collaborating with NVIDIA, as well as a broad ecosystem of media technology partners, we are fostering a shared foundation for trust, innovation and resilience that supports the growth of European media.”
    Learn more about the EBU.
    Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions. 
    #european #broadcasting #union #nvidia #partner
    European Broadcasting Union and NVIDIA Partner on Sovereign AI to Support Public Broadcasters
    In a new effort to advance sovereign AI for European public service media, NVIDIA and the European Broadcasting Unionare working together to give the media industry access to high-quality and trusted cloud and AI technologies. Announced at NVIDIA GTC Paris at VivaTech, NVIDIA’s collaboration with the EBU — the world’s leading alliance of public service media with more than 110 member organizations in 50+ countries, reaching an audience of over 1 billion — focuses on helping build sovereign AI and cloud frameworks, driving workforce development and cultivating an AI ecosystem to create a more equitable, accessible and resilient European media landscape. The work will create better foundations for public service media to benefit from European cloud infrastructure and AI services that are exclusively governed by European policy, comply with European data protection and privacy rules, and embody European values. Sovereign AI ensures nations can develop and deploy artificial intelligence using local infrastructure, datasets and expertise. By investing in it, European countries can preserve their cultural identity, enhance public trust and support innovation specific to their needs. “We are proud to collaborate with NVIDIA to drive the development of sovereign AI and cloud services,” said Michael Eberhard, chief technology officer of public broadcaster ARD/SWR, and chair of the EBU Technical Committee. “By advancing these capabilities together, we’re helping ensure that powerful, compliant and accessible media services are made available to all EBU members — powering innovation, resilience and strategic autonomy across the board.” Empowering Media Innovation in Europe To support the development of sovereign AI technologies, NVIDIA and the EBU will establish frameworks that prioritize independence and public trust, helping ensure that AI serves the interests of Europeans while preserving the autonomy of media organizations. 
Through this collaboration, NVIDIA and the EBU will develop hybrid cloud architectures designed to meet the highest standards of European public service media. The EBU will contribute its Dynamic Media Facilityand Media eXchange Layerarchitecture, aiming to enable interoperability and scalability for workflows, as well as cost- and energy-efficient AI training and inference. Following open-source principles, this work aims to create an accessible, dynamic technology ecosystem. The collaboration will also provide public service media companies with the tools to deliver personalized, contextually relevant services and content recommendation systems, with a focus on transparency, accountability and cultural identity. This will be realized through investment in sovereign cloud and AI infrastructure and software platforms such as NVIDIA AI Enterprise, custom foundation models, large language models trained with local data, and retrieval-augmented generation technologies. As part of the collaboration, NVIDIA is also making available resources from its Deep Learning Institute, offering European media organizations comprehensive training programs to create an AI-ready workforce. This will support the EBU’s efforts to help ensure news integrity in the age of AI. In addition, the EBU and its partners are investing in local data centers and cloud platforms that support sovereign technologies, such as NVIDIA GB200 Grace Blackwell Superchip, NVIDIA RTX PRO Servers, NVIDIA DGX Cloud and NVIDIA Holoscan for Media — helping members of the union achieve secure and cost- and energy-efficient AI training, while promoting AI research and development. Partnering With Public Service Media for Sovereign Cloud and AI Collaboration within the media sector is essential for the development and application of comprehensive standards and best practices that ensure the creation and deployment of sovereign European cloud and AI. 
By engaging with independent software vendors, data center providers, cloud service providers and original equipment manufacturers, NVIDIA and the EBU aim to create a unified approach to sovereign cloud and AI. This work will also facilitate discussions between the cloud and AI industry and European regulators, helping ensure the development of practical solutions that benefit both the general public and media organizations. “Building sovereign cloud and AI capabilities based on EBU’s Dynamic Media Facility and Media eXchange Layer architecture requires strong cross-industry collaboration,” said Antonio Arcidiacono, chief technology and innovation officer at the EBU. “By collaborating with NVIDIA, as well as a broad ecosystem of media technology partners, we are fostering a shared foundation for trust, innovation and resilience that supports the growth of European media.” Learn more about the EBU. Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.  #european #broadcasting #union #nvidia #partner
  • NVIDIA CEO Drops the Blueprint for Europe’s AI Boom

    At GTC Paris — held alongside VivaTech, Europe’s largest tech event — NVIDIA founder and CEO Jensen Huang delivered a clear message: Europe isn’t just adopting AI — it’s building it.
    “We now have a new industry, an AI industry, and it’s now part of the new infrastructure, called intelligence infrastructure, that will be used by every country, every society,” Huang said, addressing an audience gathered online and at the iconic Dôme de Paris.
    From exponential inference growth to quantum breakthroughs, and from infrastructure to industry, agentic AI to robotics, Huang outlined how the region is laying the groundwork for an AI-powered future.

    A New Industrial Revolution
    At the heart of this transformation, Huang explained, are systems like GB200 NVL72 — “one giant GPU” and NVIDIA’s most powerful AI platform yet — now in full production and powering everything from sovereign models to quantum computing.
    “This machine was designed to be a thinking machine, a thinking machine, in the sense that it reasons, it plans, it spends a lot of time talking to itself,” Huang said, walking the audience through the size and scale of these machines and their performance.
    At GTC Paris, Huang showed audience members the innards of some of NVIDIA’s latest hardware.
    There’s more coming, with Huang saying NVIDIA’s partners are now producing 1,000 GB200 systems a week, “and this is just the beginning.” He walked the audience through available systems, from the tiny NVIDIA DGX Spark to rack-mounted RTX PRO Servers.
    Huang explained that NVIDIA is working to help countries use technologies like these to build both AI infrastructure — services built for third parties to use and innovate on — and AI factories, which companies build for their own use, to generate revenue.
    NVIDIA is partnering with European governments, telcos and cloud providers to deploy NVIDIA technologies across the region. NVIDIA is also expanding its network of technology centers across Europe — including new hubs in Finland, Germany, Spain, Italy and the U.K. — to accelerate skills development and quantum growth.
    Quantum Meets Classical
    Europe’s quantum ambitions just got a boost.
    The NVIDIA CUDA-Q platform is live on Denmark’s Gefion supercomputer, opening new possibilities for hybrid AI and quantum engineering. In addition, Huang announced that CUDA-Q is now available on NVIDIA Grace Blackwell systems.
    Across the continent, NVIDIA is partnering with supercomputing centers and quantum hardware builders to advance hybrid quantum-AI research and accelerate quantum error correction.
    “Quantum computing is reaching an inflection point,” Huang said. “We are within reach of being able to apply quantum computing, quantum classical computing, in areas that can solve some interesting problems in the coming years.”
    Sovereign Models, Smarter Agents
    European developers want more control over their models. Enter NVIDIA Nemotron, designed to help build large language models tuned to local needs.
    “And so now you know that you have access to an enhanced open model that is still open, that is top of the leader chart,” Huang said.
    These models will be coming to Perplexity, a reasoning search engine, enabling secure, multilingual AI deployment across Europe.
    “You can now ask and get questions answered in the language, in the culture, in the sensibility of your country,” Huang said.
    Huang explained how NVIDIA is helping countries across Europe build AI infrastructure.
    Every company will build its own agents, Huang said. To help create those agents, Huang introduced a suite of agentic AI blueprints, including an Agentic AI Safety blueprint for enterprises and governments.
    The new NVIDIA NeMo Agent toolkit and NVIDIA AI Blueprint for building data flywheels further accelerate the development of safe, high-performing AI agents.
    To help deploy these agents, NVIDIA is partnering with European governments, telcos and cloud providers to deploy the DGX Cloud Lepton platform across the region, providing instant access to accelerated computing capacity.
    “One model architecture, one deployment, and you can run it anywhere,” Huang said, adding that Lepton is now integrated with Hugging Face, giving developers direct access to global compute.
    The Industrial Cloud Goes Live
    AI isn’t just virtual. It’s powering physical systems, too, sparking a new industrial revolution.
    “We’re working on industrial AI with one company after another,” Huang said, describing work to build digital twins based on the NVIDIA Omniverse platform with companies across the continent.
    Huang explained that everything he showed during his keynote was “computer simulation, not animation” and that it looks beautiful because “it turns out the world is beautiful, and it turns out math is beautiful.”
    To further this work, Huang announced NVIDIA is launching the world’s first industrial AI cloud — to be built in Germany — to help Europe’s manufacturers simulate, automate and optimize at scale.
    “Soon, everything that moves will be robotic,” Huang said. “And the car is the next one.”
    NVIDIA DRIVE, NVIDIA’s full-stack AV platform, is now in production to accelerate the large-scale deployment of safe, intelligent transportation.
    And to show what’s coming next, Huang was joined on stage by Grek, a pint-sized robot, as he talked about how NVIDIA partnered with DeepMind and Disney to build Newton, the world’s most advanced physics training engine for robotics.
    The Next Wave
    The next wave of AI has begun — and it’s exponential, Huang explained.
    “We have physical robots, and we have information robots. We call them agents,” Huang said. “The technology necessary to teach a robot to manipulate, to simulate — and of course, the manifestation of an incredible robot — is now right in front of us.”
    This new era of AI is being driven by a surge in inference workloads. “The number of people using inference has gone from 8 million to 800 million — 100x in just a couple of years,” Huang said.
    To meet this demand, Huang emphasized the need for a new kind of computer: “We need a special computer designed for thinking, designed for reasoning. And that’s what Blackwell is — a thinking machine.”
    Huang appeared on stage with Grek as he explained how AI is driving advancements in robotics.
    These Blackwell-powered systems will live in a new class of data centers — AI factories — built to generate tokens, the raw material of modern intelligence.
    “These AI factories are going to generate tokens,” Huang said, turning to Grek with a smile. “And these tokens are going to become your food, little Grek.”
    With that, the keynote closed on a bold vision: a future powered by sovereign infrastructure, agentic AI, robotics — and exponential inference — all built in partnership with Europe.
    Watch the NVIDIA GTC Paris keynote from Huang at VivaTech and explore GTC Paris sessions.
  • How to optimize your hybrid waterfall with CPM buckets

    In-app bidding has automated most waterfall optimization, yet developers still manage multiple hybrid waterfalls, each with dozens of manual instances. Naturally, this can be time-consuming and overwhelming to maintain, keeping you from optimizing to perfection and focusing on other opportunities to boost revenue. Rather than analyzing each individual network and checking if instances are available at each price point, breaking down your waterfall into different CPM ranges allows you to visualize the waterfall and easily identify the gaps. Here are some tips on how to use CPM buckets to better optimize your waterfall’s performance.
    What are CPM buckets?
    CPM buckets show you exactly how much revenue and how many impressions you’re getting from each CPM price range, giving you a more granular idea of how different networks are competing in the waterfall. CPM buckets are a feature of real-time pivot reports, available on ironSource LevelPlay.
    Identifying and closing the gaps
    Typically in a waterfall, you can only see each ad network’s average CPM. But this keeps you from seeing ad network distribution across all price points and understanding exactly where ad networks are bidding. Bottom line: you don’t know where in the waterfall you should add a new instance. By separating CPM into buckets, you understand exactly which networks are driving impressions and revenue, and which CPMs aren’t being filled.
    Now how do you do it? As a LevelPlay client, simply use ironSource’s real-time pivot reports: choose the CPM bucket filter option and sort by “average bid price.” From here, you’ll see how your revenue spreads out among CPM ranges, and you’ll start to notice gaps in your bar graph. Every gap in revenue, where revenue is much lower than the neighboring CPM group, indicates an opportunity to optimize your monetization strategy. The buckets can come in smaller or larger increments, so it’s important to compare CPM buckets of the same incremental value.
    Pro tip: To best set up your waterfall, create one tab with the general waterfall, and make sure to look at Revenue and eCPM in the “measures” dropdown. In the “show” section, choose CPM buckets and sort by average bid price. From here, you can mark down any gaps.
    But where do these gaps come from? Gaps in revenue are often due to friction in the waterfall, like not enough instances, instances that aren’t working, or a waterfall setup mistake. But gaps can also be adjusted and fixed. Once you’ve found a gap, you can look at the CPM buckets around it to better understand the context. Let’s say you see a strong instance generating significant revenue in the CPM bucket right below the gap. This instance from this specific ad network has a lot of potential, so it’s worth trying to push it to a higher CPM bucket. In fact, when you look at higher CPM buckets, you don’t see this ad network anywhere else in the waterfall. What a missed opportunity! Try adding another instance of this network higher up in the waterfall. If you’re profiting well at the instance’s current CPM, imagine how much more revenue you could bring at a higher one.
    Pro tip: Focusing on higher areas in the waterfall makes a larger financial impact, leading to bigger increases in ARPDAU.
    Let’s say you decide to add 5 instances of that network to higher CPM buckets. You can use LevelPlay’s quick A/B test to understand if this adjustment boosts your revenue, not just for this gap, but for any and all that you find. Simply compare your existing waterfall against the new waterfall with these 5 higher instances, then implement the one that drives the highest revenue.
    Božo Janković, Head of Ad Monetization at GameBiz Consulting, uses CPM buckets “to understand at which CPMs the bidding networks are filling. From there, I can pinpoint exactly where in the waterfall to add more traditional instances, which creates more competition, especially for the bidding networks, and creates an opportunity for revenue growth.”
    Finding new insights
    You can dig even deeper into your data by filtering by ad source. Before CPM buckets, you were limited to seeing an average eCPM for each bidding network. Maybe you knew an ad source’s average CPM, but the distribution of impressions across the waterfall was a black box. Now, we know exactly which CPMs the bidders are filling.
    “I find ironSource’s CPM buckets feature very insightful and use it daily. It’s an easy way to identify opportunities to optimize the waterfall and earn even more revenue.”

    - Božo Janković, Head of Ad Monetization at GameBiz Consulting
    Understanding your CPM distribution empowers you to not only identify your revenue sources, but also to promote revenue growth. Armed with the knowledge of which buckets some of their stronger bidding networks are performing in, some publishers actively add instances from traditional networks above those ranges. This creates better competition and also helps drive up the bids from the bidders. There’s no need for deep analysis: once you see the gaps, you can quickly understand who’s performing in the lower and higher buckets, and see exactly what’s missing. This way, you won’t miss out on any revenue.
    Learn more about CPM buckets, available exclusively on ironSource LevelPlay.
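To make the gap-hunting step concrete, here is a minimal sketch of the idea in Python. It is purely illustrative and assumes nothing about LevelPlay's actual reports or APIs: the bucket labels, revenue figures, and the 50% drop threshold below are made up, and in practice the bucket data would come from your own pivot-report export.

```python
def find_revenue_gaps(buckets, drop_ratio=0.5):
    """Flag buckets whose revenue is far below both neighbors.

    `buckets` is a list of (label, revenue) pairs sorted by CPM range,
    all using the same bucket increment so comparisons are fair.
    A bucket counts as a gap when its revenue is less than
    `drop_ratio` times the smaller of its two neighbors' revenues.
    """
    gaps = []
    for i in range(1, len(buckets) - 1):
        label, revenue = buckets[i]
        prev_revenue = buckets[i - 1][1]
        next_revenue = buckets[i + 1][1]
        # A gap: revenue much lower than both neighboring CPM groups.
        if revenue < drop_ratio * min(prev_revenue, next_revenue):
            gaps.append(label)
    return gaps


# Hypothetical pivot-report data: revenue per CPM bucket, sorted by CPM.
report = [
    ("0-10", 1200.0),
    ("10-20", 950.0),
    ("20-30", 110.0),   # suspiciously low vs. both neighbors -> a gap
    ("30-40", 800.0),
    ("40-50", 640.0),
]

print(find_revenue_gaps(report))  # -> ['20-30']
```

Tuning `drop_ratio` controls how aggressively low-revenue buckets are flagged; each flagged bucket is a candidate spot for adding a new instance and running an A/B test.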
    UNITY.COM
    How to optimize your hybrid waterfall with CPM buckets
    In-app bidding has automated most waterfall optimization, yet developers still manage multiple hybrid waterfalls, each with dozens of manual instances. Naturally, this can be time-consuming and overwhelming to maintain, keeping you from optimizing to perfection and from focusing on other opportunities to boost revenue. Rather than analyzing each individual network and checking if instances are available at each price point, breaking down your waterfall into different CPM ranges allows you to visualize the waterfall and easily identify the gaps. Here are some tips on how to use CPM buckets to better optimize your waterfall's performance.

    What are CPM buckets?

    CPM buckets show you exactly how much revenue and how many impressions you're getting from each CPM price range, giving you a more granular idea of how different networks are competing in the waterfall. CPM buckets are a feature of real-time pivot reports, available on ironSource LevelPlay.

    Identifying and closing the gaps

    Typically in a waterfall, you can only see each ad network's average CPM. But this keeps you from seeing ad network distribution across all price points and understanding exactly where ad networks are bidding. Bottom line: you don't know where in the waterfall you should add a new instance. By separating CPM into buckets (for example, seeing all the ad networks generating a CPM of $10-$20), you understand exactly which networks are driving impressions and revenue, and which CPMs aren't being filled.

    Now how do you do it? As a LevelPlay client, simply use ironSource's real-time pivot reports: choose the CPM bucket filter option and sort by "average bid price." From here, you'll see how your revenue spreads out among CPM ranges, and you'll start to notice gaps in your bar graph. Every gap in revenue, where revenue is much lower than in the neighboring CPM group, indicates an opportunity to optimize your monetization strategy. The buckets can range from small increments like $1 to larger increments like $10, so it's important to compare CPM buckets of the same incremental value.

    Pro tip: To best set up your waterfall, create one tab with the general waterfall (filter app, OS, ad unit, and geo/geos from a specific group) and make sure to look at Revenue and eCPM in the "measures" dropdown. In the "show" section, choose CPM buckets and sort by average bid price. From here, you can mark down any gaps.

    But where do these gaps come from? Gaps in revenue are often due to friction in the waterfall, like not enough instances, instances that aren't working, or a waterfall setup mistake. But gaps can also be adjusted and fixed. Once you've found a gap, you can look at the CPM buckets around it to better understand the context. Let's say you see a strong instance generating significant revenue in the CPM bucket right below the gap, in the $70-80 group. This instance from this specific ad network has a lot of potential, so it's worth trying to push it to a higher CPM bucket. In fact, when you look at higher CPM buckets, you don't see this ad network anywhere else in the waterfall. What a missed opportunity! Try adding another instance of this network higher up in the waterfall. If you're profiting well with a $70-80 CPM, imagine how much more revenue you could bring in at a $150 CPM.

    Pro tip: Focusing on higher areas in the waterfall makes a larger financial impact, leading to bigger increases in ARPDAU.

    Let's say you decide to add 5 instances of that network to higher CPM buckets. You can use LevelPlay's quick A/B test to understand if this adjustment boosts your revenue, not just for this gap, but for any and all that you find. Simply compare your existing waterfall against the new waterfall with these 5 higher instances, then implement the one that drives the highest revenue.

    Božo Janković, Head of Ad Monetization at GameBiz Consulting, uses CPM buckets "to understand at which CPMs the bidding networks are filling. From there, I can pinpoint exactly where in the waterfall to add more traditional instances - which creates more competition, especially for the bidding networks, and creates an opportunity for revenue growth."

    Finding new insights

    You can dig even deeper into your data by filtering by ad source. Before CPM buckets, you were limited to seeing an average eCPM for each bidding network. Maybe you knew that one ad source had an average CPM of $50, but the distribution of impressions across the waterfall was a black box. Now, we know exactly which CPMs the bidders are filling. "I find ironSource's CPM buckets feature very insightful and use it daily. It's an easy way to identify opportunities to optimize the waterfall and earn even more revenue." - Božo Janković, Head of Ad Monetization at GameBiz Consulting

    Understanding your CPM distribution empowers you not only to identify your revenue sources, but also to promote revenue growth. Armed with the knowledge of which buckets some of their stronger bidding networks are performing in, some publishers actively add instances from traditional networks above those ranges. This creates better competition and also helps drive up the bids from the bidders. There's no need for deep analysis: once you see the gaps, you can quickly understand who's performing in the lower and higher buckets, and see exactly what's missing. This way, you won't leave any revenue on the table. Learn more about CPM buckets, available exclusively to ironSource LevelPlay, here.
  • Nolah Evolution Hybrid Mattress Review: A Jack of All Trades

    The Nolah Evolution Hybrid mattress is a boon for side sleepers and back pain sufferers—plus, it has handles.
    WWW.WIRED.COM
  • CD Projekt RED: TW4 has console first development with a 60fps target; 60fps on Series S will be "extremely challenging"

    DriftingSpirit
    Member

    Oct 25, 2017

    18,563

    They note how they usually start with PC and scale down, but they will be doing it the other way around this time to avoid issues with the console versions.

    4:15 for console focus and 60fps
    38:50 for the Series S comment 

    bsigg
    Member

    Oct 25, 2017

    25,153

    Inside The Witcher 4 Unreal Engine 5 Tech Demo: CD Projekt RED + Epic Deep Dive Interview
    www.resetera.com

     

    Skot
    Member

    Oct 30, 2017

    645

    720p on Series S incoming
     

    Bulby
    Prophet of Truth
    Member

    Oct 29, 2017

    6,006

    Berlin

    I think any series s user will be happy with a beautiful 900p 30fps
     

    Chronos
    Member

    Oct 27, 2017

    1,249

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.
     

    HellofaMouse
    Member

    Oct 27, 2017

    8,551

    i wonder if this'll come out before the gen is over?

    good chance itll be a 2077 situation, cross-gen release with a broken ps6 version 

    logash
    Member

    Oct 27, 2017

    6,526

    This makes sense since they want to have good performance on lower end machines and they mentioned that it was easier to scale up than to scale down. They also mentioned their legacy on PC and how they plan on scaling it up high like they usually do on PC.
     

    KRT
    Member

    Aug 7, 2020

    247

    Series S was a mistake
     

    chris 1515
    Member

    Oct 27, 2017

    7,116

    Barcelona Spain

    The game has raytracing GI and reflections; it will probably be 30 fps 600p-720p on Xbox Series S.
     

    bitcloudrzr
    Member

    May 31, 2018

    21,044

    Bulby said:

    I think any series s user will be happy with a beautiful 900p 30fps


     

    Yuuber
    Member

    Oct 28, 2017

    4,540

    KRT said:

    Series S was a mistake


    Can we stop with these stupid takes? For all we know it sold as much as the Series X, helped several games have better optimization on bigger consoles, and it will definitely help with optimizing newer games for the Nintendo Switch 2. 

    MANTRA
    Member

    Feb 21, 2024

    1,198

    No one who cares about 60fps should be buying a Series S, just make it 30fps.
     

    Roytheone
    Member

    Oct 25, 2017

    6,185

    Chronos said:

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.


    They can just go for 30 fps instead on the Series S. No need for a special deal for that, that's allowed. 

    Matterhorn
    Member

    Feb 6, 2019

    254

    United States

    Hoping for a very nice looking 30fps Switch 2 version.
     

    Universal Acclaim
    Member

    Oct 5, 2024

    2,617

    Maybe off topic, but is 30fps target not so important anymore for 2027 industry-leading graphics? GTA is mainly doing it for design/physics/etc. which is why the game can't be scaled down to 720-900p/60fps?
     

    chris 1515
    Member

    Oct 27, 2017

    7,116

    Barcelona Spain

    Matterhorn said:

    Hoping for a very nice looking 30fps Switch 2 version.


    It will be a full port a few years after, like The Witcher 3; they don't use software Lumen here. I doubt the Switch 2's raytracing capacity is high enough to use the same pipeline to produce the Switch 2 version.

    EDIT: And they probably need to redo all the assets.

    /

    Fortnite doesn't use Nanite and Lumen on Switch 2. 

    Last edited: Yesterday at 4:18 PM

    bitcloudrzr
    Member

    May 31, 2018

    21,044

    Universal Acclaim said:

    Maybe off topic, but is 30fps target not so important anymore for 2027 industry-leading graphics? GTA is mainly doing it for design/physics/etc. which is why the graphics can't be scaled down to 720p/60fps?


    Graphics are the part of the game that can be scaled, it is CPU load that is the more difficult part, although devs have actually made cuts in the latter to increase performance mode fps viability. Even with this focus on 60fps performance modes, they are always going to have room to make a higher fidelity 30fps mode. Specifically with UE5 though, performance has been such a disaster all around and Epic seems to be taking it seriously now.
     

    Greywaren
    Member

    Jul 16, 2019

    13,530

    Spain

    60 fps target is fantastic, I wish it was the norm.
     

    julia crawford
    Took the red AND the blue pills
    Member

    Oct 27, 2017

    40,709

    i am very ok with lower fps on the series s, it is far more palatable than severe resolution drops with upscaling artifacts.
     

    Spoit
    Member

    Oct 28, 2017

    5,599

    Chronos said:

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.


    And yet people keep talking about somehow getting PS6 games to work on the sony portable, which is probably going to be like half as powerful as a PS5, like that won't hold games back
     

    PLASTICA-MAN
    Member

    Oct 26, 2017

    29,563

    chris 1515 said:

    The game has raytracing GI and reflections; it will probably be 30 fps 600p-720p on Xbox Series S.


    There is kind of a misconception of how Lumen and the hybrid RT are handled in UE5 titles. AO is also part of the ray traced pipeline through HW Lumen.
    Just shadows are handled separately from the RT system by using VSM, which in the final look behave quite like RT shadows in shape, similar to how FF16 handled shadows that look like RT ones while they aren't traced.
    UE5 can still trace shadows if they want to push things even further. 

    overthewaves
    Member

    Sep 30, 2020

    1,203

    What about the PS5 handheld?
     

    nullpotential
    Member

    Jun 24, 2024

    87

    KRT said:

    Series S was a mistake


    Consoles were a mistake. 

    GPU
    Member

    Oct 10, 2024

    1,075

    I really dont think Series S/X will be much of a factor by the time this game comes out.
     

    Lashley
    <<Tag Here>>
    Member

    Oct 25, 2017

    65,679

    Just make series s 480p 30fps
     

    pappacone
    Member

    Jan 10, 2020

    4,076

    Greywaren said:

    60 fps target is fantastic, I wish it was the norm.


    It pretty much is
     

    Super
    Studied the Buster Sword
    Member

    Jan 29, 2022

    13,601

    I hope they can pull 60 FPS off in the full game.
     

    Theorry
    Member

    Oct 27, 2017

    69,045

    "target"

    Uh huh. We know how that is gonna go. 

    Jakartalado
    Member

    Oct 27, 2017

    2,818

    São Paulo, Brazil

    Skot said:

    720p on Series S incoming


    If the PS5 is internally at 720p up to 900p, I seriously doubt that. 

    Revoltoftheunique
    Member

    Jan 23, 2022

    2,312

    It will be unstable 60fps with lots of stuttering.
     

    defaltoption
    Plug in a controller and enter the Konami code
    The Fallen

    Oct 27, 2017

    12,485

    Austin

    KRT said:

    Series S was a mistake


    With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid.
     

    Horns
    Member

    Dec 7, 2018

    3,423

    I hope Microsoft drops the requirement for Series S by the time this comes out.
     

    chris 1515
    Member

    Oct 27, 2017

    7,116

    Barcelona Spain

    PLASTICA-MAN said:

    There is kind of a misconception of how Lumen and the hybrid RT are handled in UE5 titles. AO is also part of the ray traced pipeline through HW Lumen.

    Just shadows are handled separately from the RT system by using VSM, which in the final look behave quite like RT shadows in shape, similar to how FF16 handled shadows that look like RT ones while they aren't traced.
    UE5 can still trace shadows if they want to push things even further.

    Yes, indirect shadows are handled by hardware Lumen. But in the end it doesn't change my comment. I think the game will be 600-720p at 30 fps on Series S. 

    bitcloudrzr
    Member

    May 31, 2018

    21,044

    Spoit said:

    And yet people keep talking about somehow getting PS6 games to work on the sony portable, which is probably going to be like half as powerful as a PS5, like that won't hold games back


    Has it been confirmed that Sony is going to have release requirements like the XS?
     

    Commander Shepherd
    Member

    Jan 27, 2023

    173

    Anyone remember when no load screens was talked about for Witcher 3?
     

    chris 1515
    Member

    Oct 27, 2017

    7,116

    Barcelona Spain

    No, this is probably different from what most games are doing: here the main focus is the 60 fps mode, and afterwards they can create a balanced 30 fps mode.

    This is not the other way around. 

    stanman
    Member

    Feb 13, 2025

    235

    defaltoption said:

    With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid.


    And your mistake is comparing a PC graphics card to a console. 

    PLASTICA-MAN
    Member

    Oct 26, 2017

    29,563

    chris 1515 said:

    Yes, indirect shadows are handled by hardware Lumen. But in the end it doesn't change my comment. I think the game will be 600-720p at 30 fps on Series S.


    Yes. I am sure the Series S will have the HW solution, but probably at 30 FPS. It would be a miracle if they achieve 60 FPS. 

    ArchedThunder
    Uncle Beerus
    Member

    Oct 25, 2017

    21,278

    chris 1515 said:

    It will be a full port a few years after, like The Witcher 3; they don't use software Lumen here. I doubt the Switch 2's raytracing capacity is high enough to use the same pipeline to produce the Switch 2 version.

    EDIT: And they probably need to redo all the assets.

    /

    Fortnite doesn't use Nanite and Lumen on Switch 2.

    Fortnite not using Lumen or Nanite at launch doesn't mean they can't run well on Switch 2. It's a launch port and they prioritized clean IQ and 60fps. I wouldn't be surprised to see them added later. Also it's not like the ray tracing in a Witcher 3 port has to match PS5, there's a lot of scaling back that can be done with ray tracing without ripping out the kitchen sink. Software lumen is also likely to be an option on P.
     

    jroc74
    Member

    Oct 27, 2017

    34,465

    Interesting times ahead....

    bitcloudrzr said:

    Has it been confirmed that Sony is going to have release requirements like the XS?


    You know good n well everything about this rumor has been confirmed.

    /S 

    Derbel McDillet
    ▲ Legend ▲
    Member

    Nov 23, 2022

    25,250

    Chronos said:

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.


    How does this sound like a Cyberpunk issue? They didn't say they can't get it to work on the S.
     

    defaltoption
    Plug in a controller and enter the Konami code
    The Fallen

    Oct 27, 2017

    12,485

    Austin

    stanman said:

    And your mistake is comparing a PC graphics card to a console.


     

    reksveks
    Member

    May 17, 2022

    7,628

    Horns said:

    I hope Microsoft drops the requirement for Series S by the time this comes out.


    why? dev can make it 30 fps on series s and 60 fps on series x if needed.

    if they aren't or don't have to drop it for gta vi, they probably ain't dropping it for tw4. 

    chris 1515
    Member

    Oct 27, 2017

    7,116

    Barcelona Spain

    defaltoption said:

    With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid.


    No, the consoles won't hold back your 5090, because the game is created with hardware Lumen, RT reflections, virtual shadow maps, and Nanite plus Nanite vegetation in mind. Maybe Nanite characters too in the final version?

    If the game was made with software Lumen as the base, it would have held back your 5090...

    Your PC will have much better IQ, framerate, and better raytracing with MegaLights and better raytracing settings in general. 

    bitcloudrzr
    Member

    May 31, 2018

    21,044

    jroc74 said:

    Interesting times ahead....

    Your know good n well everything about this rumor has been confirmed.

    /S

    Sony is like the opposite of a platform holder "forcing" adoption, for better or worse.
     

    defaltoption
    Plug in a controller and enter the Konami code
    The Fallen

    Oct 27, 2017

    12,485

    Austin

    chris 1515 said:

    No, the consoles won't hold back your 5090, because the game is created with hardware Lumen, RT reflections, virtual shadow maps, and Nanite plus Nanite vegetation in mind. Maybe Nanite characters too in the final version?

    If the game was made with software Lumen as the base, it would have held back your 5090...

    Your PC will have much better IQ, framerate, and better raytracing with MegaLights and better raytracing settings in general.

    Exactly, the series s is not a "mistake" or holding any version of the game on console or even PC back, that's what I'm saying to the person I replied to, its stupid to say that.
     

    cursed beef
    Member

    Jan 3, 2021

    998

    Have to imagine MS will lift the Series S parity clause when the next consoles launch. Which will be before/around the time W4 hits, right?
     

    Alvis
    Saw the truth behind the copied door
    Member

    Oct 25, 2017

    12,270

    EU

    Chronos said:

    This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.


    ? they said that 60 FPS on Series S is challenging, not the act of releasing the game there at all. The game can simply run at 30 FPS on Series S if they can't pull off 60 FPS. Or have a 40 FPS mode in lieu of 60 FPS.

    The CPU and storage speed differences between last gen and current gen were gigantic. This isn't even remotely close to a comparable situation. 

    defaltoption
    Plug in a controller and enter the Konami code
    The Fallen

    Oct 27, 2017

    12,485

    Austin

    Misquoted post
     

    jroc74
    Member

    Oct 27, 2017

    34,465

    defaltoption said:

    With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid.


    Ah yes, clearly 5090 cards are the vast majority of the minimum requirements for PC games.

    How can anyone say this with a straight face anymore when there are now PC games running on a Steam Deck.

    At least ppl saying that about the Series S are comparing it to other consoles.

    That said, it is interesting they are focusing on consoles first, then PC. 
I wouldn't be surprised to see them added later. Also it's not like the ray tracing in a Witcher 3 port has to match PS5, there's a lot of scaling back that can be done with ray tracing without ripping out the kitchen sink. Software lumen is also likely to be an option on P.   jroc74 Member Oct 27, 2017 34,465 Interesting times ahead.... bitcloudrzr said: Has it been confirmed that Sony is going to have release requirements like the XS? Click to expand... Click to shrink... Your know good n well everything about this rumor has been confirmed. /S  Derbel McDillet ▲ Legend ▲ Member Nov 23, 2022 25,250 Chronos said: This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation. Click to expand... Click to shrink... How does this sound like a Cyberpunk issue? They didn't say they can't get it to work on the S.   defaltoption Plug in a controller and enter the Konami code The Fallen Oct 27, 2017 12,485 Austin stanman said: And your mistake is comparing a PC graphics card to a console. Click to expand... Click to shrink...   reksveks Member May 17, 2022 7,628 Horns said: I hope Microsoft drops the requirement for Series S by the time this comes out. Click to expand... Click to shrink... why? dev can make it 30 fps on series s and 60 fps on series x if needed. if they aren't or don't have to drop it for gta vi, they probably ain't dropping it for tw4.  chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain defaltoption said: With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid. Click to expand... Click to shrink... No the consoles won't hold back your 5090 because the game is created with hardware lumen, RT reflection, virtual shadows maps and Nanite plus Nanite vegetation in minds. 
Maybe Nanite character too in final version? If the game was made with software lumen as the base it would have holding back your 5090... Your PC will have much better IQ, framerate and better raytracing with Megalightand better raytracing settings in general.  bitcloudrzr Member May 31, 2018 21,044 jroc74 said: Interesting times ahead.... Your know good n well everything about this rumor has been confirmed. /S Click to expand... Click to shrink... Sony is like the opposite of a platform holder "forcing" adoption, for better or worse.   defaltoption Plug in a controller and enter the Konami code The Fallen Oct 27, 2017 12,485 Austin chris 1515 said: No the consoles won't hold back yout 5090 because the game is created with hardware lumen, RT reflection, virtual shadows maps and Nanite plus Nanite vegetation in minds. Maybe Nanite character too in final version? If the game was made with software lumen as the base it would have holding back your 5090... Your PC will have much better IQ, framerate and better raytracing with Megalightand better raytracing settings in general. Click to expand... Click to shrink... Exactly, the series s is not a "mistake" or holding any version of the game on console or even PC back, that's what I'm saying to the person I replied to, its stupid to say that.   cursed beef Member Jan 3, 2021 998 Have to imagine MS will lift the Series S parity clause when the next consoles launch. Which will be before/around the time W4 hits, right?   Alvis Saw the truth behind the copied door Member Oct 25, 2017 12,270 EU Chronos said: This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation. Click to expand... Click to shrink... ? they said that 60 FPS on Series S is challenging, not the act of releasing the game there at all. The game can simply run at 30 FPS on Series S if they can't pull off 60 FPS. 
Or have a 40 FPS mode in lieu of 60 FPS. The CPU and storage speed differences between last gen and current gen were gigantic. This isn't even remotely close to a comparable situation.  defaltoption Plug in a controller and enter the Konami code The Fallen Oct 27, 2017 12,485 Austin misqoute post   jroc74 Member Oct 27, 2017 34,465 defaltoption said: With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid. Click to expand... Click to shrink... Ah yes, clearly 5090 cards are the vast majority of the minimum requirements for PC games. How can anyone say this with a straight face anymore when there are now PC games running on a Steam Deck. At least ppl saying that about the Series S are comparing it to other consoles. That said, it is interesting they are focusing on consoles first, then PC.  #projekt #red #tw4 #has #console
    WWW.RESETERA.COM
    CD Projekt RED: TW4 has console first development with a 60fps target; 60fps on Series S will be "extremely challenging"
DriftingSpirit Member Oct 25, 2017 18,563 They note how they usually start with PC and scale down, but they will be doing it the other way around this time to avoid issues with the console versions. 4:15 for console focus and 60fps, 38:50 for the Series S comment  bsigg Member Oct 25, 2017 25,153 [DF] Inside The Witcher 4 Unreal Engine 5 Tech Demo: CD Projekt RED + Epic Deep Dive Interview https://www.youtube.com/watch?v=OplYN2MMI4Q   Skot Member Oct 30, 2017 645 720p on Series S incoming   Bulby Prophet of Truth Member Oct 29, 2017 6,006 Berlin I think any Series S user will be happy with a beautiful 900p 30fps   Chronos Member Oct 27, 2017 1,249 This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation.   HellofaMouse Member Oct 27, 2017 8,551 i wonder if this'll come out before the gen is over? good chance it'll be a 2077 situation, cross-gen release with a broken ps6 version  logash Member Oct 27, 2017 6,526 This makes sense since they want to have good performance on lower end machines and they mentioned that it was easier to scale up than to scale down. They also mentioned their legacy on PC and how they plan on scaling it up high like they usually do on PC.   KRT Member Aug 7, 2020 247 Series S was a mistake   chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain The game has ray tracing GI and reflections; it will probably be 30 fps at 600p-720p on Xbox Series S.   bitcloudrzr Member May 31, 2018 21,044 Bulby said: I think any Series S user will be happy with a beautiful 900p 30fps   Yuuber Member Oct 28, 2017 4,540 KRT said: Series S was a mistake Can we stop with these stupid takes? 
For all we know it sold as much as Series X, helped several games have better optimization on bigger consoles and it will definitely help with optimizing newer games for the Nintendo Switch 2.  MANTRA Member Feb 21, 2024 1,198 No one who cares about 60fps should be buying a Series S, just make it 30fps.   Roytheone Member Oct 25, 2017 6,185 Chronos said: This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation. They can just go for 30 fps instead on the Series S. No need for a special deal for that, that's allowed.  Matterhorn Member Feb 6, 2019 254 United States Hoping for a very nice looking 30fps Switch 2 version.   Universal Acclaim Member Oct 5, 2024 2,617 Maybe off topic, but is a 30fps target not so important anymore for 2027 industry-leading graphics? GTA is mainly doing it for design/physics/etc., which is why the game can't be scaled down to 720-900p/60fps?   chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain Matterhorn said: Hoping for a very nice looking 30fps Switch 2 version. It will be a full port a few years after, like The Witcher 3; they don't use software lumen here. I doubt the Switch 2 ray tracing capability is high enough to use the same pipeline to produce the Switch 2 version. EDIT: And they probably need to redo all the assets. https://www.reddit.com/r/FortNiteBR/comments/1l4a1o4/fortnite_on_the_switch_2_looks_great_these_low/ Fortnite doesn't use Nanite and Lumen on Switch 2.  Last edited: Yesterday at 4:18 PM bitcloudrzr Member May 31, 2018 21,044 Universal Acclaim said: Maybe off topic, but is a 30fps target not so important anymore for 2027 industry-leading graphics? GTA is mainly doing it for design/physics/etc., which is why the graphics can't be scaled down to 720p/60fps? 
Graphics are the part of the game that can be scaled; it is CPU load that is the more difficult part, although devs have actually made cuts in the latter to increase performance mode fps viability. Even with this focus on 60fps performance modes, they are always going to have room to make a higher fidelity 30fps mode. Specifically with UE5 though, performance has been such a disaster all around and Epic seems to be taking it seriously now.   Greywaren Member Jul 16, 2019 13,530 Spain 60 fps target is fantastic, I wish it was the norm.   julia crawford Took the red AND the blue pills Member Oct 27, 2017 40,709 i am very ok with lower fps on the series s, it is far more palatable than severe resolution drops with upscaling artifacts.   Spoit Member Oct 28, 2017 5,599 Chronos said: This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation. And yet people keep talking about somehow getting PS6 games to work on the sony portable, which is probably going to be like half as powerful as a PS5, like that won't hold games back   PLASTICA-MAN Member Oct 26, 2017 29,563 chris 1515 said: The game has ray tracing GI and reflections; it will probably be 30 fps at 600p-720p on Xbox Series S. There is kind of a misconception of how Lumen and the hybrid RT is handled in UE5 titles. AO is also part of the ray traced pipeline through the HW Lumen too. Just shadows are handled separately from the RT system by using VSM, which in the final look behave quite like RT shadows in shape, similar to how FF16 handled shadows looking like RT ones while they aren't traced. UE5 can still trace shadows if they want to push things even further.  overthewaves Member Sep 30, 2020 1,203 What about the PS5 handheld?   nullpotential Member Jun 24, 2024 87 KRT said: Series S was a mistake 
Consoles were a mistake.  GPU Member Oct 10, 2024 1,075 I really dont think Series S/X will be much of a factor by the time this game comes out.   Lashley <<Tag Here>> Member Oct 25, 2017 65,679 Just make series s 480p 30fps   pappacone Member Jan 10, 2020 4,076 Greywaren said: 60 fps target is fantastic, I wish it was the norm. It pretty much is   Super Studied the Buster Sword Member Jan 29, 2022 13,601 I hope they can pull 60 FPS off in the full game.   Theorry Member Oct 27, 2017 69,045 "target" Uh huh. We know how that is gonna go.  Jakartalado Member Oct 27, 2017 2,818 São Paulo, Brazil Skot said: 720p on Series S incoming If the PS5 is internally at 720p up to 900p, I seriously doubt that.  Revoltoftheunique Member Jan 23, 2022 2,312 It will be unstable 60fps with lots of stuttering.   defaltoption Plug in a controller and enter the Konami code The Fallen Oct 27, 2017 12,485 Austin KRT said: Series S was a mistake With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid.   Horns Member Dec 7, 2018 3,423 I hope Microsoft drops the requirement for Series S by the time this comes out.   chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain PLASTICA-MAN said: There is kind of a misconception of how Lumen and the hybrid RT is handled in UE5 titles. AO is also part of the ray traced pipeline through the HW Lumen too. Just shadows are handled separately from the RT system by using VSM, which in the final look behave quite like RT shadows in shape, similar to how FF16 handled shadows looking like RT ones while they aren't traced. UE5 can still trace shadows if they want to push things even further. Yes, indirect shadows are handled by hardware Lumen. 
But at the end it doesn't change my comment. I think the game will be 600-720p at 30 fps on Series S.  bitcloudrzr Member May 31, 2018 21,044 Spoit said: And yet people keep talking about somehow getting PS6 games to work on the sony portable, which is probably going to be like half as powerful as a PS5, like that won't hold games back Has it been confirmed that Sony is going to have release requirements like the XS?   Commander Shepherd Member Jan 27, 2023 173 Anyone remember when no load screens was talked about for Witcher 3?   chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain No, this is probably different from what most games are doing: here the main focus is the 60 fps mode, and after that they can create balanced (40 fps) and 30 fps modes. This is not the other way around.  stanman Member Feb 13, 2025 235 defaltoption said: With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid. And your mistake is comparing a PC graphics card to a console.  PLASTICA-MAN Member Oct 26, 2017 29,563 chris 1515 said: Yes, indirect shadows are handled by hardware Lumen. But at the end it doesn't change my comment. I think the game will be 600-720p at 30 fps on Series S. Yes. I am sure Series S will have the HW solution but probably at 30 FPS. That would be a miracle if they achieve 60 FPS.  ArchedThunder Uncle Beerus Member Oct 25, 2017 21,278 chris 1515 said: It will be a full port a few years after, like The Witcher 3; they don't use software lumen here. I doubt the Switch 2 ray tracing capability is high enough to use the same pipeline to produce the Switch 2 version. EDIT: And they probably need to redo all the assets. 
https://www.reddit.com/r/FortNiteBR/comments/1l4a1o4/fortnite_on_the_switch_2_looks_great_these_low/ Fortnite doesn't use Nanite and Lumen on Switch 2. Fortnite not using Lumen or Nanite at launch doesn't mean they can't run well on Switch 2. It's a launch port and they prioritized clean IQ and 60fps. I wouldn't be surprised to see them added later. Also it's not like the ray tracing in a Witcher 3 port has to match PS5, there's a lot of scaling back that can be done with ray tracing without ripping out the kitchen sink. Software lumen is also likely to be an option on P.   jroc74 Member Oct 27, 2017 34,465 Interesting times ahead.... bitcloudrzr said: Has it been confirmed that Sony is going to have release requirements like the XS? You know good n well everything about this rumor has been confirmed. /S  Derbel McDillet ▲ Legend ▲ Member Nov 23, 2022 25,250 Chronos said: This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation. How does this sound like a Cyberpunk issue? They didn't say they can't get it to work on the S.   defaltoption Plug in a controller and enter the Konami code The Fallen Oct 27, 2017 12,485 Austin stanman said: And your mistake is comparing a PC graphics card to a console.   reksveks Member May 17, 2022 7,628 Horns said: I hope Microsoft drops the requirement for Series S by the time this comes out. why? dev can make it 30 fps on series s and 60 fps on series x if needed. if they aren't or don't have to drop it for gta vi, they probably ain't dropping it for tw4.  chris 1515 Member Oct 27, 2017 7,116 Barcelona Spain defaltoption said: With that same attitude in this case you could say consoles are the mistake. 
You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid. No, the consoles won't hold back your 5090 because the game is created with hardware Lumen, RT reflections, virtual shadow maps and Nanite plus Nanite vegetation in mind. Maybe Nanite characters too in the final version? If the game was made with software Lumen as the base it would have held back your 5090... Your PC will have much better IQ, framerate and better ray tracing with MegaLights (direct ray-traced shadows with tons of light sources) and better ray tracing settings in general.  bitcloudrzr Member May 31, 2018 21,044 jroc74 said: Interesting times ahead.... You know good n well everything about this rumor has been confirmed. /S Sony is like the opposite of a platform holder "forcing" adoption, for better or worse.   defaltoption Plug in a controller and enter the Konami code The Fallen Oct 27, 2017 12,485 Austin chris 1515 said: No, the consoles won't hold back your 5090 because the game is created with hardware Lumen, RT reflections, virtual shadow maps and Nanite plus Nanite vegetation in mind. Maybe Nanite characters too in the final version? If the game was made with software Lumen as the base it would have held back your 5090... Your PC will have much better IQ, framerate and better ray tracing with MegaLights (direct ray-traced shadows) and better ray tracing settings in general. Exactly, the series s is not a "mistake" or holding any version of the game on console or even PC back, that's what I'm saying to the person I replied to, its stupid to say that.   cursed beef Member Jan 3, 2021 998 Have to imagine MS will lift the Series S parity clause when the next consoles launch. Which will be before/around the time W4 hits, right?   
Alvis Saw the truth behind the copied door Member Oct 25, 2017 12,270 EU Chronos said: This better not be a Cyberpunk situation all over again. If they can't get it to work on S, then they may just need to abandon that console. Work out a deal with MS or wait for their next generation. ? they said that 60 FPS on Series S is challenging, not the act of releasing the game there at all. The game can simply run at 30 FPS on Series S if they can't pull off 60 FPS. Or have a 40 FPS mode in lieu of 60 FPS. The CPU and storage speed differences between last gen and current gen were gigantic. This isn't even remotely close to a comparable situation.  defaltoption Plug in a controller and enter the Konami code The Fallen Oct 27, 2017 12,485 Austin misquoted post   jroc74 Member Oct 27, 2017 34,465 defaltoption said: With that same attitude in this case you could say consoles are the mistake. You on your Series X or PS5 Pro are holding my 5090 back. Not so fun of a take anymore. Thats why its stupid. Ah yes, clearly 5090 cards are the vast majority of the minimum requirements for PC games. How can anyone say this with a straight face anymore when there are now PC games running on a Steam Deck. At least ppl saying that about the Series S are comparing it to other consoles. That said, it is interesting they are focusing on consoles first, then PC. 
  • OThink-R1: A Dual-Mode Reasoning Framework to Cut Redundant Computation in LLMs

    The Inefficiency of Static Chain-of-Thought Reasoning in LRMs
    Recent large reasoning models (LRMs) achieve top performance by using detailed chain-of-thought (CoT) reasoning to solve complex tasks. However, many of the simple tasks they handle could be solved by smaller models with fewer tokens, making such elaborate reasoning unnecessary. This echoes human thinking: we use fast, intuitive responses for easy problems and slower, analytical thinking for complex ones. While LRMs mimic slow, logical reasoning, they generate significantly longer outputs, increasing computational cost. Current methods for reducing reasoning steps lack flexibility, limiting models to a single fixed reasoning style, so there is a growing need for adaptive reasoning that adjusts effort to task difficulty. 
    Limitations of Existing Training-Based and Training-Free Approaches
    Recent research on improving reasoning efficiency in LRMs can be categorized into two main areas: training-based and training-free methods. Training strategies often use reinforcement learning or fine-tuning to limit token usage or adjust reasoning depth, but they tend to follow fixed patterns without flexibility. Training-free approaches utilize prompt engineering or pattern detection to shorten outputs during inference; however, they also lack adaptability. More recent work focuses on variable-length reasoning, where models adjust reasoning depth based on task complexity. Others study “overthinking,” where models over-reason unnecessarily. However, few methods enable dynamic switching between quick and thorough reasoning—something this paper addresses directly. 
    Introducing OThink-R1: Dynamic Fast/Slow Reasoning Framework
    Researchers from Zhejiang University and OPPO have developed OThink-R1, a new approach that enables LRMs to intelligently switch between fast and slow thinking, much like humans do. By analyzing reasoning patterns, they identified which steps are essential and which are redundant. With the help of another model acting as a judge, they trained LRMs to adapt their reasoning style to task complexity. Their method reduces unnecessary reasoning by over 23% without losing accuracy. Using a dual-reference loss and curated fine-tuning datasets, OThink-R1 outperforms previous models in both efficiency and performance across math and question-answering tasks. 
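    A minimal sketch of that judge-driven data construction, assuming a `judge` callable that wraps the external LLM judge; the function name, field names, and `<think>` tag format here are illustrative, not the paper's exact template:

```python
def build_training_pair(example, judge):
    """Label one reasoning trace for slow (full CoT) or fast (answer-only)
    supervision, based on an external LLM judge's verdict."""
    q, cot, ans = example["question"], example["reasoning"], example["answer"]
    if judge(q, cot, ans):
        # Judge says the chain-of-thought was essential: keep it in the target.
        return {"prompt": q, "target": f"<think>{cot}</think>{ans}", "mode": "slow"}
    # Judge says the reasoning was redundant: train on the answer alone.
    return {"prompt": q, "target": ans, "mode": "fast"}
```

    Running this over a corpus yields a mixed dataset of pruned (fast) and full (slow) targets, which is then used for fine-tuning.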
    System Architecture: Reasoning Pruning and Dual-Reference Optimization
    The OThink-R1 framework helps LRMs dynamically switch between fast and slow thinking. First, it identifies when LRMs include unnecessary reasoning, like overexplaining or double-checking, versus when detailed steps are truly essential. Using this, it builds a curated training dataset by pruning redundant reasoning and retaining valuable logic. Then, during fine-tuning, a special loss function balances both reasoning styles. This dual-reference loss compares the model’s outputs with both fast and slow thinking variants, encouraging flexibility. As a result, OThink-R1 can adaptively choose the most efficient reasoning path for each problem while preserving accuracy and logical depth. 
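    As a toy illustration of the dual-reference idea (an assumed form for exposition, not the paper's exact objective), the loss below combines cross-entropy on the target token with a KL pull toward whichever reference distribution, fast or slow, the model is currently closer to:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions over the same support."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def dual_reference_loss(p_model, p_fast, p_slow, target_idx, beta=0.1):
    """Cross-entropy on the target token, plus a KL regularizer toward the
    nearer of the two reference (fast / slow) next-token distributions."""
    ce = -math.log(p_model[target_idx] + 1e-12)
    reg = min(kl_divergence(p_model, p_fast), kl_divergence(p_model, p_slow))
    return ce + beta * reg
```

    Taking the minimum over the two references is what leaves the model free to imitate either style per example, instead of being pulled toward one fixed reasoning mode.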

    Empirical Evaluation and Comparative Performance
    The OThink-R1 model was tested on simpler QA and math tasks to evaluate its ability to switch between fast and slow reasoning. On datasets such as OpenBookQA, CommonsenseQA, ASDIV, and GSM8K, the model generated fewer tokens while maintaining or improving accuracy. Compared to baselines such as NoThinking and DualFormer, OThink-R1 achieved a better balance between efficiency and effectiveness. Ablation studies confirmed the importance of pruning, the KL constraint, and the LLM judge in achieving optimal results. A case study illustrated that unnecessary reasoning can lead to overthinking and reduced accuracy, highlighting OThink-R1’s strength in adaptive reasoning. 
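    The headline efficiency figure is simply a relative token reduction against the always-slow baseline; with made-up per-example token counts for illustration:

```python
def redundancy_reduction(baseline_tokens, othink_tokens):
    """Fraction of generated tokens saved relative to the full-CoT baseline."""
    return 1.0 - sum(othink_tokens) / sum(baseline_tokens)

# Illustrative counts only, not the paper's measurements:
saved = redundancy_reduction([320, 410, 150], [250, 300, 128])  # ≈ 0.23
```

    A "reduces redundancy by 23%" claim thus means roughly 23% fewer generated tokens at equal or better accuracy.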

    Conclusion: Towards Scalable and Efficient Hybrid Reasoning Systems
    In conclusion, OThink-R1 is a large reasoning model that adaptively switches between fast and slow thinking modes to improve both efficiency and performance. It addresses the issue of unnecessarily complex reasoning in large models by analyzing and classifying reasoning steps as either essential or redundant. By pruning the redundant ones while maintaining logical accuracy, OThink-R1 reduces unnecessary computation. It also introduces a dual-reference KL-divergence loss to strengthen hybrid reasoning. Tested on math and QA tasks, it cuts down reasoning redundancy by 23% without sacrificing accuracy, showing promise for building more adaptive, scalable, and efficient AI reasoning systems in the future. 

    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.
    #othinkr1 #dualmode #reasoning #framework #cut
    OThink-R1: A Dual-Mode Reasoning Framework to Cut Redundant Computation in LLMs
    The Inefficiency of Static Chain-of-Thought Reasoning in LRMs Recent LRMs achieve top performance by using detailed CoT reasoning to solve complex tasks. However, many simple tasks they handle could be solved by smaller models with fewer tokens, making such elaborate reasoning unnecessary. This echoes human thinking, where we use fast, intuitive responses for easy problems and slower, analytical thinking for complex ones. While LRMs mimic slow, logical reasoning, they generate significantly longer outputs, thereby increasing computational cost. Current methods for reducing reasoning steps lack flexibility, limiting models to a single fixed reasoning style. There is a growing need for adaptive reasoning that adjusts effort according to task difficulty.  Limitations of Existing Training-Based and Training-Free Approaches Recent research on improving reasoning efficiency in LRMs can be categorized into two main areas: training-based and training-free methods. Training strategies often use reinforcement learning or fine-tuning to limit token usage or adjust reasoning depth, but they tend to follow fixed patterns without flexibility. Training-free approaches utilize prompt engineering or pattern detection to shorten outputs during inference; however, they also lack adaptability. More recent work focuses on variable-length reasoning, where models adjust reasoning depth based on task complexity. Others study “overthinking,” where models over-reason unnecessarily. However, few methods enable dynamic switching between quick and thorough reasoning—something this paper addresses directly.  Introducing OThink-R1: Dynamic Fast/Slow Reasoning Framework Researchers from Zhejiang University and OPPO have developed OThink-R1, a new approach that enables LRMs to switch between fast and slow thinking smartly, much like humans do. By analyzing reasoning patterns, they identified which steps are essential and which are redundant. 
With help from another model acting as a judge, they trained LRMs to adapt their reasoning style based on task complexity. Their method reduces unnecessary reasoning by over 23% without losing accuracy. Using a loss function and fine-tuned datasets, OThink-R1 outperforms previous models in both efficiency and performance on various math and question-answering tasks.  System Architecture: Reasoning Pruning and Dual-Reference Optimization The OThink-R1 framework helps LRMs dynamically switch between fast and slow thinking. First, it identifies when LRMs include unnecessary reasoning, like overexplaining or double-checking, versus when detailed steps are truly essential. Using this, it builds a curated training dataset by pruning redundant reasoning and retaining valuable logic. Then, during fine-tuning, a special loss function balances both reasoning styles. This dual-reference loss compares the model’s outputs with both fast and slow thinking variants, encouraging flexibility. As a result, OThink-R1 can adaptively choose the most efficient reasoning path for each problem while preserving accuracy and logical depth.  Empirical Evaluation and Comparative Performance The OThink-R1 model was tested on simpler QA and math tasks to evaluate its ability to switch between fast and slow reasoning. Using datasets like OpenBookQA, CommonsenseQA, ASDIV, and GSM8K, the model demonstrated strong performance, generating fewer tokens while maintaining or improving accuracy. Compared to baselines such as NoThinking and DualFormer, OThink-R1 demonstrated a better balance between efficiency and effectiveness. Ablation studies confirmed the importance of pruning, KL constraints, and LLM-Judge in achieving optimal results. A case study illustrated that unnecessary reasoning can lead to overthinking and reduced accuracy, highlighting OThink-R1’s strength in adaptive reasoning.  
    Conclusion: Towards Scalable and Efficient Hybrid Reasoning Systems

    OThink-R1 is a large reasoning model that adaptively switches between fast- and slow-thinking modes to improve both efficiency and performance. It addresses unnecessarily complex reasoning in large models by classifying reasoning steps as either essential or redundant, pruning the redundant ones while maintaining logical accuracy, and introducing a dual-reference KL-divergence loss to strengthen hybrid reasoning. Tested on math and QA tasks, it cuts reasoning redundancy by 23% without sacrificing accuracy, showing promise for building more adaptive, scalable, and efficient AI reasoning systems.

    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.
    WWW.MARKTECHPOST.COM
    OThink-R1: A Dual-Mode Reasoning Framework to Cut Redundant Computation in LLMs
  • For June’s Patch Tuesday, 68 fixes — and two zero-day flaws

    Microsoft offered up a fairly light Patch Tuesday release this month, with 68 patches to Microsoft Windows and Microsoft Office. There were no updates for Exchange or SQL Server and just two minor patches for Microsoft Edge. That said, two zero-day vulnerabilities (CVE-2025-33073 and CVE-2025-33053) have led to a “Patch Now” recommendation for both Windows and Office. (Developers can follow their usual release cadence with updates to Microsoft .NET and Visual Studio.) To help navigate these changes, the team from Readiness has provided a useful infographic detailing the risks involved when deploying the latest updates.

    Known issues

    Microsoft released a limited number of known issues for June, with a product-focused issue and a very minor display concern:

    Microsoft Excel: This is a rare product-level entry in the “known issues” category — an advisory that “square brackets” or [] are not supported in Excel filenames. An error is generated, advising the user to remove the offending characters.
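Teams that generate Excel files programmatically can screen names for the offending characters before saving. This pre-flight check is a hypothetical illustration, not a Microsoft-provided API:

```python
def excel_filename_ok(filename):
    """Return True if the name avoids the square brackets Excel rejects."""
    return not any(ch in "[]" for ch in filename)

# excel_filename_ok("report_Q2.xlsx")  -> True
# excel_filename_ok("report[Q2].xlsx") -> False
```

Running such a check in an export pipeline avoids the in-product error dialog entirely.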

    Windows 10: There are reports of blurry or unclear CJK (Chinese, Japanese, Korean) text when displayed at 96 DPI (100% scaling) in Chromium-based browsers such as Microsoft Edge and Google Chrome. This is a limited resource issue, as the font resolution in Windows 10 does not fully match the high-level resolution of the Noto font. Microsoft recommends changing the display scaling to 125% or 150% to improve clarity.

    Major revisions and mitigations

    Microsoft might have won an award for the shortest time between releasing an update and a revision with:

    CVE-2025-33073: Windows SMB Client Elevation of Privilege. Microsoft addressed a vulnerability where improper access control in Windows SMB allows an attacker to elevate privileges over a network. This patch was revised on the same day as its initial release (and has since been revised again for documentation purposes).

    Windows lifecycle and enforcement updates

    Microsoft did not release any enforcement updates for June.

    Each month, the Readiness team analyzes Microsoft’s latest updates and provides technically sound, actionable testing plans. While June’s release includes no stated functional changes, many foundational components across authentication, storage, networking, and user experience have been updated.

    For this testing guide, we grouped Microsoft’s updates by Windows feature and then accompanied the section with prescriptive test actions and rationale to help prioritize enterprise efforts.

    Core OS and UI compatibility

    Microsoft updated several core kernel drivers affecting Windows as a whole. This is a low-level system change and carries a high risk of compatibility and system issues. In addition, core Microsoft print libraries are included in the update, requiring thorough print testing, starting with the following recommendations:

    Run print operations from 32-bit applications on 64-bit Windows environments.

    Use different print drivers and configurations (e.g., local, networked).

    Observe printing from older productivity apps and virtual environments.

    Remote desktop and network connectivity

    This update could affect the reliability of remote access: broken DHCP-to-DNS integration can block device onboarding, and NAT misbehavior can disrupt VPNs or site-to-site routing configurations. We recommend performing the following tests:

    Create and reconnect Remote Desktop (RDP) sessions under varying network conditions.

    Confirm that DHCP-assigned IP addresses are correctly registered with DNS in AD-integrated environments.

    Test modifying NAT and routing settings in RRAS configurations and ensure that changes persist across reboots.
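The DHCP-to-DNS registration check above can be partially automated by confirming that forward and reverse lookups agree for a host. This stdlib sketch is an illustrative assumption (the hostnames would come from your AD environment), not part of Microsoft's guidance:

```python
import socket

def dns_registration_check(hostname):
    """Resolve a hostname and confirm its address maps back to a name.

    Returns (ip, reverse_name); either lookup failing raises a socket
    error, which would indicate a DNS registration problem.
    """
    ip = socket.gethostbyname(hostname)
    reverse_name, _aliases, _addrs = socket.gethostbyaddr(ip)
    return ip, reverse_name

# Example (hypothetical host name):
# dns_registration_check("workstation01.corp.example.com")
```

Run against a sample of newly onboarded, DHCP-assigned machines after patching; a raised error or an unexpected reverse name flags the integration issue.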

    Filesystem, SMB and storage

    Updates to the core Windows storage libraries affect nearly every command related to Microsoft Storage Spaces. A minor misalignment here can result in degraded clusters, orphaned volumes, or data loss in a failover scenario. These are high-priority components in modern data center and hybrid cloud infrastructure, with the following storage-related testing recommendations:

    Access file shares using server names, FQDNs, and IP addresses.

    Enable and validate encrypted and compressed file-share operations between clients and servers.

    Run tests that create, open, and read from system log files using various file and storage configurations.

    Validate core cluster storage management tasks, including creating and managing storage pools, tiers, and volumes.

    Test disk addition/removal, failover behaviors, and resiliency settings.

    Run system-level storage diagnostics across active and passive nodes in the cluster.

    Windows installer and recovery

    Microsoft delivered another update to the Windows Installer (MSI) application infrastructure. Broken or regressed MSI package handling disrupts app deployment pipelines and puts core business applications at risk. We suggest the following tests for the latest changes to the MSI installer, Windows Recovery, and Microsoft’s Virtualization-Based Security (VBS):

    Perform installation, repair, and uninstallation of MSI packages using standard enterprise deployment tools (e.g., Intune).

    Validate restore point behavior for points older than 60 days under varying virtualization-based security (VBS) settings.

    Check both client and server behaviors for allowed or blocked restores.
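The MSI install/repair/uninstall cycle can be driven from a script using the standard `msiexec` verbs (/i install, /fa repair, /x uninstall). The sketch below only builds the command lines without executing them; the package path and log locations are illustrative assumptions:

```python
def msiexec_commands(msi_path, log_dir="C:\\logs"):
    """Build quiet install, repair, and uninstall command lines for one MSI."""
    base = ["msiexec", "/qn", "/norestart"]  # quiet, no forced reboot
    return {
        "install":   base + ["/i", msi_path, "/l*v", log_dir + "\\install.log"],
        "repair":    base + ["/fa", msi_path, "/l*v", log_dir + "\\repair.log"],
        "uninstall": base + ["/x", msi_path, "/l*v", log_dir + "\\uninstall.log"],
    }

# Each command list can be handed to subprocess.run() on a Windows test host.
cmds = msiexec_commands("C:\\pkgs\\app.msi")
```

Capturing the verbose logs (`/l*v`) per phase makes regressions in MSI handling easy to diff between pre- and post-patch runs.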

    We highly recommend prioritizing printer testing this month, followed by remote desktop testing, and verifying that your core business applications install and uninstall as expected.

    Each month, we break down the update cycle into product families (as defined by Microsoft) with the following basic groupings:

    Browsers (Microsoft IE and Edge);

    Microsoft Windows (both desktop and server);

    Microsoft Office;

    Microsoft Exchange and SQL Server;

    Microsoft Developer Tools (Visual Studio and .NET);

    And Adobe (if you get this far).

    Browsers

    Microsoft delivered a very minor series of updates to Microsoft Edge. The browser receives two Chrome patches (CVE-2025-5068 and CVE-2025-5419), both rated important. These low-profile changes can be added to your standard release calendar.

    Microsoft Windows

    Microsoft released five critical patches and (a smaller than usual) 40 patches rated important. This month, the five critical Windows patches cover the following desktop and server vulnerabilities:

    Missing release of memory after effective lifetime in Windows Cryptographic Services (WCS) allows an unauthorized attacker to execute code over a network.

    Use after free in Windows Remote Desktop Services allows an unauthorized attacker to execute code over a network.

    Use after free in Windows KDC Proxy Service (KPSSVC) allows an unauthorized attacker to execute code over a network.

    Use of uninitialized resources in Windows Netlogon allows an unauthorized attacker to elevate privileges over a network.

    Unfortunately, CVE-2025-33073 has been reported as publicly disclosed, while CVE-2025-33053 has been reported as exploited. Given these two zero-days, the Readiness team recommends a “Patch Now” release schedule for your Windows updates.

    Microsoft Office

    Microsoft released five critical updates and a further 13 rated important for Office. The critical patches deal with memory-related and “use after free” memory-allocation issues affecting the entire platform. Due to the number and severity of these issues, we recommend a “Patch Now” schedule for Office for this Patch Tuesday release.

    Microsoft Exchange and SQL Server

    There are no updates for either Microsoft Exchange or SQL Server this month. 

    Developer tools

    There were only three low-level updates (product-focused and rated important) released, affecting .NET and Visual Studio. Add these updates to your standard developer release schedule.

    Adobe (and third-party updates)

    Adobe has released (but Microsoft has not co-published) a single update to Adobe Acrobat (APSB25-57). There were two other non-Microsoft updates affecting the Chromium platform, which were covered in the Browsers section above.
    WWW.COMPUTERWORLD.COM