• Apple WWDC 2025: News and analysis

    Apple’s Worldwide Developers Conference 2025 saw a range of announcements that offered a glimpse into the future of Apple’s software design and artificial intelligence (AI) strategy, highlighted by a new design language called Liquid Glass and by Apple Intelligence news.

    Liquid Glass is designed to add translucency and dynamic movement to Apple’s user interface across iPhones, iPads, Macs, Apple Watches, and Apple TVs. The overhaul aims to make interactions with elements like buttons and sidebars adapt contextually.

    However, the real news of WWDC could be what we didn’t see. Analysts had high expectations for Apple’s AI strategy, and while Apple Intelligence was talked about, many market watchers reported that it lacked the innovation that has come from Google’s and Microsoft’s generative AI (genAI) rollouts.

    The question of whether Apple is playing catch-up lingered at WWDC 2025, and comments from Apple execs about delays to a significant AI overhaul for Siri were apparently interpreted as a setback by investors, prompting a negative reaction and a drop in the stock price.

    Follow this page for Computerworld’s coverage of WWDC25.

    WWDC25 news and analysis

    Apple’s AI Revolution: Insights from WWDC

    June 13, 2025: At Apple’s big developer event, developers were served a feast of AI-related updates, including APIs that let them use Apple Intelligence in their apps and ChatGPT augmentation from within Xcode. As a development environment, Apple has secured its future, with Macs among the most computationally performant systems you can affordably buy for the job.

    For developers, Apple’s tools get a lot better for AI

    June 12, 2025: Apple announced one important AI update at WWDC this week: the introduction of support for third-party large language models (LLMs) such as ChatGPT from within Xcode. It’s a big step that should benefit developers by accelerating app development.

    WWDC 25: What’s new for Apple and the enterprise?

    June 11, 2025: Beyond its new Liquid Glass UI and other major improvements across its operating systems, Apple introduced a horde of changes, tweaks, and enhancements for IT admins at WWDC 2025.

    What we know so far about Apple’s Liquid Glass UI

    June 10, 2025: With Liquid Glass, Apple has tried to bring together the optical quality of glass and the fluidity of liquid to emphasize transparency and lighting as you use your devices.

    WWDC first look: How Apple is improving its ecosystem

    June 9, 2025: While the new user interface design Apple execs highlighted at this year’s Worldwide Developers Conference (WWDC) might have been a bit of an eye-candy distraction, Apple’s enterprise users were not forgotten.

    Apple infuses AI into the Vision Pro

    June 8, 2025: Sluggish sales of Apple’s Vision Pro mixed reality headset haven’t dampened the company’s enthusiasm for advancing the device’s 3D computing experience, which now incorporates AI to deliver richer context and experiences.

    WWDC: Apple is about to unlock international business

    June 4, 2025: One of the more exciting pre-WWDC rumors is that Apple is preparing to make language problems go away by implementing focused artificial intelligence in Messages, which will apparently be able to translate incoming and outgoing messages on the fly. 
  • NVIDIA helps Germany lead Europe’s AI manufacturing race

    Germany and NVIDIA are building possibly the most ambitious European tech project of the decade: the continent’s first industrial AI cloud.

    NVIDIA has been on a European tour over the past month, with CEO Jensen Huang charming audiences at London Tech Week before dazzling the crowds at Paris’s VivaTech. But it was his meeting with German Chancellor Friedrich Merz that might prove the most consequential stop. The resulting partnership between NVIDIA and Deutsche Telekom isn’t just another corporate handshake; it’s potentially a turning point for European technological sovereignty.

    An “AI factory” (as they’re calling it) will be created with a focus on manufacturing, which is hardly surprising given Germany’s renowned industrial heritage. The facility aims to give European industrial players the computational firepower to revolutionise everything from design to robotics.

    “In the era of AI, every manufacturer needs two factories: one for making things, and one for creating the intelligence that powers them,” said Huang. “By building Europe’s first industrial AI infrastructure, we’re enabling the region’s leading industrial companies to advance simulation-first, AI-driven manufacturing.”

    It’s rare to hear such urgency from a telecoms CEO, but Deutsche Telekom’s Timotheus Höttges added: “Europe’s technological future needs a sprint, not a stroll. We must seize the opportunities of artificial intelligence now, revolutionise our industry, and secure a leading position in the global technology competition. Our economic success depends on quick decisions and collaborative innovations.”

    The first phase alone will deploy 10,000 NVIDIA Blackwell GPUs spread across various high-performance systems. That makes this Germany’s largest AI deployment ever, and a statement that the country isn’t content to watch from the sidelines as AI transforms global industry.

    A Deloitte study recently highlighted the critical importance of AI technology development to Germany’s future competitiveness, particularly noting the need for expanded data centre capacity. When you consider that demand is expected to triple within just five years, this investment seems less like ambition and more like necessity.

    Robots teaching robots

    One of the early adopters is NEURA Robotics, a German firm that specialises in cognitive robotics. It is using this computational muscle to power something called the Neuraverse, essentially a connected network where robots can learn from each other. Think of it as a robotic hive mind for skills ranging from precision welding to household ironing, with each machine contributing its learnings to a collective intelligence.

    “Physical AI is the electricity of the future—it will power every machine on the planet,” said David Reger, Founder and CEO of NEURA Robotics. “Through this initiative, we’re helping build the sovereign infrastructure Europe needs to lead in intelligent robotics and stay in control of its future.”

    The implications of this AI project for manufacturing in Germany could be profound. This isn’t just about making existing factories slightly more efficient; it’s about reimagining what manufacturing can be in an age of intelligent machines.

    AI for more than just Germany’s industrial titans

    What’s particularly promising about this project is its potential reach beyond Germany’s industrial titans. The famed Mittelstand – the network of specialised small and medium-sized businesses that forms the backbone of the German economy – stands to benefit. These companies often lack the resources to build their own AI infrastructure but possess the specialised knowledge that makes them perfect candidates for AI-enhanced innovation. Democratising access to cutting-edge AI could help preserve their competitive edge in a challenging global market.

    Academic and research institutions will also gain access, potentially accelerating innovation across numerous fields. The approximately 900 Germany-based startups in NVIDIA’s Inception program will be eligible to use these resources, potentially unleashing a wave of entrepreneurial AI applications.

    However impressive this massive project is, it’s viewed merely as a stepping stone towards something even more ambitious: Europe’s AI gigafactory. This planned 100,000-GPU initiative backed by the EU and Germany won’t come online until 2027, but it represents Europe’s determination to carve out its own technological future.

    As other European telecom providers follow suit with their own AI infrastructure projects, we may be witnessing the beginning of a concerted effort to establish technological sovereignty across the continent. For a region that has often found itself caught between American tech dominance and Chinese ambitions, building indigenous AI capability represents more than economic opportunity. Whether this bold project in Germany will succeed remains to be seen, but one thing is clear: Europe is no longer content to be a passive consumer of AI technology developed elsewhere.

    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.
  • IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029

    IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029

    By John P. Mello Jr.
    June 11, 2025 5:00 AM PT

    IBM unveiled its plan to build IBM Quantum Starling, shown in this rendering. Starling is expected to be the first large-scale, fault-tolerant quantum system. (Image Credit: IBM)

    IBM revealed Tuesday its roadmap for bringing a large-scale, fault-tolerant quantum computer, IBM Quantum Starling, online by 2029, which is significantly earlier than many technologists thought possible.
    The company predicts that when its new Starling computer is up and running, it will be capable of performing 20,000 times more operations than today’s quantum computers — a computational state so vast it would require the memory of more than a quindecillion (10⁴⁸) of the world’s most powerful supercomputers to represent.
    “IBM is charting the next frontier in quantum computing,” Big Blue CEO Arvind Krishna said in a statement. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.”
    IBM’s plan to deliver a fault-tolerant quantum system by 2029 is ambitious but not implausible, especially given the rapid pace of its quantum roadmap and past milestones, observed Ensar Seker, CISO at SOCRadar, a threat intelligence company in Newark, Del.
    “They’ve consistently met or exceeded their qubit scaling goals, and their emphasis on modularity and error correction indicates they’re tackling the right challenges,” he told TechNewsWorld. “However, moving from thousands to millions of physical qubits with sufficient fidelity remains a steep climb.”
    A qubit is the fundamental unit of information in quantum computing, capable of representing a zero, a one, or both simultaneously due to quantum superposition. In practice, fault-tolerant quantum computers use clusters of physical qubits working together to form a logical qubit — a more stable unit designed to store quantum information and correct errors in real time.
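    To get a rough feel for why grouping physical qubits into a logical qubit helps, the back-of-envelope scaling law often quoted for surface-code-style error correction says the logical error rate falls off sharply with the code distance once physical error rates are below a threshold. The sketch below uses generic, assumed numbers for illustration; none of the constants are figures from IBM's Starling design.

```python
# Illustrative only: a textbook-style scaling estimate for error suppression when
# physical qubits are grouped into one logical qubit. The prefactor, threshold, and
# d^2 overhead are assumptions for illustration, not IBM Starling parameters.
def logical_error_rate(p_phys: float, p_threshold: float, distance: int) -> float:
    """Approximate logical error rate for a code of odd distance d."""
    return 0.1 * (p_phys / p_threshold) ** ((distance + 1) // 2)

p_phys, p_threshold = 1e-3, 1e-2  # assumed physical error rate and code threshold
for d in (3, 5, 7, 9):
    n_physical = d * d            # surface-code-style overhead: roughly d^2 physical qubits
    print(f"distance {d}: ~{n_physical} physical qubits per logical qubit, "
          f"logical error rate ≈ {logical_error_rate(p_phys, p_threshold, d):.1e}")
```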
    Realistic Roadmap
    Luke Yang, an equity analyst with Morningstar Research Services in Chicago, believes IBM’s roadmap is realistic. “The exact scale and error correction performance might still change between now and 2029, but overall, the goal is reasonable,” he told TechNewsWorld.
    “Given its reliability and professionalism, IBM’s bold claim should be taken seriously,” said Enrique Solano, co-CEO and co-founder of Kipu Quantum, a quantum algorithm company with offices in Berlin and Karlsruhe, Germany.
    “Of course, it may also fail, especially when considering the unpredictability of hardware complexities involved,” he told TechNewsWorld, “but companies like IBM exist for such challenges, and we should all be positively impressed by its current achievements and promised technological roadmap.”
    Tim Hollebeek, vice president of industry standards at DigiCert, a global digital security company, added: “IBM is a leader in this area, and not normally a company that hypes their news. This is a fast-moving industry, and success is certainly possible.”
    “IBM is attempting to do something that no one has ever done before and will almost certainly run into challenges,” he told TechNewsWorld, “but at this point, it is largely an engineering scaling exercise, not a research project.”
    “IBM has demonstrated consistent progress, has committed $30 billion over five years to quantum computing, and the timeline is within the realm of technical feasibility,” noted John Young, COO of Quantum eMotion, a developer of quantum random number generator technology, in Saint-Laurent, Quebec, Canada.
    “That said,” he told TechNewsWorld, “fault-tolerant in a practical, industrial sense is a very high bar.”
    Solving the Quantum Error Correction Puzzle
    To make a quantum computer fault-tolerant, errors need to be corrected so large workloads can be run without faults. In a quantum computer, errors are reduced by clustering physical qubits to form logical qubits, which have lower error rates than the underlying physical qubits.
    “Error correction is a challenge,” Young said. “Logical qubits require thousands of physical qubits to function reliably. That’s a massive scaling issue.”
    IBM explained in its announcement that creating increasing numbers of logical qubits capable of executing quantum circuits with as few physical qubits as possible is critical to quantum computing at scale. Until today, a clear path to building such a fault-tolerant system without unrealistic engineering overhead has not been published.

    Alternative and previous gold-standard error-correcting codes present fundamental engineering challenges, IBM continued. To scale, they would require an unfeasible number of physical qubits to create enough logical qubits to perform complex operations — necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be implemented beyond small-scale experiments and devices.
    In two research papers released with its roadmap, IBM detailed how it will overcome the challenges of building the large-scale, fault-tolerant architecture needed for a quantum computer.
    One paper outlines the use of quantum low-density parity check (qLDPC) codes to reduce physical qubit overhead. The other describes methods for decoding errors in real time using conventional computing.
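    As a purely classical analogy of what “parity checks” and “real-time decoding” mean, the sketch below builds a small sparse parity-check matrix, flips one bit of a valid codeword, and computes the syndrome a decoder would use to locate the error. This is not IBM’s qLDPC scheme (quantum codes measure stabilizers rather than bits), but the bookkeeping is analogous; the matrix and codeword are toy values.

```python
# Classical analogy only: syndrome computation against a sparse (LDPC-style)
# parity-check matrix. The matrix and codeword are toy values chosen for illustration.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],   # each row is one parity check over a few bits
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

codeword = np.array([1, 0, 1, 1, 1, 0])  # satisfies every check: H @ codeword % 2 == 0
received = codeword.copy()
received[2] ^= 1                          # simulate a single bit-flip error

syndrome = H @ received % 2               # nonzero entries flag the violated checks
print("syndrome:", syndrome)              # a decoder uses this pattern to infer which bit flipped
```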
    According to IBM, a practical fault-tolerant quantum architecture must:

    Suppress enough errors for useful algorithms to succeed
    Prepare and measure logical qubits during computation
    Apply universal instructions to logical qubits
    Decode measurements from logical qubits in real time and guide subsequent operations
    Scale modularly across hundreds or thousands of logical qubits
    Be efficient enough to run meaningful algorithms using realistic energy and infrastructure resources

    Aside from the technological challenges that quantum computer makers are facing, there may also be some market challenges. “Locating suitable use cases for quantum computers could be the biggest challenge,” Morningstar’s Yang maintained.
    “Only certain computing workloads, such as random circuit sampling [RCS], can fully unleash the computing power of quantum computers and show their advantage over the traditional supercomputers we have now,” he said. “However, workloads like RCS are not very commercially useful, and we believe commercial relevance is one of the key factors that determine the total market size for quantum computers.”
    Q-Day Approaching Faster Than Expected
    For years now, organizations have been told they need to prepare for “Q-Day” — the day a quantum computer will be able to crack all the encryption they use to keep their data secure. This IBM announcement suggests the window for action to protect data may be closing faster than many anticipated.
    “This absolutely adds urgency and credibility to the security expert guidance on post-quantum encryption being factored into their planning now,” said Dave Krauthamer, field CTO of QuSecure, maker of quantum-safe security solutions, in San Mateo, Calif.
    “IBM’s move to create a large-scale fault-tolerant quantum computer by 2029 is indicative of the timeline collapsing,” he told TechNewsWorld. “A fault-tolerant quantum computer of this magnitude could be well on the path to crack asymmetric ciphers sooner than anyone thinks.”

    “Security leaders need to take everything connected to post-quantum encryption as a serious measure and work it into their security plans now — not later,” he said.
    Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., pointed out that IBM is just the latest in a surge of quantum companies announcing quickly forthcoming computational breakthroughs within a few years.
    “It leads to the question of whether the U.S. government’s original PQC [post-quantum cryptography] preparation date of 2030 is still a safe date,” he told TechNewsWorld.
    “It’s starting to feel a lot more risky for any company to wait until 2030 to be prepared against quantum attacks. It also flies in the face of the latest cybersecurity EO [executive order] that relaxed PQC preparation rules as compared to Biden’s last EO PQC standard order, which told U.S. agencies to transition to PQC ASAP.”
    “Most US companies are doing zero to prepare for Q-Day attacks,” he declared. “The latest executive order seems to tell U.S. agencies — and indirectly, all U.S. businesses — that they have more time to prepare. It’s going to cause even more agencies and businesses to be less prepared during a time when it seems multiple quantum computing companies are making significant progress.”
    “It definitely feels that something is going to give soon,” he said, “and if I were a betting man, and I am, I would bet that most U.S. companies are going to be unprepared for Q-Day on the day Q-Day becomes a reality.”

    John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.

  • OThink-R1: A Dual-Mode Reasoning Framework to Cut Redundant Computation in LLMs

    The Inefficiency of Static Chain-of-Thought Reasoning in LRMs
    Recent large reasoning models (LRMs) achieve top performance by using detailed chain-of-thought (CoT) reasoning to solve complex tasks. However, many of the simple tasks they handle could be solved by smaller models with fewer tokens, making such elaborate reasoning unnecessary. This echoes human thinking: we use fast, intuitive responses for easy problems and slower, analytical thinking for complex ones. While LRMs mimic slow, logical reasoning, they generate significantly longer outputs, thereby increasing computational cost. Current methods for reducing reasoning steps lack flexibility, limiting models to a single fixed reasoning style. There is a growing need for adaptive reasoning that adjusts effort according to task difficulty.
    Limitations of Existing Training-Based and Training-Free Approaches
    Recent research on improving reasoning efficiency in LRMs can be categorized into two main areas: training-based and training-free methods. Training strategies often use reinforcement learning or fine-tuning to limit token usage or adjust reasoning depth, but they tend to follow fixed patterns without flexibility. Training-free approaches utilize prompt engineering or pattern detection to shorten outputs during inference; however, they also lack adaptability. More recent work focuses on variable-length reasoning, where models adjust reasoning depth based on task complexity. Others study “overthinking,” where models over-reason unnecessarily. However, few methods enable dynamic switching between quick and thorough reasoning—something this paper addresses directly. 
    Introducing OThink-R1: Dynamic Fast/Slow Reasoning Framework
    Researchers from Zhejiang University and OPPO have developed OThink-R1, a new approach that enables LRMs to switch intelligently between fast and slow thinking, much like humans do. By analyzing reasoning patterns, they identified which steps are essential and which are redundant. With the help of another model acting as a judge, they trained LRMs to adapt their reasoning style based on task complexity. Their method reduces unnecessary reasoning by over 23% without losing accuracy. Using a dual-reference loss function and curated fine-tuning datasets, OThink-R1 outperforms previous models in both efficiency and performance on various math and question-answering tasks.
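    The paper's exact judge prompt and pruning rules are not reproduced in this summary, but the idea of using a second model to label reasoning steps can be sketched roughly as follows; the `judge` callable, its prompt, and the toy example are illustrative assumptions, not the authors' protocol.

```python
# Minimal sketch of judge-based pruning of a chain-of-thought trace.
# `judge` stands in for any LLM call returning "essential" or "redundant";
# the prompt wording and toy example are illustrative, not the paper's.
from typing import Callable, List


def prune_trace(question: str, steps: List[str],
                judge: Callable[[str], str]) -> List[str]:
    kept = []
    for step in steps:
        prompt = (f"Question: {question}\n"
                  f"Reasoning step: {step}\n"
                  "Is this step essential to reach the answer? "
                  "Answer 'essential' or 'redundant'.")
        if judge(prompt).strip().lower().startswith("essential"):
            kept.append(step)
    return kept


# Usage with a trivial stand-in judge that keeps steps containing an equation.
toy_judge = lambda p: "essential" if "=" in p else "redundant"
print(prune_trace("What is 2 + 3?",
                  ["Add 2 and 3.", "2 + 3 = 5.", "Double-check: yes."],
                  toy_judge))
```

    The pruned traces then become the fine-tuning targets described in the architecture section below, so the model learns that short answers suffice for easy problems while longer derivations are reserved for harder ones.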
    System Architecture: Reasoning Pruning and Dual-Reference Optimization
    The OThink-R1 framework helps LRMs dynamically switch between fast and slow thinking. First, it identifies when LRMs include unnecessary reasoning, like overexplaining or double-checking, versus when detailed steps are truly essential. Using this, it builds a curated training dataset by pruning redundant reasoning and retaining valuable logic. Then, during fine-tuning, a special loss function balances both reasoning styles. This dual-reference loss compares the model’s outputs with both fast and slow thinking variants, encouraging flexibility. As a result, OThink-R1 can adaptively choose the most efficient reasoning path for each problem while preserving accuracy and logical depth. 
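    As a rough illustration of what a dual-reference objective can look like, here is a minimal PyTorch sketch that pulls a student model's token distribution toward both a fast-thinking and a slow-thinking reference; the weighting scheme, tensor shapes, and how this term combines with the main loss are assumptions, not the paper's exact formulation.

```python
# Minimal PyTorch sketch of a dual-reference KL term: the student's token
# distribution is regularized toward both a fast-thinking and a slow-thinking
# reference model. The mixing weight alpha is illustrative.
import torch
import torch.nn.functional as F


def dual_reference_kl(student_logits, fast_ref_logits, slow_ref_logits,
                      alpha: float = 0.5):
    """All tensors have shape (batch, seq_len, vocab); returns a scalar."""
    log_p = F.log_softmax(student_logits, dim=-1)
    q_fast = F.softmax(fast_ref_logits, dim=-1)
    q_slow = F.softmax(slow_ref_logits, dim=-1)
    kl_fast = F.kl_div(log_p, q_fast, reduction="batchmean")
    kl_slow = F.kl_div(log_p, q_slow, reduction="batchmean")
    return alpha * kl_fast + (1.0 - alpha) * kl_slow


# Tiny usage example with random logits (batch=2, seq_len=4, vocab=10).
s = torch.randn(2, 4, 10)
print(dual_reference_kl(s, torch.randn(2, 4, 10), torch.randn(2, 4, 10)))

# In training, a term like this would typically be added to the usual
# cross-entropy on the pruned target, e.g. loss = ce + beta * kl_term.
```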

    Empirical Evaluation and Comparative Performance
    The OThink-R1 model was tested on simpler QA and math tasks to evaluate its ability to switch between fast and slow reasoning. Using datasets like OpenBookQA, CommonsenseQA, ASDIV, and GSM8K, the model demonstrated strong performance, generating fewer tokens while maintaining or improving accuracy. Compared to baselines such as NoThinking and DualFormer, OThink-R1 struck a better balance between efficiency and effectiveness. Ablation studies confirmed the importance of pruning, the KL constraints, and the LLM judge in achieving optimal results. A case study illustrated that unnecessary reasoning can lead to overthinking and reduced accuracy, highlighting OThink-R1’s strength in adaptive reasoning.

    Conclusion: Towards Scalable and Efficient Hybrid Reasoning Systems
    In conclusion, OThink-R1 is a framework that lets large reasoning models adaptively switch between fast and slow thinking modes to improve both efficiency and performance. It addresses the issue of unnecessarily complex reasoning in large models by analyzing and classifying reasoning steps as either essential or redundant. By pruning the redundant ones while maintaining logical accuracy, OThink-R1 reduces unnecessary computation. It also introduces a dual-reference KL-divergence loss to strengthen hybrid reasoning. Tested on math and QA tasks, it cuts reasoning redundancy by 23% without sacrificing accuracy, showing promise for more adaptive, scalable, and efficient AI reasoning systems.

    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.
  • Graduate Student Develops an A.I.-Based Approach to Restore Time-Damaged Artwork to Its Former Glory

    The method could help bring countless old paintings, currently stored in the back rooms of galleries with limited conservation budgets, to light

    Scans of the painting retouched with a new technique during various stages in the process. On the right is the restored painting with the applied laminate mask.
    Courtesy of the researchers via MIT

    In a contest for jobs requiring the most patience, art restoration might take first place. Traditionally, conservators restore paintings by recreating the artwork’s exact colors to fill in the damage, one spot at a time. Even with the help of X-ray imaging and pigment analyses, several parts of the expensive process, such as the cleaning and retouching, are done by hand, as noted by Artnet’s Jo Lawson-Tancred.
    Now, a mechanical engineering graduate student at MIT has developed an artificial intelligence-based approach that can achieve a faithful restoration in just hours—instead of months of work.
    In a paper published Wednesday in the journal Nature, Alex Kachkine describes a new method that applies digital restorations to paintings by placing a thin film on top. If the approach becomes widespread, it could make art restoration more accessible and help bring countless damaged paintings, currently stored in the back rooms of galleries with limited conservation budgets, back to light.
    The new technique “is a restoration process that saves a lot of time and money, while also being reversible, which some people feel is really important to preserving the underlying character of a piece,” Kachkine tells Nature’s Amanda Heidt.

    [Video: Meet the engineer who invented an AI-powered way to restore art]

    While filling in damaged areas of a painting would seem like a logical solution to many people, direct retouching raises ethical concerns for modern conservators. That’s because an artwork’s damage is part of its history, and retouching might detract from the painter’s original vision. “For example, instead of removing flaking paint and retouching the painting, a conservator might try to fix the loose paint particles to their original places,” writes Hartmut Kutzke, a chemist at the University of Oslo’s Museum of Cultural History, for Nature News and Views. If retouching is absolutely necessary, he adds, it should be reversible.
    As such, some institutions have started restoring artwork virtually and presenting the restoration next to the untouched, physical version. Many art lovers might argue, however, that a digital restoration printed out or displayed on a screen doesn’t quite compare to seeing the original painting in its full glory.
    That’s where Kachkine, who is also an art collector and amateur conservator, comes in. The MIT student has developed a way to apply digital restorations onto a damaged painting. In short, the approach involves using pre-existing A.I. tools to create a digital version of what the freshly painted artwork would have looked like. Based on this reconstruction, Kachkine’s new software assembles a map of the retouches, and their exact colors, necessary to fill the gaps present in the painting today.
    The map is then printed onto two layers of thin, transparent polymer film—one with colored retouches and one with the same pattern in white—that attach to the painting with conventional varnish. This “mask” aligns the retouches with the gaps while leaving the rest of the artwork visible.
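    To make the pipeline more concrete, here is a rough numpy sketch of the general idea: compare the damaged scan with the AI reconstruction to locate losses, then pull retouch colors from the reconstruction at those spots. The array names, threshold, and function are illustrative assumptions, not Kachkine's actual software.

```python
# Illustrative sketch: derive a loss mask and retouch color map by comparing
# a scan of the damaged painting to a digital reconstruction. The threshold
# and stand-in images are assumptions, not the published method's values.
import numpy as np


def retouch_map(damaged: np.ndarray, reconstruction: np.ndarray,
                threshold: float = 0.15):
    """Both inputs are (H, W, 3) float arrays with values in [0, 1]."""
    diff = np.abs(damaged - reconstruction).mean(axis=-1)  # per-pixel color error
    loss_mask = diff > threshold                           # True where paint is missing
    color_layer = np.zeros_like(reconstruction)
    color_layer[loss_mask] = reconstruction[loss_mask]     # colors to print on the film
    white_layer = loss_mask.astype(float)                  # matching white backing pattern
    return loss_mask, color_layer, white_layer


# Tiny usage example with random stand-in images and a simulated paint loss.
rng = np.random.default_rng(0)
recon = rng.random((64, 64, 3))
damaged = recon.copy()
damaged[20:30, 20:30] = 0.0
mask, colors, white = retouch_map(damaged, recon)
print(mask.sum(), "pixels flagged for retouching")
```

    In the published workflow, the color and white patterns are what get printed onto the two transparent films that are then varnished over the original, which is why their alignment matters so much, as Kachkine explains below.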
    “In order to fully reproduce color, you need both white and color ink to get the full spectrum,” Kachkine explains in an MIT statement. “If those two layers are misaligned, that’s very easy to see. So, I also developed a few computational tools, based on what we know of human color perception, to determine how small of a region we can practically align and restore.”
    The method’s magic lies in the fact that the mask is removable, and the digital file provides a record of the modifications for future conservators to study.
    Kachkine demonstrated the approach on a 15th-century oil painting in dire need of restoration, by a Dutch artist whose name is now unknown. The retouches were generated by matching the surrounding color, replicating similar patterns visible elsewhere in the painting or copying the artist’s style in other paintings, per Nature News and Views. Overall, the painting’s 5,612 damaged regions were filled with 57,314 different colors in 3.5 hours, roughly 66 times faster than traditional methods would likely have taken.

    [Video: Overview of Physically-Applied Digital Restoration]

    “It followed years of effort to try to get the method working,” Kachkine tells the Guardian’s Ian Sample. “There was a fair bit of relief that finally this method was able to reconstruct and stitch together the surviving parts of the painting.”
    The new process still poses ethical considerations, such as whether the applied film disrupts the viewing experience or whether A.I.-generated corrections to the painting are accurate. Additionally, Kutzke writes for Nature News and Views that the effect of the varnish on the painting should be studied more deeply.
    Still, Kachkine says this technique could help address the large number of damaged artworks that live in storage rooms. “This approach grants greatly increased foresight and flexibility to conservators,” per the study, “enabling the restoration of countless damaged paintings deemed unworthy of high conservation budgets.”
