• IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029


    By John P. Mello Jr.
    June 11, 2025 5:00 AM PT

IBM unveiled its plan to build IBM Quantum Starling, shown in this rendering. Starling is expected to be the first large-scale, fault-tolerant quantum system. (Image Credit: IBM)

    IBM revealed Tuesday its roadmap for bringing a large-scale, fault-tolerant quantum computer, IBM Quantum Starling, online by 2029, which is significantly earlier than many technologists thought possible.
The company predicts that when its new Starling computer is up and running, it will be capable of performing 20,000 times more operations than today’s quantum computers — a computational state so vast it would require the memory of more than a quindecillion (10⁴⁸) of the world’s most powerful supercomputers to represent.
    “IBM is charting the next frontier in quantum computing,” Big Blue CEO Arvind Krishna said in a statement. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.”
    IBM’s plan to deliver a fault-tolerant quantum system by 2029 is ambitious but not implausible, especially given the rapid pace of its quantum roadmap and past milestones, observed Ensar Seker, CISO at SOCRadar, a threat intelligence company in Newark, Del.
    “They’ve consistently met or exceeded their qubit scaling goals, and their emphasis on modularity and error correction indicates they’re tackling the right challenges,” he told TechNewsWorld. “However, moving from thousands to millions of physical qubits with sufficient fidelity remains a steep climb.”
    A qubit is the fundamental unit of information in quantum computing, capable of representing a zero, a one, or both simultaneously due to quantum superposition. In practice, fault-tolerant quantum computers use clusters of physical qubits working together to form a logical qubit — a more stable unit designed to store quantum information and correct errors in real time.
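For readers who want to see superposition concretely, here is a minimal sketch, written in plain NumPy rather than any IBM toolkit, that represents a single qubit as a two-component state vector and applies the Born rule to get measurement probabilities for |0⟩, |1⟩, and an equal superposition. The names and values here are purely illustrative.

```python
import numpy as np

# Basis states for a single qubit, written as 2-component complex vectors.
ket0 = np.array([1, 0], dtype=complex)   # |0>
ket1 = np.array([0, 1], dtype=complex)   # |1>

# An equal superposition (|0> + |1>) / sqrt(2): neither 0 nor 1 until measured.
plus = (ket0 + ket1) / np.sqrt(2)

def measurement_probabilities(state):
    """Born rule: the probability of each outcome is the squared amplitude."""
    return np.abs(state) ** 2

for name, state in [("|0>", ket0), ("|1>", ket1), ("(|0>+|1>)/sqrt(2)", plus)]:
    p0, p1 = measurement_probabilities(state)
    print(f"{name:20s} -> P(0) = {p0:.2f}, P(1) = {p1:.2f}")
```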
    Realistic Roadmap
    Luke Yang, an equity analyst with Morningstar Research Services in Chicago, believes IBM’s roadmap is realistic. “The exact scale and error correction performance might still change between now and 2029, but overall, the goal is reasonable,” he told TechNewsWorld.
    “Given its reliability and professionalism, IBM’s bold claim should be taken seriously,” said Enrique Solano, co-CEO and co-founder of Kipu Quantum, a quantum algorithm company with offices in Berlin and Karlsruhe, Germany.
    “Of course, it may also fail, especially when considering the unpredictability of hardware complexities involved,” he told TechNewsWorld, “but companies like IBM exist for such challenges, and we should all be positively impressed by its current achievements and promised technological roadmap.”
    Tim Hollebeek, vice president of industry standards at DigiCert, a global digital security company, added: “IBM is a leader in this area, and not normally a company that hypes their news. This is a fast-moving industry, and success is certainly possible.”
    “IBM is attempting to do something that no one has ever done before and will almost certainly run into challenges,” he told TechNewsWorld, “but at this point, it is largely an engineering scaling exercise, not a research project.”
“IBM has demonstrated consistent progress, has committed $30 billion over five years to quantum computing, and the timeline is within the realm of technical feasibility,” noted John Young, COO of Quantum eMotion, a developer of quantum random number generator technology, in Saint-Laurent, Quebec, Canada.
    “That said,” he told TechNewsWorld, “fault-tolerant in a practical, industrial sense is a very high bar.”
    Solving the Quantum Error Correction Puzzle
    To make a quantum computer fault-tolerant, errors need to be corrected so large workloads can be run without faults. In a quantum computer, errors are reduced by clustering physical qubits to form logical qubits, which have lower error rates than the underlying physical qubits.
    “Error correction is a challenge,” Young said. “Logical qubits require thousands of physical qubits to function reliably. That’s a massive scaling issue.”
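To illustrate why clustering physical qubits pays off, here is a toy calculation, not IBM’s actual error-correcting scheme: it assumes independent bit-flip errors at an illustrative 1% rate and a simple majority-vote repetition code, then shows how the logical error rate falls as more physical qubits back a single logical bit. Real quantum codes must also handle phase errors and noisy measurements, so the true overhead is far larger than this sketch suggests.

```python
from math import comb

def logical_error_rate(p_physical: float, n_qubits: int) -> float:
    """
    Probability that a majority-vote repetition code fails, i.e., that more
    than half of n_qubits suffer independent flips. A toy model only.
    """
    threshold = n_qubits // 2 + 1
    return sum(
        comb(n_qubits, k) * p_physical**k * (1 - p_physical)**(n_qubits - k)
        for k in range(threshold, n_qubits + 1)
    )

# Illustrative 1% physical error rate (an assumption, not a measured IBM figure).
p = 0.01
for n in (1, 3, 7, 15, 31):
    print(f"{n:2d} physical qubits -> logical error rate ~ {logical_error_rate(p, n):.2e}")
```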
IBM explained in its announcement that creating increasing numbers of logical qubits capable of executing quantum circuits with as few physical qubits as possible is critical to quantum computing at scale. Until now, no clear path to building such a fault-tolerant system without unrealistic engineering overhead had been published.

Alternative and previous gold-standard error-correcting codes present fundamental engineering challenges, IBM continued. To scale, they would require an unfeasible number of physical qubits to create enough logical qubits to perform complex operations — necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be implemented beyond small-scale experiments and devices.
    In two research papers released with its roadmap, IBM detailed how it will overcome the challenges of building the large-scale, fault-tolerant architecture needed for a quantum computer.
One paper outlines the use of quantum low-density parity check (qLDPC) codes to reduce physical qubit overhead. The other describes methods for decoding errors in real time using conventional computing.
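IBM’s qLDPC codes and real-time decoders are far more sophisticated than anything that fits here, but the basic idea of syndrome decoding on conventional hardware can be sketched with a classical toy: a [7,4] Hamming parity-check matrix whose syndrome points directly at a single flipped bit. Treat this purely as an analogy for how a classical decoder turns parity-check measurements into a correction; none of it reflects IBM’s actual codes or decoders.

```python
import numpy as np

# Classical [7,4] Hamming parity-check matrix: column j is the binary
# representation of j (row 0 = least significant bit).
H = np.array([
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

def decode_single_error(received: np.ndarray) -> np.ndarray:
    """Compute the syndrome and flip the single bit it points to (if any)."""
    syndrome = H @ received % 2               # which parity checks fired
    if not syndrome.any():
        return received                       # no detectable error
    # Reading the syndrome bits from the last check to the first gives the
    # 1-based position of the flipped bit for this particular code.
    position = int("".join(map(str, syndrome[::-1])), 2) - 1
    corrected = received.copy()
    corrected[position] ^= 1
    return corrected

codeword = np.array([0, 0, 0, 0, 0, 0, 0])    # a valid codeword
noisy = codeword.copy()
noisy[4] ^= 1                                 # inject one bit flip
print("corrected:", decode_single_error(noisy))   # recovers the codeword
```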
    According to IBM, a practical fault-tolerant quantum architecture must:

    Suppress enough errors for useful algorithms to succeed
    Prepare and measure logical qubits during computation
    Apply universal instructions to logical qubits
    Decode measurements from logical qubits in real time and guide subsequent operations
    Scale modularly across hundreds or thousands of logical qubits
    Be efficient enough to run meaningful algorithms using realistic energy and infrastructure resources

    Aside from the technological challenges that quantum computer makers are facing, there may also be some market challenges. “Locating suitable use cases for quantum computers could be the biggest challenge,” Morningstar’s Yang maintained.
“Only certain computing workloads, such as random circuit sampling (RCS), can fully unleash the computing power of quantum computers and show their advantage over the traditional supercomputers we have now,” he said. “However, workloads like RCS are not very commercially useful, and we believe commercial relevance is one of the key factors that determine the total market size for quantum computers.”
    Q-Day Approaching Faster Than Expected
    For years now, organizations have been told they need to prepare for “Q-Day” — the day a quantum computer will be able to crack all the encryption they use to keep their data secure. This IBM announcement suggests the window for action to protect data may be closing faster than many anticipated.
    “This absolutely adds urgency and credibility to the security expert guidance on post-quantum encryption being factored into their planning now,” said Dave Krauthamer, field CTO of QuSecure, maker of quantum-safe security solutions, in San Mateo, Calif.
    “IBM’s move to create a large-scale fault-tolerant quantum computer by 2029 is indicative of the timeline collapsing,” he told TechNewsWorld. “A fault-tolerant quantum computer of this magnitude could be well on the path to crack asymmetric ciphers sooner than anyone thinks.”

    “Security leaders need to take everything connected to post-quantum encryption as a serious measure and work it into their security plans now — not later,” he said.
Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., pointed out that IBM is just the latest in a surge of quantum companies announcing computational breakthroughs expected within a few years.
“It leads to the question of whether the U.S. government’s original PQC (post-quantum cryptography) preparation date of 2030 is still a safe date,” he told TechNewsWorld.
“It’s starting to feel a lot more risky for any company to wait until 2030 to be prepared against quantum attacks. It also flies in the face of the latest cybersecurity EO (executive order) that relaxed PQC preparation rules as compared to Biden’s last EO PQC standard order, which told U.S. agencies to transition to PQC ASAP.”
    “Most US companies are doing zero to prepare for Q-Day attacks,” he declared. “The latest executive order seems to tell U.S. agencies — and indirectly, all U.S. businesses — that they have more time to prepare. It’s going to cause even more agencies and businesses to be less prepared during a time when it seems multiple quantum computing companies are making significant progress.”
    “It definitely feels that something is going to give soon,” he said, “and if I were a betting man, and I am, I would bet that most U.S. companies are going to be unprepared for Q-Day on the day Q-Day becomes a reality.”

    John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.

  • Pay for Performance -- How Do You Measure It?

More enterprises have moved to pay-for-performance salary and promotion models that measure progress toward goals -- but how do you measure goals for a maintenance programmer who barrels through a request backlog but delivers marginal value for the business, or for a business analyst whose success is predicated on forging intangibles like trust and cooperation with users so things can get done? It’s an age-old question facing companies, now that 77% of them use some type of pay-for-performance model.

What are some popular pay-for-performance use cases?

A factory doing piece work that pays employees based upon the number of items they assemble.
A call center that pays agents based on how many calls they complete per day.
A bank teller who gets rewarded for how many customers they sign up for credit cards.
An IT project team that gets a bonus for completing a major project ahead of schedule.

The IT example differs from the others because it depends on team rather than individual execution, but there is nevertheless something tangible to measure. The other use cases are more clear-cut -- although they don’t account for pieces in the plant that were poorly assembled in haste to make quota and had to be reworked, or a call center agent who pushes calls off to someone else so they can end their calls in six minutes or less, or the teller who signs up X number of customers for credit cards, although two-thirds of them never use the credit card they signed up for.

In short, there are flaws in pay-for-performance models just as there are in other types of compensation models that organizations use. So, what’s the best path for CIOs who want to implement pay for performance in IT?

One approach is to measure pay for performance based upon four key elements: hard results, effort, skill, and communications. The mix of these elements will vary, depending on the type of position each IT staff member performs. Here are two examples of pay for performance by position:

1. Computer maintenance programmers and help desk specialists

Historically, IT departments have used hard numbers like how many open requests a computer maintenance programmer has closed, or how many calls a help desk employee has resolved. There is merit in using hard results, and hard results should be factored into performance reviews for these individuals -- but hard numbers don’t tell the whole story.

For example, how many times has a help desk agent gone the extra mile with a difficult user or software bug, taking the time to see the entire process through until it is thoroughly solved? If the issue was of a global nature, did the help desk agent follow up by letting others who use the application know that a bug was fixed? For the maintenance programmer who has completed the most open requests, which of these requests really solved a major business pain point? For both help desk and maintenance programming employees, were the changes and fixes properly documented and communicated to everyone with a need to know? And did these employees demonstrate the skills needed to solve their issues?

It’s difficult to capture hard results on elements like effort, communication, and skills, but one way to go about it is to survey user departments on individual levels of service and effectiveness. From there, it’s up to IT managers to determine the “mix” of hard results, effort, communication, and skills on which the employee will be evaluated, and to communicate upfront to the employee what the pay-for-performance assessment will be based on.

2. Business analysts and trainers

Business analysts and trainers are difficult to quantify in pay-for-performance models because so much of their success depends upon other people. A business analyst can know everything there is to know about a particular business area and its systems, but if the analyst is working with unresponsive users, or lacks the soft skills needed to communicate with users, the pay for performance can’t be based upon the technology skillset alone.

IT trainers face a somewhat different dilemma when it comes to performance evaluation: they can produce the training that new staff members need before staff is deployed on key projects, but if a project gets delayed and this causes trainees to lose the knowledge that they learned, there is little the trainer can do aside from offering a refresher course.

Can pay for performance be used for positions like these? It’s a mixed answer. Yes, pay for performance can be used for trainers, based upon how many individuals the trainer trains and how many new courses the trainer obtains or develops. These are the hard results. However, since so much of training’s execution depends upon other people downstream, like project managers who must start projects on time so new skills aren’t lost, managers of training should also consider pay-for-performance elements such as effort (has the trainer consistently gone the extra mile to make things work?), skills, and communication.

In sum, for both business analysts and trainers, there are hard results that can be factored into a pay-for-performance formula, but there is also a need to survey each position’s “customers” -- those individuals (and their managers) who utilized the business analyst’s or trainer’s skills and products to accomplish their respective objectives in projects and training. Were these user-customers satisfied?

Summary Remarks

The value that IT employees contribute to overall IT and to the business at large is a combination of tangible and intangible results. Pay-for-performance models are well suited to gauge tangible outcomes, but they fall short when it comes to the intangibles that could be just as important.

Many years ago, when Pat Riley was coaching the Los Angeles Lakers, an interviewer asked what type of metrics he used to measure the effectiveness of individual players on the basketball court. Was it the number of points, rebounds, or assists? Riley said he used an “effort” index. For example, how many times did a player go up to get a rebound, even if he didn’t end up with the ball? Riley said the effort individual players exhibited mattered, because even if they didn’t get the rebound, they were creating situations so someone else on the team could.

IT is similar. It’s why OKR International, a performance consultancy, stated, “Intangibles often create or destroy value quietly -- until their impact is too big to ignore. In the long run, they are the unseen levers that determine whether strategy thrives or withers.”

What CIOs and IT leadership can do when they use pay for performance is ensure that hard results, effort, communications, and skills are appropriately blended for each IT staff position and its responsibilities and realities. You can’t attach a numerical measurement to everything -- but you can observe the visible changes that begin to manifest when a business analyst turns around what had been a hostile relationship with a user department and things start getting done.
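As a purely hypothetical illustration of the four-element mix described above, the sketch below blends hard results, effort, skill, and communication ratings into one weighted review score. The weights and the 1-to-5 ratings are placeholders, not figures from the article, and in practice would be set per role and communicated to the employee upfront.

```python
# Hypothetical weighting of the four elements; real weights would be set per role.
WEIGHTS = {"hard_results": 0.4, "effort": 0.2, "skill": 0.2, "communication": 0.2}

def performance_score(ratings: dict) -> float:
    """Blend 1-5 ratings for each element into a single weighted score."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(WEIGHTS[element] * ratings[element] for element in WEIGHTS)

# Example: a help desk agent with strong hard numbers but weaker documentation.
agent = {"hard_results": 4.5, "effort": 4.0, "skill": 3.5, "communication": 2.5}
print(f"Weighted review score: {performance_score(agent):.2f} / 5")
```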
  • BYOD like it’s 2025

    Hard as it is to believe, there was a time when using any personal technology at work was such a radical concept that most people wouldn’t even consider it an option. IT departments went to great lengths to prevent workers from using their own devices, computers, apps/subscriptions, email, and cloud services.

    The release of the iPhone in 2007 began to change that. Suddenly people were discovering that the smartphone they bought for their personal use could make them more efficient and productive at work as well.

    But it was Apple’s launch of its mobile device management framework in 2010 that truly created the bring your own device movement. MDM meant that users could bring their personal devices to work, and IT departments could secure those devices as needed. Almost instantly, BYOD was something that companies began to support in industries across the board.

Fifteen years later, BYOD is fully mainstream, and a majority of businesses actively support it. But advances in technology, changing user expectations, and the fallout from Covid’s remote work mandates have shifted the landscape, sometimes without being overtly visible.

    With that in mind, I decided to reexamine the assumptions and realities of BYOD and see what has and hasn’t changed in the past decade and a half.

    BYOD is everywhere but device management isn’t

    The exact numbers on BYOD adoption vary depending on the source you look to and how it’s being measured. A 2022 paper from HPE claims that 90% of employees use a mix of work and personal devices on the job, while Cybersecurity Insiders says that 82% of organizations have a BYOD program. However you look at it, BYOD is now massively entrenched in our work culture and extends beyond just employees and managers. According to data from Samsung, 61% of organizations support BYOD for non-employees including contractors, partners, and suppliers to varying degrees.

But overtly or tacitly accepting BYOD doesn’t mean that companies actively manage BYOD devices. Cybersecurity Insiders data also indicates that as many as 70% of BYOD devices used in the workplace aren’t managed — a number that may seem shocking, but that figure includes personal devices used by non-employees such as contractors.

    About those cost savings…

    In the early days, there was an assumption that BYOD would lower hardware and service costs, but that wasn’t certain. Today there’s data.

In the early 2010s, Cisco published an estimate of the annual savings per employee, and more recent data from Samsung pegs the savings as significantly lower. Despite that disparity, it’s obvious that there are savings to be had, and with smartphone prices climbing significantly, those savings are poised to grow rather than shrink.

Of course, the cost of managing devices needs to be factored in. That cost can vary widely depending on the vendor, specific products, and adopted features, but some MDM vendors charge just a few dollars per user per month. The cost of providing employees with company-purchased apps is also worth noting, though that falls more in line with traditional software procurement.
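Because the article’s specific dollar figures did not survive here, the following back-of-envelope sketch only shows the shape of the calculation, per-employee savings weighed against per-user MDM licensing. Every number in it is a placeholder assumption to be replaced with a vendor’s real pricing.

```python
# Back-of-envelope BYOD math with placeholder figures (not the article's numbers).
employees = 500
annual_savings_per_employee = 300.0   # assumed hardware/service savings, USD/year
mdm_cost_per_user_per_month = 5.0     # assumed MDM licensing, USD/user/month

gross_savings = employees * annual_savings_per_employee
mdm_cost = employees * mdm_cost_per_user_per_month * 12
net_savings = gross_savings - mdm_cost

print(f"Gross savings:      ${gross_savings:,.0f}")
print(f"MDM licensing:      ${mdm_cost:,.0f}")
print(f"Net annual savings: ${net_savings:,.0f}")
```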

    Productivity gains are real, but so are distractions

The data is clear that there can be significant gains in productivity attached to BYOD. Samsung estimates that workers using their own devices can gain about an hour of productive work time per day, and Cybersecurity Insiders says that 68% of businesses see some degree of productivity increase.

    Although the gains are significant, personal devices can also distract workers more than company-owned devices, with personal notifications, social media accounts, news, and games being the major time-sink culprits. This has the potential to be a real issue, as these apps can become addictive and their use compulsive.

    Tools of the trade

    When I think back to the first five to ten years after Apple introduced MDM, it reminds me of the later stages of the birth of the solar system, with dozens of companies offering discrete tools that solved part of the mobility and BYOD puzzle, many colliding into each other or being flung out of existence. Some focused on just supporting the MDM server spec sheet, others on cloud storage, securing and managing access to corporate content, corporate app purchasing and management, secure connectivity, user and identity management, Office alternatives, and more.

    Along the way, major enterprise vendors began dominating the market, some by acquisition and others by building out existing capabilities, although there were also businesses that came out of mergers of some of the new players as well.

    As the market matured, it became easy to pick a single vendor to provide all enterprise mobility and BYOD needs rather than relying on multiple companies focusing on one particular requirement.

    Multiplatform support has morphed into something very different

The iPhone was the clear early standard for supporting personal devices at work, in part because the hardware, operating system, and MDM mechanics were all created by a single vendor. Going multiplatform was typically assumed to mean iOS and Android — and Android was a fragmented mess of different hardware makers with sometimes widely varying devices and customized Android variants that resulted in no coherent OS update strategy.

    The gap in management capabilities has narrowed significantly since then, with Google taking a much more active role in courting and supporting enterprise customers and providing a clear and coherent enterprise strategy across a wide swath of major Android phone makers and other vendors.

But that isn’t the only massive shift in what it means to be multiplatform. Today the personal devices used in the workplace include non-phone devices such as Macs, Apple TVs, Chromebooks, and Windows PCs — with Macs and PCs making up a significant number of BYOD devices.

Most MDM suites support this full range of devices to one degree or another, but support costs can rise as more and more platforms are implemented — and those costs vary by platform, with general agreement that Apple devices provide the greatest savings when it comes to technical support.

    How Covid changed the BYOD equation

    I’m pretty sure that in 2010, not one person on the planet was predicting a global pandemic that would lead to the vast majority of knowledge workers working from home within a decade. Yet, as we all remember, that’s exactly what happened.

    The need to work from home encouraged broader adoption of personal devices as well as ancillary technologies ranging from peripherals/accessories to connectivity. Despite a litany of return-to-office mandates in recent years, remote work is here to stay, whether that’s full-time, hybrid, or just working outside traditional office hours or location.

    Samsung notes that 61% of businesses expect employees to work remotely to some degree, while Robert Half reports that only 61% of new job postings in 2024 had full in-office requirements. And data from WFH Research shows that at the start of 2025, employees are working remotely 28% of the time.

    Passing support to new generations

    One challenge for BYOD has always been user support and education. With two generations of digital natives now comprising more than half the workforce, support and education needs have changed. Both millennials and Gen Z have grown up with the internet and mobile devices, which makes them more comfortable making technology decisions and troubleshooting problems than baby boomers and Gen X.

    This doesn’t mean that they don’t need tech support, but they do tend to need less hand-holding and don’t instinctively reach for the phone to access that support. Thus, there’s an ongoing shift to self-support resources and other less time-intensive models, with text chat being the most common — be it with a person or a bot.

    They also have different expectations in areas like privacy, processes and policies, and work-life balance. Those expectations make it more important for companies to delineate their BYOD and other tech policies as well as to explain the rationale for them. This means that user education remains important, particularly in a rapidly changing landscape. It also means that policies should be communicated in more concise and easily digestible forms than large monolithic pages of legalese.

    Users actually want to update (and repair or replace) their devices

    Twenty years ago, the idea of updating workplace technology was typically met with a groan from users who didn’t appreciate downtime or changes in the way things looked and worked. Even as BYOD gained traction, getting users to update their devices wasn’t always easy and required a certain amount of prompting or policing. While resistance to change will never truly die out, most smartphone (and other device) users actively update on their own because of the new features that come with OS updates and new hardware. Upgrades are something to get excited about.

    BYOD users also tend to be more careful with their devices just because they are their own devices. Likewise, they’re more on point with repairs or replacements and are keen to handle those issues on their own.

    Security is ever evolving

    Security has always been (and always will be) a major concern when it comes to BYOD, and the threats will always be evolving. The biggest concerns stem from user behavior, with lost devices being a prime example. Verizon reports that more than 90% of security incidents involving lost or stolen devices resulted in an unauthorized data breach, and 42% involved the leaking of internal data. Another big concern is users falling prey to malicious actors: falling for phishing schemes, downloading malware, allowing corporate data to be placed in public spaces, or letting others use their devices.

    Devices themselves can be major targets, with attacks coming from different directions like public Wi-Fi, malicious apps or apps that are not designed to safeguard data properly, OS and network vulnerabilities, and so on. Supporting infrastructure can also be a weak point.

    These threats are real. Research by JumpCloud indicates that 20% of businesses have seen malware as a result of unmanaged devices, and nearly half aren’t able to tell if unmanaged devices have compromised their security. Cybersecurity Insiders research shows a similar statistic of 22%, while also noting that 22% of BYOD devices have connected to malicious wireless networks.

    Shadow IT will always exist

    Shadow IT is a phenomenon that has existed for decades but grew rapidly alongside BYOD, when users began leveraging their personal devices, apps, and services for work without IT’s involvement, knowledge, or consent. Almost every company has some degree of shadow IT, and thus unmanaged devices or other technologies.

    Organizations need to educate users (even digital natives) about security and keeping their devices safe. They also need to engage users involved in shadow IT and make allies out of them, because shadow IT often stems from unmet technological needs.

    Then there’s the trust component. Many users remain uncomfortable letting IT manage their devices, because they don’t understand what IT will be able to see on them. This is a user education problem that all companies need to address clearly and unequivocally.

    Still the same goals

    Although much has changed about BYOD, the basic goal remains the same: allowing workers to use the devices and other tools they are comfortable with and already own… and are likely to use whether sanctioned to or not.
  • New Claude 4 AI model refactored code for 7 hours straight

    No sleep till Brooklyn

    New Claude 4 AI model refactored code for 7 hours straight

    Anthropic says Claude 4 beats Gemini on coding benchmarks; works autonomously for hours.

    Benj Edwards



    May 22, 2025 12:45 pm

    The Claude 4 logo, created by Anthropic. Credit: Anthropic

    On Thursday, Anthropic released Claude Opus 4 and Claude Sonnet 4, marking the company's return to larger model releases after primarily focusing on mid-range Sonnet variants since June of last year. The new models represent what the company calls its most capable coding models yet, with Opus 4 designed for complex, long-running tasks that can operate autonomously for hours.
    Alex Albert, Anthropic's head of Claude Relations, told Ars Technica that the company chose to revive the Opus line because of growing demand for agentic AI applications. "Across all the companies out there that are building things, there's a really large wave of these agentic applications springing up, and a very high demand and premium being placed on intelligence," Albert said. "I think Opus is going to fit that groove perfectly."
    Before we go further, a brief refresher on Claude's three AI model "size" names (first introduced in March 2024) is probably warranted. Haiku, Sonnet, and Opus offer a tradeoff between price (in the API), speed, and capability.
    Haiku models are the smallest, least expensive to run, and least capable in terms of what you might call "context depth" (considering conceptual relationships in the prompt) and encoded knowledge. Owing to the small size in parameter count, Haiku models retain fewer concrete facts and thus tend to confabulate more frequently (plausibly answering questions based on lack of data) than larger models, but they are much faster at basic tasks. Sonnet is traditionally a mid-range model that hits a balance between cost and capability, and Opus models have always been the largest and slowest to run. However, Opus models process context more deeply and are hypothetically better suited for running deep logical tasks.

    A screenshot of the Claude web interface with Opus 4 and Sonnet 4 options shown. Credit: Anthropic

    There is no Claude 4 Haiku just yet, but the new Sonnet and Opus models can reportedly handle tasks that previous versions could not. In our interview with Albert, he described testing scenarios where Opus 4 worked coherently for up to 24 hours on tasks like playing Pokémon, while coding refactoring tasks in Claude Code ran for seven hours without interruption. Earlier Claude models typically lasted only one to two hours before losing coherence, Albert said, meaning that the models could only produce useful self-referencing outputs for that long before beginning to output too many errors.

    In particular, that marathon refactoring claim reportedly comes from Rakuten, a Japanese tech services conglomerate that "validated [Claude's] capabilities with a demanding open-source refactor running independently for 7 hours with sustained performance," Anthropic said in a news release.
    Whether you'd want to leave an AI model unsupervised for that long is another question entirely because even the most capable AI models can introduce subtle bugs, go down unproductive rabbit holes, or make choices that seem logical to the model but miss important context that a human developer would catch. While many people now use Claude for easy-going vibe coding, as we covered in March, the human-powered (and ironically named) "vibe debugging" that often results from long AI coding sessions is also a very real thing. More on that below.
    To shore up some of those shortcomings, Anthropic built memory capabilities into both new Claude 4 models, allowing them to maintain external files for storing key information across long sessions. When developers provide access to local files, the models can create and update "memory files" to track progress and things they deem important over time. Albert compared this to how humans take notes during extended work sessions.
    Extended thinking meets tool use
    Both Claude 4 models introduce what Anthropic calls "extended thinking with tool use," a new beta feature allowing the models to alternate between simulated reasoning and using external tools like web search, similar to what OpenAI's o3 and o4-mini-high AI models currently do in ChatGPT. While Claude 3.7 Sonnet already had strong tool use capabilities, the new models can now interleave simulated reasoning and tool calling in a single response.
    "So now we can actually think, call a tool process, the results, think some more, call another tool, and repeat until it gets to a final answer," Albert explained to Ars. The models self-determine when they have reached a useful conclusion, a capability picked up through training rather than governed by explicit human programming.

    General Claude 4 benchmark results, provided by Anthropic. Credit: Anthropic

    In practice, we've anecdotally found parallel tool use capability very useful in AI assistants like OpenAI o3, since they don't have to rely on what is trained in their neural network to provide accurate answers. Instead, these more agentic models can iteratively search the web, parse the results, analyze images, and spin up coding tasks for analysis in ways that can avoid falling into a confabulation trap by relying solely on pure LLM outputs.

    “The world’s best coding model”
    Anthropic says Opus 4 leads industry benchmarks for coding tasks, achieving 72.5 percent on SWE-bench and 43.2 percent on Terminal-bench, calling it "the world's best coding model." According to Anthropic, companies using early versions report improvements. Cursor described it as "state-of-the-art for coding and a leap forward in complex codebase understanding," while Replit noted "improved precision and dramatic advancements for complex changes across multiple files."
    In fact, GitHub announced it will use Sonnet 4 as the base model for its new coding agent in GitHub Copilot, citing the model's performance in "agentic scenarios" in Anthropic's news release. Sonnet 4 scored 72.7 percent on SWE-bench while maintaining faster response times than Opus 4. The fact that GitHub is betting on Claude rather than a model from its parent company Microsoft (which has close ties to OpenAI) suggests Anthropic has built something genuinely competitive.

    Software engineering benchmark results, provided by Anthropic. Credit: Anthropic

    Anthropic says it has addressed a persistent issue with Claude 3.7 Sonnet in which users complained that the model would take unauthorized actions or provide excessive output. Albert said the company reduced this "reward hacking behavior" by approximately 80 percent in the new models through training adjustments. An 80 percent reduction in unwanted behavior sounds impressive, but that also suggests that 20 percent of the problem behavior remains—a big concern when we're talking about AI models that might be performing autonomous tasks for hours.
    When we asked about code accuracy, Albert said that human code review is still an important part of shipping any production code. "There's a human parallel, right? So this is just a problem we've had to deal with throughout the whole nature of software engineering. And this is why the code review process exists, so that you can catch these things. We don't anticipate that going away with models either," Albert said. "If anything, the human review will become more important, and more of your job as developer will be in this review than it will be in the generation part."

    Pricing and availability
    Both Claude 4 models maintain the same pricing structure as their predecessors: Opus 4 costs $15 per million tokens for input and $75 per million for output, while Sonnet 4 remains at $3 and $15. The models offer two response modes: traditional LLM and simulated reasoning ("extended thinking") for complex problems. Given that some Claude Code sessions can apparently run for hours, those per-token costs will likely add up very quickly for users who let the models run wild.
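    As a rough illustration of how that pricing compounds over a long session, the arithmetic below applies the Opus 4 rates quoted above to a hypothetical workload; the token counts are invented for the example.

        # Illustrative cost math using the per-million-token prices cited above.
        OPUS_INPUT_PER_MTOK = 15.00   # USD per million input tokens
        OPUS_OUTPUT_PER_MTOK = 75.00  # USD per million output tokens

        input_tokens = 2_000_000      # assumed: a long agentic session re-reading files
        output_tokens = 500_000       # assumed: generated diffs, code, and commentary

        cost = ((input_tokens / 1e6) * OPUS_INPUT_PER_MTOK
                + (output_tokens / 1e6) * OPUS_OUTPUT_PER_MTOK)
        print(f"Estimated Opus 4 session cost: ${cost:.2f}")  # -> $67.50
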
    Anthropic made both models available through its API, Amazon Bedrock, and Google Cloud Vertex AI. Sonnet 4 remains accessible to free users, while Opus 4 requires a paid subscription.
    The Claude 4 models also debut Claude Code (first introduced in February) as a generally available product after months of preview testing. Anthropic says the coding environment now integrates with VS Code and JetBrains IDEs, showing proposed edits directly in files. A new SDK allows developers to build custom agents using the same framework.

    A screenshot of "Claude Plays Pokemon," a custom application where Claude 4 attempts to beat the classic Game Boy game.

    Credit:

    Anthropic

    Even with Anthropic's future riding on the capability of these new models, when we asked about how they guide Claude's behavior by fine-tuning, Albert acknowledged that the inherent unpredictability of these systems presents ongoing challenges for both them and developers. "In the realm and the world of software for the past 40, 50 years, we've been running on deterministic systems, and now all of a sudden, it's non-deterministic, and that changes how we build," he said.
    "I empathize with a lot of people out there trying to use our APIs and language models generally because they have to almost shift their perspective on what it means for reliability, what it means for powering a core of your application in a non-deterministic way," Albert added. "These are general oddities that have kind of just been flipped, and it definitely makes things more difficult, but I think it opens up a lot of possibilities as well."

    Benj Edwards
    Senior AI Reporter

    Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

    4 Comments
    #new #claude #model #refactored #code
    New Claude 4 AI model refactored code for 7 hours straight
    No sleep till Brooklyn New Claude 4 AI model refactored code for 7 hours straight Anthropic says Claude 4 beats Gemini on coding benchmarks; works autonomously for hours. Benj Edwards – May 22, 2025 12:45 pm | 4 The Claude 4 logo, created by Anthropic. Credit: Anthropic The Claude 4 logo, created by Anthropic. Credit: Anthropic Story text Size Small Standard Large Width * Standard Wide Links Standard Orange * Subscribers only   Learn more On Thursday, Anthropic released Claude Opus 4 and Claude Sonnet 4, marking the company's return to larger model releases after primarily focusing on mid-range Sonnet variants since June of last year. The new models represent what the company calls its most capable coding models yet, with Opus 4 designed for complex, long-running tasks that can operate autonomously for hours. Alex Albert, Anthropic's head of Claude Relations, told Ars Technica that the company chose to revive the Opus line because of growing demand for agentic AI applications. "Across all the companies out there that are building things, there's a really large wave of these agentic applications springing up, and a very high demand and premium being placed on intelligence," Albert said. "I think Opus is going to fit that groove perfectly." Before we go further, a brief refresher on Claude's three AI model "size" namesis probably warranted. Haiku, Sonnet, and Opus offer a tradeoff between price, speed, and capability. Haiku models are the smallest, least expensive to run, and least capable in terms of what you might call "context depth"and encoded knowledge. Owing to the small size in parameter count, Haiku models retain fewer concrete facts and thus tend to confabulate more frequentlythan larger models, but they are much faster at basic tasks than larger models. Sonnet is traditionally a mid-range model that hits a balance between cost and capability, and Opus models have always been the largest and slowest to run. However, Opus models process context more deeply and are hypothetically better suited for running deep logical tasks. A screenshot of the Claude web interface with Opus 4 and Sonnet 4 options shown. Credit: Anthropic There is no Claude 4 Haiku just yet, but the new Sonnet and Opus models can reportedly handle tasks that previous versions could not. In our interview with Albert, he described testing scenarios where Opus 4 worked coherently for up to 24 hours on tasks like playing Pokémon while coding refactoring tasks in Claude Code ran for seven hours without interruption. Earlier Claude models typically lasted only one to two hours before losing coherence, Albert said, meaning that the models could only produce useful self-referencing outputs for that long before beginning to output too many errors. In particular, that marathon refactoring claim reportedly comes from Rakuten, a Japanese tech services conglomerate that "validatedcapabilities with a demanding open-source refactor running independently for 7 hours with sustained performance," Anthropic said in a news release. Whether you'd want to leave an AI model unsupervised for that long is another question entirely because even the most capable AI models can introduce subtle bugs, go down unproductive rabbit holes, or make choices that seem logical to the model but miss important context that a human developer would catch. While many people now use Claude for easy-going vibe coding, as we covered in March, the human-powered"vibe debugging" that often results from long AI coding sessions is also a very real thing. 
More on that below. To shore up some of those shortcomings, Anthropic built memory capabilities into both new Claude 4 models, allowing them to maintain external files for storing key information across long sessions. When developers provide access to local files, the models can create and update "memory files" to track progress and things they deem important over time. Albert compared this to how humans take notes during extended work sessions. Extended thinking meets tool use Both Claude 4 models introduce what Anthropic calls "extended thinking with tool use," a new beta feature allowing the models to alternate between simulated reasoning and using external tools like web search, similar to what OpenAI's o3 and 04-mini-high AI models currently do in ChatGPT. While Claude 3.7 Sonnet already had strong tool use capabilities, the new models can now interleave simulated reasoning and tool calling in a single response. "So now we can actually think, call a tool process, the results, think some more, call another tool, and repeat until it gets to a final answer," Albert explained to Ars. The models self-determine when they have reached a useful conclusion, a capability picked up through training rather than governed by explicit human programming. General Claude 4 benchmark results, provided by Anthropic. Credit: Anthropic In practice, we've anecdotally found parallel tool use capability very useful in AI assistants like OpenAI o3, since they don't have to rely on what is trained in their neural network to provide accurate answers. Instead, these more agentic models can iteratively search the web, parse the results, analyze images, and spin up coding tasks for analysis in ways that can avoid falling into a confabulation trap by relying solely on pure LLM outputs. “The world’s best coding model” Anthropic says Opus 4 leads industry benchmarks for coding tasks, achieving 72.5 percent on SWE-bench and 43.2 percent on Terminal-bench, calling it "the world's best coding model." According to Anthropic, companies using early versions report improvements. Cursor described it as "state-of-the-art for coding and a leap forward in complex codebase understanding," while Replit noted "improved precision and dramatic advancements for complex changes across multiple files." In fact, GitHub announced it will use Sonnet 4 as the base model for its new coding agent in GitHub Copilot, citing the model's performance in "agentic scenarios" in Anthropic's news release. Sonnet 4 scored 72.7 percent on SWE-bench while maintaining faster response times than Opus 4. The fact that GitHub is betting on Claude rather than a model from its parent company Microsoftsuggests Anthropic has built something genuinely competitive. Software engineering benchmark results, provided by Anthropic. Credit: Anthropic Anthropic says it has addressed a persistent issue with Claude 3.7 Sonnet in which users complained that the model would take unauthorized actions or provide excessive output. Albert said the company reduced this "reward hacking behavior" by approximately 80 percent in the new models through training adjustments. An 80 percent reduction in unwanted behavior sounds impressive, but that also suggests that 20 percent of the problem behavior remains—a big concern when we're talking about AI models that might be performing autonomous tasks for hours. When we asked about code accuracy, Albert said that human code review is still an important part of shipping any production code. "There's a human parallel, right? 
So this is just a problem we've had to deal with throughout the whole nature of software engineering. And this is why the code review process exists, so that you can catch these things. We don't anticipate that going away with models either," Albert said. "If anything, the human review will become more important, and more of your job as developer will be in this review than it will be in the generation part." Pricing and availability Both Claude 4 models maintain the same pricing structure as their predecessors: Opus 4 costs per million tokens for input and per million for output, while Sonnet 4 remains at and The models offer two response modes: traditional LLM and simulated reasoningfor complex problems. Given that some Claude Code sessions can apparently run for hours, those per-token costs will likely add up very quickly for users who let the models run wild. Anthropic made both models available through its API, Amazon Bedrock, and Google Cloud Vertex AI. Sonnet 4 remains accessible to free users, while Opus 4 requires a paid subscription. The Claude 4 models also debut Claude Codeas a generally available product after months of preview testing. Anthropic says the coding environment now integrates with VS Code and JetBrains IDEs, showing proposed edits directly in files. A new SDK allows developers to build custom agents using the same framework. A screenshot of "Claude Plays Pokemon," a custom application where Claude 4 attempts to beat the classic Game Boy game. Credit: Anthropic Even with Anthropic's future riding on the capability of these new models, when we asked about how they guide Claude's behavior by fine-tuning, Albert acknowledged that the inherent unpredictability of these systems presents ongoing challenges for both them and developers. "In the realm and the world of software for the past 40, 50 years, we've been running on deterministic systems, and now all of a sudden, it's non-deterministic, and that changes how we build," he said. "I empathize with a lot of people out there trying to use our APIs and language models generally because they have to almost shift their perspective on what it means for reliability, what it means for powering a core of your application in a non-deterministic way," Albert added. "These are general oddities that have kind of just been flipped, and it definitely makes things more difficult, but I think it opens up a lot of possibilities as well." Benj Edwards Senior AI Reporter Benj Edwards Senior AI Reporter Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC. 4 Comments #new #claude #model #refactored #code
    ARSTECHNICA.COM
    New Claude 4 AI model refactored code for 7 hours straight
    No sleep till Brooklyn New Claude 4 AI model refactored code for 7 hours straight Anthropic says Claude 4 beats Gemini on coding benchmarks; works autonomously for hours. Benj Edwards – May 22, 2025 12:45 pm | 4 The Claude 4 logo, created by Anthropic. Credit: Anthropic The Claude 4 logo, created by Anthropic. Credit: Anthropic Story text Size Small Standard Large Width * Standard Wide Links Standard Orange * Subscribers only   Learn more On Thursday, Anthropic released Claude Opus 4 and Claude Sonnet 4, marking the company's return to larger model releases after primarily focusing on mid-range Sonnet variants since June of last year. The new models represent what the company calls its most capable coding models yet, with Opus 4 designed for complex, long-running tasks that can operate autonomously for hours. Alex Albert, Anthropic's head of Claude Relations, told Ars Technica that the company chose to revive the Opus line because of growing demand for agentic AI applications. "Across all the companies out there that are building things, there's a really large wave of these agentic applications springing up, and a very high demand and premium being placed on intelligence," Albert said. "I think Opus is going to fit that groove perfectly." Before we go further, a brief refresher on Claude's three AI model "size" names (first introduced in March 2024) is probably warranted. Haiku, Sonnet, and Opus offer a tradeoff between price (in the API), speed, and capability. Haiku models are the smallest, least expensive to run, and least capable in terms of what you might call "context depth" (considering conceptual relationships in the prompt) and encoded knowledge. Owing to the small size in parameter count, Haiku models retain fewer concrete facts and thus tend to confabulate more frequently (plausibly answering questions based on lack of data) than larger models, but they are much faster at basic tasks than larger models. Sonnet is traditionally a mid-range model that hits a balance between cost and capability, and Opus models have always been the largest and slowest to run. However, Opus models process context more deeply and are hypothetically better suited for running deep logical tasks. A screenshot of the Claude web interface with Opus 4 and Sonnet 4 options shown. Credit: Anthropic There is no Claude 4 Haiku just yet, but the new Sonnet and Opus models can reportedly handle tasks that previous versions could not. In our interview with Albert, he described testing scenarios where Opus 4 worked coherently for up to 24 hours on tasks like playing Pokémon while coding refactoring tasks in Claude Code ran for seven hours without interruption. Earlier Claude models typically lasted only one to two hours before losing coherence, Albert said, meaning that the models could only produce useful self-referencing outputs for that long before beginning to output too many errors. In particular, that marathon refactoring claim reportedly comes from Rakuten, a Japanese tech services conglomerate that "validated [Claude's] capabilities with a demanding open-source refactor running independently for 7 hours with sustained performance," Anthropic said in a news release. Whether you'd want to leave an AI model unsupervised for that long is another question entirely because even the most capable AI models can introduce subtle bugs, go down unproductive rabbit holes, or make choices that seem logical to the model but miss important context that a human developer would catch. 
While many people now use Claude for easy-going vibe coding, as we covered in March, the human-powered (and ironically-named) "vibe debugging" that often results from long AI coding sessions is also a very real thing. More on that below. To shore up some of those shortcomings, Anthropic built memory capabilities into both new Claude 4 models, allowing them to maintain external files for storing key information across long sessions. When developers provide access to local files, the models can create and update "memory files" to track progress and things they deem important over time. Albert compared this to how humans take notes during extended work sessions. Extended thinking meets tool use Both Claude 4 models introduce what Anthropic calls "extended thinking with tool use," a new beta feature allowing the models to alternate between simulated reasoning and using external tools like web search, similar to what OpenAI's o3 and 04-mini-high AI models currently do in ChatGPT. While Claude 3.7 Sonnet already had strong tool use capabilities, the new models can now interleave simulated reasoning and tool calling in a single response. "So now we can actually think, call a tool process, the results, think some more, call another tool, and repeat until it gets to a final answer," Albert explained to Ars. The models self-determine when they have reached a useful conclusion, a capability picked up through training rather than governed by explicit human programming. General Claude 4 benchmark results, provided by Anthropic. Credit: Anthropic In practice, we've anecdotally found parallel tool use capability very useful in AI assistants like OpenAI o3, since they don't have to rely on what is trained in their neural network to provide accurate answers. Instead, these more agentic models can iteratively search the web, parse the results, analyze images, and spin up coding tasks for analysis in ways that can avoid falling into a confabulation trap by relying solely on pure LLM outputs. “The world’s best coding model” Anthropic says Opus 4 leads industry benchmarks for coding tasks, achieving 72.5 percent on SWE-bench and 43.2 percent on Terminal-bench, calling it "the world's best coding model." According to Anthropic, companies using early versions report improvements. Cursor described it as "state-of-the-art for coding and a leap forward in complex codebase understanding," while Replit noted "improved precision and dramatic advancements for complex changes across multiple files." In fact, GitHub announced it will use Sonnet 4 as the base model for its new coding agent in GitHub Copilot, citing the model's performance in "agentic scenarios" in Anthropic's news release. Sonnet 4 scored 72.7 percent on SWE-bench while maintaining faster response times than Opus 4. The fact that GitHub is betting on Claude rather than a model from its parent company Microsoft (which has close ties to OpenAI) suggests Anthropic has built something genuinely competitive. Software engineering benchmark results, provided by Anthropic. Credit: Anthropic Anthropic says it has addressed a persistent issue with Claude 3.7 Sonnet in which users complained that the model would take unauthorized actions or provide excessive output. Albert said the company reduced this "reward hacking behavior" by approximately 80 percent in the new models through training adjustments. 
An 80 percent reduction in unwanted behavior sounds impressive, but that also suggests that 20 percent of the problem behavior remains—a big concern when we're talking about AI models that might be performing autonomous tasks for hours. When we asked about code accuracy, Albert said that human code review is still an important part of shipping any production code. "There's a human parallel, right? So this is just a problem we've had to deal with throughout the whole nature of software engineering. And this is why the code review process exists, so that you can catch these things. We don't anticipate that going away with models either," Albert said. "If anything, the human review will become more important, and more of your job as developer will be in this review than it will be in the generation part." Pricing and availability Both Claude 4 models maintain the same pricing structure as their predecessors: Opus 4 costs $15 per million tokens for input and $75 per million for output, while Sonnet 4 remains at $3 and $15. The models offer two response modes: traditional LLM and simulated reasoning ("extended thinking") for complex problems. Given that some Claude Code sessions can apparently run for hours, those per-token costs will likely add up very quickly for users who let the models run wild. Anthropic made both models available through its API, Amazon Bedrock, and Google Cloud Vertex AI. Sonnet 4 remains accessible to free users, while Opus 4 requires a paid subscription. The Claude 4 models also debut Claude Code (first introduced in February) as a generally available product after months of preview testing. Anthropic says the coding environment now integrates with VS Code and JetBrains IDEs, showing proposed edits directly in files. A new SDK allows developers to build custom agents using the same framework. A screenshot of "Claude Plays Pokemon," a custom application where Claude 4 attempts to beat the classic Game Boy game. Credit: Anthropic Even with Anthropic's future riding on the capability of these new models, when we asked about how they guide Claude's behavior by fine-tuning, Albert acknowledged that the inherent unpredictability of these systems presents ongoing challenges for both them and developers. "In the realm and the world of software for the past 40, 50 years, we've been running on deterministic systems, and now all of a sudden, it's non-deterministic, and that changes how we build," he said. "I empathize with a lot of people out there trying to use our APIs and language models generally because they have to almost shift their perspective on what it means for reliability, what it means for powering a core of your application in a non-deterministic way," Albert added. "These are general oddities that have kind of just been flipped, and it definitely makes things more difficult, but I think it opens up a lot of possibilities as well." Benj Edwards Senior AI Reporter Benj Edwards Senior AI Reporter Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC. 4 Comments
  • What I learned from my first few months with a Bambu Lab A1 3D printer, part 1

    to 3d or not to 3d

    What I learned from my first few months with a Bambu Lab A1 3D printer, part 1

    One neophyte's first steps into the wide world of 3D printing.

    Andrew Cunningham



    May 22, 2025 7:30 am


    The hotend on my Bambu Lab A1 3D printer.

    Credit: Andrew Cunningham



    For a couple of years now, I've been trying to find an excuse to buy a decent 3D printer.
    Friends and fellow Ars staffers who had them would gush about them at every opportunity, talking about how useful they can be and how much can be printed once you get used to the idea of being able to create real, tangible objects with a little time and a few bucks' worth of plastic filament.
    But I could never quite imagine myself using one consistently enough to buy one. Then, this past Christmas, my wife forced the issue by getting me a Bambu Lab A1 as a present.
    Since then, I've been tinkering with the thing nearly daily, learning more about what I've gotten myself into and continuing to find fun and useful things to print. I've gathered a bunch of thoughts about my learning process here, not because I think I'm breaking new ground but to serve as a blueprint for anyone who has been on the fence about Getting Into 3D Printing. "Hyperfixating on new hobbies" is one of my go-to coping mechanisms during times of stress and anxiety, and 3D printing has turned out to be the perfect combination of fun, practical, and time-consuming.
    Getting to know my printer
    My wife settled on the Bambu A1 because it's a larger version of the A1 Mini, Wirecutter's main 3D printer pick at the time (she also noted it was "hella on sale"). Other reviews she read noted that it's beginner-friendly, easy to use, and fun to tinker with, and it has a pretty active community for answering questions, all assessments I agree with so far.
    Note that this research was done some months before Bambu earned bad headlines because of firmware updates that some users believe will lead to a more locked-down ecosystem. This is a controversy I understand—3D printers are still primarily the realm of DIYers and tinkerers, people who are especially sensitive to the closing of open ecosystems. But as a beginner, I'm already leaning mostly on the first-party tools and built-in functionality to get everything going, so I'm not really experiencing the sense of having "lost" features I was relying on, and any concerns I did have are mostly addressed by Bambu's update about its update.

    I hadn't really updated my preconceived notions of what home 3D printing was since its primordial days, something Ars has been around long enough to have covered in some depth. I was wary of getting into yet another hobby where, like building your own gaming PC, fiddling with and maintaining the equipment is part of the hobby. Bambu's printers (and those like them) are capable of turning out fairly high-quality prints with minimal fuss, and nothing will draw you into the hobby faster than a few successful prints.

    Basic terminology

    Extrusion-based 3D printers (also sometimes called "FDM," for "fused deposition modeling") work by depositing multiple thin layers of melted plastic filament on a heated bed.

    Credit: Andrew Cunningham

    First things first: The A1 is what’s called an “extrusion” printer, meaning that it functions by melting a long, slim thread of plastic (filament) and then depositing this plastic onto a build plate seated on top of a heated bed in tens, hundreds, or even thousands of thin layers. In the manufacturing world, this is also called “fused deposition modeling,” or FDM. This layer-based extrusion gives 3D-printed objects their distinct ridged look and feel and is also why a 3D-printed piece of plastic is less detailed-looking and weaker than an injection-molded piece of plastic like a Lego brick.
    The other readily available home 3D printing technology takes liquid resin and uses UV light to harden it into a plastic structure, using a process called “stereolithography” (SLA). You can get inexpensive resin printers in the same price range as the best cheap extrusion printers, and the SLA process can create much more detailed, smooth-looking, and watertight 3D prints (it’s popular for making figurines for tabletop games). Some downsides are that the print beds in these printers are smaller, resin is a bit fussier than filament, and multi-color printing isn’t possible.
    There are two main types of home extrusion printers. The Bambu A1 is a Cartesian printer, or in more evocative and colloquial terms, a "bed slinger." In these, the head of the printer can move up and down on one or two rails and from side to side on another rail. But the print bed itself has to move forward and backward to "move" the print head on the Y axis.

    More expensive home 3D printers, including higher-end Bambu models in the P- and X-series, are "CoreXY" printers, which include a third rail or set of rails (and more Z-axis rails) that allow the print head to travel in all three directions.
    The A1 is also an "open-bed" printer, which means that it ships without an enclosure. Closed-bed printers are more expensive, but they can maintain a more consistent temperature inside and help contain the fumes from the melted plastic. They can also reduce the amount of noise coming from your printer.
    Together, the downsides of a bed-slinger (introducing more wobble for tall prints, more opportunities for parts of your print to come loose from the plate) and an open-bed printer (worse temperature, fume, and dust control) mainly just mean that the A1 isn't well-suited for printing certain types of plastic and has more potential points of failure for large or delicate prints. My experience with the A1 has been mostly positive now that I know about those limitations, but the printer you buy could easily change based on what kinds of things you want to print with it.
    Setting up
    Overall, the setup process was reasonably simple, at least for someone who has been building PCs and repairing small electronics for years now. It's not quite the same as the "take it out of the box, remove all the plastic film, and plug it in" process of setting up a 2D printer, but the directions in the start guide are well-illustrated and clearly written; if you can put together prefab IKEA furniture, that's roughly the level of complexity we're talking about here. The fact that delicate electronics are involved might still make it more intimidating for the non-technical, but figuring out what goes where is fairly simple.

    The only mistake I made while setting the printer up involved the surface I initially tried to put it on. I used a spare end table, but as I discovered during the printer's calibration process, the herky-jerky movement of the bed and print head was way too much for a little table to handle. "Stable enough to put a lamp on" is not the same as "stable enough to put a constantly wobbling contraption" on—obvious in retrospect, but my being new to this is why this article exists.
    After some office rearrangement, I was able to move the printer to my sturdy L-desk full of cables and other doodads to serve as ballast. This surface was more than sturdy enough to let the printer complete its calibration process—and sturdy enough not to transfer the printer's every motion to our kid's room below, a boon for when I'm trying to print something after he has gone to bed.
    The first-party Bambu apps for sending files to the printer are Bambu Handy (for iOS/Android, with no native iPad version) and Bambu Studio (for Windows, macOS, and Linux). Handy works OK for sending ready-made models from MakerWorld (a mostly community-driven but Bambu-operated repository for 3D-printable files) and for monitoring prints once they've started. But I'll mostly be relaying my experience with Bambu Studio, a much more fully featured app. Neither app requires sign-in, at least not yet, but the path of least resistance is to sign into your printer and apps with the same account to enable easy communication and syncing.

    Bambu Studio: A primer
    Bambu Studio is what's known in the hobby as a "slicer," software that takes existing 3D models output by common CAD programs (Tinkercad, FreeCAD, SolidWorks, Autodesk Fusion, others) and converts them into a set of specific movement instructions that the printer can follow. Bambu Studio allows you to do some basic modification of existing models—cloning parts, resizing them, adding supports for overhanging bits that would otherwise droop down, and a few other functions—but it's primarily there for opening files, choosing a few settings, and sending them off to the printer to become tangible objects.
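    To make "a set of specific movement instructions" concrete, here is a deliberately toy Python sketch of what slicer output looks like for a single hollow square. It is purely illustrative and is not Bambu Studio's algorithm; real slicers also handle infill, supports, temperatures, speeds, and retraction, and they track cumulative extrusion rather than using a fixed value.

```python
# A drastically simplified illustration of what a slicer does: turn a shape
# into layer-by-layer movement instructions. Real G-code tracks cumulative
# extrusion; the fixed "E1" below is just a placeholder.

def slice_square(side_mm=20.0, height_mm=2.0, layer_height_mm=0.2):
    corners = [(0, 0), (side_mm, 0), (side_mm, side_mm), (0, side_mm), (0, 0)]
    commands = []
    layers = round(height_mm / layer_height_mm)
    for layer in range(1, layers + 1):
        z = layer * layer_height_mm
        commands.append(f"; layer {layer}")
        commands.append(f"G1 Z{z:.2f}")                      # lift to the new layer
        for x, y in corners:
            commands.append(f"G1 X{x:.2f} Y{y:.2f} E1")      # move while extruding
    return commands

for line in slice_square()[:8]:   # peek at the first few instructions
    print(line)
```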

    Bambu Studio isn't the most approachable application, but if you've made it this far, it shouldn't be totally beyond your comprehension. For first-time setup, you'll choose your model of printer (all Bambu models and a healthy selection of third-party printers are officially supported), leave the filament settings as they are, and sign in if you want to use Bambu's cloud services. These sync printer settings and keep track of the models you save and download from MakerWorld, but a non-cloud LAN mode is available for the Bambu skeptics and privacy-conscious.
    For any newbie, pretty much all you need to do is connect your printer, open a .3MF or .STL file you've downloaded from MakerWorld or elsewhere, select your filament from the drop-down menu, click "slice plate," and then click "print." Things like the default 0.4 mm nozzle size and Bambu's included Textured PEI Build Plate are generally already factored in, though you may need to double-check these selections when you open a file for the first time.
    When you slice your build plate for the first time, the app will spit a pile of numbers back at you. There are two important ones for 3D printing neophytes to track. One is the "total filament" figure, which tells you how many grams of filament the printer will use to make your model (filament typically comes in 1 kg spools, and the printer generally won't track usage for you, so if you want to avoid running out in the middle of a job, you may want to keep track of what you're using). The second is the "total time" figure, which tells you how long the entire print will take from the first calibration steps to the end of the job.
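    If you want to do that bookkeeping yourself, the arithmetic is simple enough to script. A minimal sketch, assuming a 1 kg spool and a made-up spool price (swap in whatever your filament actually costs):

```python
# Rough per-print cost and spool tracking based on Bambu Studio's
# "total filament" estimate. The spool price is an assumption.

SPOOL_WEIGHT_G = 1000   # filament usually comes in 1 kg spools
SPOOL_PRICE = 20.00     # assumed price per spool in your local currency

def print_cost(total_filament_g):
    """Approximate material cost of one print."""
    return SPOOL_PRICE * total_filament_g / SPOOL_WEIGHT_G

def remaining_after(prints_g, spool_weight_g=SPOOL_WEIGHT_G):
    """Grams left on a spool after a list of prints."""
    return spool_weight_g - sum(prints_g)

# Example: a 42 g remote bracket and a 95 g headset hook
print(f"bracket cost: {print_cost(42):.2f}")
print(f"left on spool: {remaining_after([42, 95])} g")
```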

    Selecting your filament and/or temperature presets. If you have the Automatic Material System (AMS), this is also where you'll manage multicolor printing. Credit: Andrew Cunningham

    The main way to tweak print quality is to adjust the height of the layers that the A1 lays down. Credit: Andrew Cunningham

    Adding some additional infill can add some strength to prints, though 15 percent usually gives a decent amount of strength without overusing filament. Credit: Andrew Cunningham

    For some prints, scaling them up or down a bit can make them fit your needs better. Credit: Andrew Cunningham

    For items that are small enough, you can print a few at once using the clone function. For filaments with a gradient, this also makes the gradient effect more pronounced. Credit: Andrew Cunningham

    Bambu Studio estimates the amount of filament you'll use and the amount of time a print will take. Filament usually comes in 1 kg spools. Credit: Andrew Cunningham

    When selecting filament, people who stick to Bambu's first-party spools will have the easiest time, since optimal settings are already programmed into the app. But I've had almost zero trouble with the "generic" presets and the spools of generic Inland-branded filament I've bought from our local Micro Center, at least when sticking to PLA (polylactic acid, the most common and generally the easiest-to-print of the different kinds of filament you can buy). But we'll dive deeper into plastics in part 2 of this series.

    I won't pretend I'm skilled enough to do a deep dive on every single setting that Bambu Studio gives you access to, but here are a few of the odds and ends I've found most useful:

    The "clone" function, accessed by right-clicking an object and clicking "clone." Useful if you'd like to fit several copies of an object on the build plate at once, especially if you're using a filament with a color gradient and you'd like to make the gradient effect more pronounced by spreading it out over a bunch of prints.
    The "arrange all objects" function, the fourth button from the left under the "prepare" tab. Did you just clone a bunch of objects? Did you delete an individual object from a model because you didn't need to print that part? Bambu Studio will arrange everything on your build plate to optimize the use of space.
    Layer height, located in the sidebar directly beneath "Process" (which is directly underneath the area where you select your filament). For many functional parts, the standard 0.2 mm layer height is fine. Going with thinner layer heights adds to the printing time but can preserve more detail on prints that have a lot of it and slightly reduce the visible layer lines that give 3D-printed objects their distinct look (for better or worse). Thicker layer heights do the opposite, slightly reducing the amount of time a model takes to print but preserving less detail.
    Infill percentage and wall loops, located in the Strength tab beneath the "Process" sidebar item. For most everyday prints, you don't need to worry about messing with these settings much; the infill percentage determines the amount of your print's interior that's plastic and the part that's empty space (15 percent is a good happy medium most of the time between maintaining rigidity and overusing plastic). The number of wall loops determines how many layers the printer uses for the outside surface of the print, with more walls using more plastic but also adding a bit of extra strength and rigidity to functional prints that need it (think hooks, hangers, shelves and brackets, and other things that will be asked to bear some weight). A rough sketch of how these two settings trade off against filament use follows below.
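    As that rough illustration, here is a back-of-envelope estimate for a box-shaped print. The formula is my own simplification (it ignores top and bottom layers, among other things), not anything Bambu Studio exposes:

```python
# Toy estimate of plastic volume in a rectangular print as walls and infill
# change. Rough approximation for intuition only; the slicer's "total filament"
# figure is the number to trust.

LINE_WIDTH_MM = 0.4  # typical extrusion width for the stock 0.4 mm nozzle

def plastic_volume_mm3(x, y, z, wall_loops=2, infill=0.15):
    shell = wall_loops * LINE_WIDTH_MM          # thickness of the side walls
    inner_x = max(x - 2 * shell, 0)
    inner_y = max(y - 2 * shell, 0)
    interior = inner_x * inner_y * z            # space enclosed by the walls
    walls = x * y * z - interior                # plastic used by the walls
    return walls + interior * infill            # plus the sparsely filled interior

# A 60 x 40 x 20 mm box at common defaults vs. a beefed-up version
print(round(plastic_volume_mm3(60, 40, 20)))                           # 2 walls, 15% infill
print(round(plastic_volume_mm3(60, 40, 20, wall_loops=4, infill=0.40)))
```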

    My first prints

    A humble start: My very first print was a wall bracket for the remote for my office's ceiling fan.

    Credit: Andrew Cunningham

    When given the opportunity to use a 3D printer, my mind went first to aggressively practical stuff—prints for organizing the odds and ends that eternally float around my office or desk.
    When we moved into our current house, only one of the bedrooms had a ceiling fan installed. I put up remote-controlled ceiling fans in all the other bedrooms myself. And all those fans, except one, came with a wall-mounted caddy to hold the remote control. The first thing I decided to print was a wall-mounted holder for that remote control.
    MakerWorld is just one of several resources for ready-made 3D-printable files, but the ease with which I found a Hampton Bay Ceiling Fan Remote Wall Mount is pretty representative of my experience so far. At this point in the life cycle of home 3D printing, if you can think about it and it's not a terrible idea, you can usually find someone out there who has made something close to what you're looking for.
    I loaded up my black roll of PLA plastic—generally the cheapest, easiest-to-buy, easiest-to-work-with kind of 3D printer filament, though not always the best for prints that need more structural integrity—into the basic roll-holder that comes with the A1, downloaded that 3MF file, opened it in Bambu Studio, sliced the file, and hit print. It felt like there should have been extra steps in there somewhere. But that's all it took to kick the printer into action.
    After a few minutes of warmup—by default, the A1 has a thorough pre-print setup process where it checks the levelness of the bed and tests the flow rate of your filament for a few minutes before it begins printing anything—the nozzle started laying plastic down on my build plate, and inside of an hour or so, I had my first 3D-printed object.

    Print No. 2 was another wall bracket, this time for my gaming PC's gamepad and headset.

    Credit: Andrew Cunningham

    It wears off a bit after you successfully execute a print, but I still haven't quite lost the feeling of magic of printing out a fully 3D object that comes off the plate and then just exists in space along with me and all the store-bought objects in my office.
    The remote holder was, as I'd learn, a fairly simple print made under near-ideal conditions. But it was an easy success to start off with, and that success can help embolden you and draw you in, inviting more printing and more experimentation. And the more you experiment, the more you inevitably learn.
    This time, I talked about what I learned about basic terminology and the different kinds of plastics most commonly used by home 3D printers. Next time, I'll talk about some of the pitfalls I ran into after my initial successes, what I learned about using Bambu Studio, what I've learned about fine-tuning settings to get good results, and a whole bunch of 3D-printable upgrades and mods available for the A1.

    Andrew Cunningham
    Senior Technology Reporter

    Andrew Cunningham
    Senior Technology Reporter

    Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

    21 Comments
  • Video games' soaring prices have a cost beyond your wallet - the concept of ownership itself

    Video games' soaring prices have a cost beyond your wallet - the concept of ownership itself
    As the industry's big squeeze reaches consumers, a grim bargain emerges.

    Image credit: Adobe Stock, Microsoft

    Opinion

    by Chris Tapsell
    Deputy Editor

    Published on May 22, 2025

    Earlier this month, Microsoft bumped up the prices of its entire range of Xbox consoles, first-party video games, and most of its accessories. It comes a few weeks after Nintendo revealed a £396 Switch 2, with £75 copies of its own first-party fare in Mario Kart World, and a few months after Sony launched the exorbitant £700 PS5 Pro, announced a £40 price rise for its all-digital console in the UK (the second of this generation), and let it be known that it's considering even more price rises in the months to come.
    The suspicion - or, depending on where you live, perhaps hope - had been that when Donald Trump's ludicrously flip-flopping, self-defeating tariffs came into play, the US would bear the brunt of it. The reality is that we're still waiting on the full effects. But it's also clear, already, that this is far from just an American problem. The platform-holders are already spreading the costs, presumably to avoid an outright doubling of prices in one of their largest markets. PS5s in Japan now cost £170 more than they did at launch.
    That price rise, mind, took place long before the tariffs, as did the £700 PS5 Pro and the creeping costs of subscriptions such as Game Pass and PS Plus. Nor is it immediately clear how any of that justifies what you'll be charged for, say, a copy of Borderlands 4 - a price which hasn't been confirmed but which has still been justified by the ever graceful Randy Pitchford, a man who seems to stride across the world with one foot perpetually bared and ready to be put, squelching, square in it, and who says true fans will still "find a way" to buy his game.
    The truth is inflation has been at it here for a while, and that inflation is a funny beast, one which often comes with an awkward mix of genuine unavoidability - tariffs, wars, pandemics - and concealed opportunism. Games are their own case amongst the many, their prices instead impacted more by the cost of labour, which soars not because developers are paid particularly well but because of the continued, lagging impact of their executives' total miscalculation in assuming triple-A budgets and timescales could continue growing exponentially. And by said opportunism - peep how long it took for Microsoft and the like to announce those bumped prices after Nintendo came in with Mario Kart at £75.
    Anyway, the causes are, in a sense, kind of moot. The result of all this squeezing from near enough all angles of gaming's corporate world is less a pincer manoeuvre on the consumer than a suffocating, immaculately executed full-court press, a full team hurtling with ruthless speed towards the poor unwitting sucker at home on the sofa. Identifying whether gaming costs a fortune now for reasons we can or can't sympathise with does little to change the fact that gaming costs a fortune. And, to be clear, it really does cost a fortune.

    Things are getting very expensive in the world of video games. £700 for a PS5 Pro! | Image credit: Eurogamer

    Whenever complaints about video game prices come up, there is naturally a bit of pushback - games have always been expensive! What about the 90s! - usually via attempts to draw conclusions from economic data. Normally I'd be all on board with this - numbers can't lie! - but in this case it's a little different. Numbers can't lie, but they can, sometimes, be manipulated to prove almost anything you want - or just as often, simply misunderstood to the same ends.
    Instead, it's worth remembering that economics isn't just a numerical science. It is also a behavioural one - a psychological one. The impact of pricing is as much in the mind as it is on the spreadsheet, hence these very real notions of "consumer confidence" and pricing that continues to end in ".99". And so sometimes with pricing I find it helps to borrow another phrase from sport, alongside that full-court press, in the "eye test". Sports scouts use all kinds of numerical data to analyse prospective players these days, but the best ones still marry that with a bit of old-school viewing in the flesh. If a player looks good on paper and passes the eye test, they're probably the real deal. Likewise, if the impact of buying a video game at full price looks unclear in the data, but to your human eye feels about as wince-inducing as biting into a raw onion like it's an apple, and then rubbing said raw onion all over said eye, it's probably extremely bloody expensive and you should stop trying to be clever.
    Video games, to me, do feel bloody expensive. If I weren't in the incredibly fortunate position of being able to source or expense most of them for work, I am genuinely unsure if I'd be continuing with them as a hobby - at least beyond shifting my patterns, as so many players have over the years, away from premium console and PC games to the forever-tempting, free-to-play time-vampires like Fortnite or League of Legends. Which leads, finally, to the real point here: that there is another cost to rising game and console prices, beyond the one hitting you square in the wallet.

    How much is GTA 6 going to cost? $80 or more? | Image credit: Rockstar

    The other cost - perhaps the real cost, when things settle - is the notion of ownership itself. Plenty of physical media collectors, aficionados and diehards will tell you this has been locked in the sights of this industry for a long time, of course. They will point to gaming's sister entertainment industries of music, film and television, and the paradigm shift to streaming in each, as a sign of the inevitability of it all. And they will undoubtedly have a point. But this step change in the cost of gaming will only be an accelerant.
    Understanding that only takes a quick glance at the strategy of, say, Xbox in recent years. While Nintendo is still largely adhering to the buy-it-outright tradition and Sony is busy shooting off its toes with live service-shaped bullets, Microsoft has, like it or not, positioned itself rather deftly. After jacking up the cost of its flatlining hardware and platform-agnostic games, Xbox, its execs would surely argue, is also now rather counterintuitively the home of value gaming - if only because Microsoft itself is the one hoiking up the cost of your main alternative. Because supplanting the waning old faithfuls in this kind of scenario - trade-ins, short-term rentals - is, you guessed it, Game Pass.
    You could even argue the consoles are factored in here too. Microsoft, with its "this is an Xbox" campaign and long-stated ambition to reach players in the billions, has made it plain that it doesn't care where you play its games, as long as you're playing them. When all physical consoles are jumping up in price, thanks to that rising tide effect of inflation, the platform that lets you spend £15 a month to stream Clair Obscur: Expedition 33, Oblivion Remastered and the latest Doom straight to your TV without even buying one is, at least in theory (and not forgetting the BDS call for a boycott of them), looking like quite an attractive proposition.
    Xbox, for its part, has been chipping away at this idea for a while - we at Eurogamer had opinions about team green's disregard for game ownership as far back as the reveal of the Xbox One, in the ancient times of 2013. Then it was a different method: the once-horrifying face of digital rights management, or DRM, along with regulated digital game sharing and online-only requirements. Here in 2025 - with that disdain now platform-agnostic, with games being disappeared from people's libraries, with platforms like Steam forced by law to remind you that you're not actually buying your games at all, with older games increasingly only playable via subscriptions to Nintendo, Sony, and now Xbox, and with bosses making wild claims about AI's ability to "preserve" old games by making terrible facsimiles of them - that seems slightly quaint.
    More directly, Xbox has been talking about this very openly since at least 2021. As Ben Decker, then head of gaming services marketing at Xbox, said to me at the time: "Our goal for Xbox Game Pass really ladders up to our goal at Xbox, to reach the more than 3 billion gamers worldwide… we are building a future with this in mind."
    Four years on, that future might be now. Jacking up the cost of games and consoles alone won't do anything to grow gaming's userbase - still the panacea touted by the industry's top brass. Quite the opposite, obviously (although the Switch 2 looks set to still be massive, and the PS5, with all its price rises, still tracks in line with the price-cut PS4). But funneling more and more core players away from owning games, and towards a newly incentivised world where they merely pay a comparatively low monthly fee to access them, might just. How much of a difference that will truly make, and the consequences of it, remain up for debate of course. We've seen the impact of streaming on the other entertainment industries in turn, none for the better, but games are a medium of their own.
    Perhaps there's still a little room for optimism. Against the tide there are still organisations like Does It Play? and the Game History Foundation, or platforms such as itch.io and GOG, that exist precisely because of the growing resistance to that current. Just this week, Lost in Cult launched a new wave of luxurious, always-playable physical editions of acclaimed games, another small act of defiance - though perhaps another sign things are going the way of film and music, where purists splurge on vinyl and Criterion Collection Blu-rays but the vast majority remain on Netflix and Spotify. And as uncomfortable as it may be to hear for those - including this author! - who wish for this medium to be preserved and cared for like any other great artform, there will be some who argue that a model where more games can be enjoyed by more people, for a lower cost, is worth it.

    Game Pass often offers great value, but the library is always in a state of flux. Collectors may need to start looking at high-end physical editions. | Image credit: Microsoft

    There's also another point to bear in mind here. Nightmarish as it may be for preservation and consumer rights, against the backdrop of endless layoffs and instability many developers tout the stability of a predefined Game Pass or PS Plus deal over taking a punt in the increasingly crowded, choppy seas of the open market. Bethesda this week has just boasted Doom: The Dark Ages' achievement of becoming the most widely-playedDoom game ever. That despite it reaching only a fraction of peak Steam concurrents in the same period as its predecessor, Doom: Eternal - a sign, barring some surprise shift away from PC gaming to consoles, that people really are beginning to choose playing games on Game Pass over buying them outright. The likes of Remedy and Rebellion tout PS Plus and Game Pass as stabilisers, or even accelerants, for their games launching straight onto the services. And independent studios and publishers of varying sizes pre-empted that when we spoke to them for a piece about this exact this point, more than four years ago - in a sense, we're still waiting for a conclusive answer to a question we first began investigating back in 2021: Is Xbox Game Pass just too good to be true?
    We've talked, at this point, at great length about how this year would be make-or-break for the triple-A model in particular. About how the likes of Xbox, or Warner Bros., or the many others have lost sight of their purpose - and in the process, their path to sustainability - in the quest for exponential growth. How £700 Pro edition consoles are an argument against Pro editions altogether. And about how, it's becoming clear, the old industry we once knew is no more, with its new form still yet to take shape.
    There's an argument now, however, that a grim new normal for preservation and ownership may, just as grimly, be exactly what the industry needs to save itself. It would be in line with what we've seen from the wider world of technology and media - and really, the wider world itself. A shift from owning to renting. That old chestnut of all the capital slowly rising, curdling at the top. The public as mere tenants in a house of culture owned by someone, somewhere else. It needn't be this way, of course. If this all sounds like a particularly unfavourable trade-in, remember this too: it's one that could almost certainly have been avoided.
  • Microsoft Open Sources Windows Subsystem for Linux

    Windows Subsystem for Linux (WSL) is now open source, Microsoft said Monday. The tool, which allows developers to run Linux distributions directly in Windows, is available for download, modification, and contribution. "We want Windows to be a great dev box," said Pavan Davuluri, corporate VP at Microsoft. "Having great WSL performance and capabilities" allows developers "to live in the Windows-native experience and take advantage of all they need in Linux."

    First launched in 2016 with an emulated Linux kernel, WSL switched to using the actual Linux kernel in 2019 with WSL 2, improving compatibility. The system has since gained support for GPUs, graphical applications, and systemd. Microsoft significantly refactored core Windows components to make WSL a standalone system before open sourcing it.
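    As a small illustration of what that tooling looks like in practice, here is a minimal Python sketch that drives the long-standing wsl.exe command line from the Windows side; it assumes a Windows host with WSL already set up and a distribution named "Ubuntu" installed, which is an example name rather than anything stated above.

        # Minimal sketch: querying WSL from Python on a Windows host via the wsl.exe CLI.
        # Assumes wsl.exe is on PATH and a distribution named "Ubuntu" is installed.
        import subprocess

        # List installed distributions; wsl.exe prints its own status output as UTF-16,
        # so decode the raw bytes explicitly rather than relying on the default codec.
        listing = subprocess.run(["wsl.exe", "--list", "--verbose"],
                                 capture_output=True, check=True)
        print(listing.stdout.decode("utf-16-le", errors="ignore"))

        # Run a single Linux command inside the named distribution; output from the
        # Linux side comes back as ordinary UTF-8.
        kernel = subprocess.run(["wsl.exe", "-d", "Ubuntu", "uname", "-r"],
                                capture_output=True, check=True)
        print("Kernel reported by WSL:", kernel.stdout.decode("utf-8", errors="replace").strip())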

    Read more of this story at Slashdot.
  • Nintendo could become "primary partner for third-party game publishers" over next five years, new analysis claims

    Nintendo could become "primary partner for third-party game publishers" over next five years, new analysis claims
    Analyst DFC also suggests the Switch 2 could sell over 100m units by the end of 2029

    Image credit: Nintendo

    News

    by Vikki Blake
    Contributor

    Published on May 15, 2025

    Nintendo could become the "primary partner" for third-party publishers in the next five years.
    That's according to new research from DFC Intelligence, via Eurogamer, which projects Nintendo will exceed its own 15m-unit sales forecast later this year by at least a million, and possibly reach 20 million.
    "Reflecting its historically conservative approach to new product forecasting, Nintendo is estimating sales of 15m Switch 2 units through the end of its fiscal year in March 2026," DFC wrote.
    "When the original Switch launched in March 2017, Nintendo initially projected 10m units for the fiscal year but ultimately sold 15m units in that period. Based on that track record, the fact that Nintendo is forecasting a 15m unit number gives us confidence that they will be able to ramp up supply and navigate tariff challenges."
    By the end of 2029, however, DFC forecasts the Switch 2 will have sold over 100m units, making it "the leading console system by a wide margin."
    "The next few years could see Nintendo for the first time becoming the primary partner for third-party game publishers," the analysis added.
    Earlier this week, Nintendo president Shuntaro Furukawa made it clear that the US tariffs have not been factored into the Switch 2's higher price point. In an earnings call Q&A, Furukawa said the ¥49,980 / $499.99 / £395.99 price was determined by manufacturing costs, consumer impressions, market conditions, and exchange rates.
  • A peer’s promise can help kids pass the marshmallow test

    resistance is futile

    A peer’s promise can help kids pass the marshmallow test

    Younger children were slightly more likely to successfully delay gratification than older children.

    Jennifer Ouellette



    May 15, 2025 9:46 am


    For decades, Walter Mischel's "marshmallow test" was viewed as a key predictor for children's future success, but reality is a bit more nuanced.

    Credit: Igniter Media



    You've probably heard of the infamous "marshmallow test," in which young children are asked to wait to eat a yummy marshmallow placed in front of them while left alone in a room for 10 to 15 minutes. If they successfully do so, they get a second marshmallow; if not, they don't. The test has become a useful paradigm for scientists interested in studying the various factors that might influence one's ability to delay gratification, thereby promoting social cooperation. According to a paper published in the journal Royal Society Open Science, one factor is trust: If children are paired in a marshmallow test and one promises not to eat their treat for the specified time, the other is much more likely to also refrain from eating it.
    As previously reported, psychologist Walter Mischel's landmark behavioral study involved 600 kids between the ages of four and six, all culled from Stanford University's Bing Nursery School. He would give each child a marshmallow and give them the option of eating it immediately if they chose. But if they could wait 15 minutes, they would get a second marshmallow as a reward. Then Mischel would leave the room, and a hidden video camera would tape what happened next.
    Some kids just ate the marshmallow right away. Others found a handy distraction: covering their eyes, kicking the desk, or poking at the marshmallow with their fingers. Some smelled it, licked it, or took tiny nibbles around the edges. Roughly one-third of the kids held out long enough to earn a second marshmallow. Several years later, Mischel noticed a strong correlation between the success of some of those kids later in life (better grades, higher self-confidence) and their ability to delay gratification in nursery school. Mischel's follow-up study confirmed the correlation.
    Mischel himself cautioned against over-interpreting the results, emphasizing that children who simply can't hold out for that second marshmallow are not necessarily doomed to a life of failure. A more nuanced picture was offered in a 2018 study that replicated the marshmallow test with preschoolers. It found the same correlation between later achievement and the ability to resist temptation in preschool, but that correlation was much less significant after the researchers factored in such aspects as family background, home environment, and so forth. Attentiveness might be yet another contributing factor, according to a 2019 paper.

    There have also been several studies examining the effects of social interdependence and similar social contexts on children's ability to delay gratification, using variations of the marshmallow test paradigm. For instance, in 2020, a team of German researchers adapted the classic experimental setup using Oreos and vanilla cookies with German and Kenyan schoolchildren, respectively. If both children waited to eat their treat, they received a second cookie as a reward; if one did not wait, neither child received a second cookie. They found that the kids were more likely to delay gratification when they depended on each other, compared to the standard marshmallow test.

    An online paradigm
    Rebecca Koomen, a psychologist now at the University of Manchester, co-authored the 2020 study as well as this latest one, which sought to build on those findings. Koomen et al. structured their experiments similarly, this time recruiting 66 UK children, ages five to six, as subjects. They focused on how promising a partner not to eat a favorite treat could inspire sufficient trust to delay gratification, compared to the social risk of one or both partners breaking that promise. Any parent could tell you that children of this age are really big on the importance of promises, and science largely concurs; a promise has been shown to enhance interdependent cooperation in this age group.
    Koomen and her Manchester colleagues added an extra twist: They conducted their version of the marshmallow test online, to test its effectiveness compared to lab-based versions of the experiment. "Given face-to-face testing restrictions during the COVID pandemic, this, to our knowledge, represents the first cooperative marshmallow study to be conducted online, thereby adding to the growing body of literature concerning the validity of remote testing methods," they wrote.
    The type of treat was chosen by each child's parents, ensuring it was a favorite: chocolate, candy, biscuits, and marshmallows, mostly, although three kids loved potato chips, fruit, and nuts, respectively. Parents were asked to set up the experiment in a quiet room with minimal potential distractions, outfitted with a webcam to monitor the experiment. Each child was shown a video of a "confederate child" who either clearly promised not to eat the treat or more ambiguously suggested they might succumb and eat their treat. Then the scientist running the experiment would leave the Zoom meeting for an undisclosed period of time, after telling the child that if both of them resisted eating the treat, they would each receive a second one; if one of them failed, neither would be rewarded. Children could not see or communicate with their paired confederates for the duration of the experiment. The scientist returned after ten minutes to see if the child had managed to delay gratification. Once the experiment had ended, the team actually did reward the participant child regardless of the outcome, "to end the study on a positive note."
    To account for unavoidable accidental distractions, the paper reports results from both the full dataset of all 68 participants and a subset of 48 children, excluding those who experienced some type of disruption during the ten-minute experiment. In both cases, children whose confederate clearly promised not to eat their treat waited longer to eat their treat compared to the more ambiguous "social risk" condition. And younger children were slightly more likely to successfully delay gratification than older children, although this result was not statistically significant. The authors suggest this small difference may be due to the fact that older children are more likely to have experienced broken promises, thereby learning "that commitments are not always fulfilled."
    Of course, there are always caveats. For instance, while specific demographic data was not collected, all the children had predominantly white middle-class backgrounds, so the results reflect how typical children in northern England behave in such situations. The authors would like to see their online experiment repeated cross-culturally in the future. And the limitation of one-way communication "likely prevented partners from establishing common ground, namely their mutual commitment to fulfilling their respective roles, which is thought to be a key principle of interdependence," the authors wrote.
    DOI: Royal Society Open Science, 2025. 10.1098/rsos.250392

    Jennifer Ouellette
    Senior Writer


    Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

    0 Comments
    #peers #promise #can #help #kids
    A peer’s promise can help kids pass the marshmallow test
    A peer’s promise can help kids pass the marshmallow test
    Younger children were slightly more likely to successfully delay gratification than older children.

    By Jennifer Ouellette
    May 15, 2025 9:46 am

    For decades, Walter Mischel's "marshmallow test" was viewed as a key predictor for children's future success, but reality is a bit more nuanced. Credit: Igniter Media

    You've probably heard of the infamous "marshmallow test," in which young children are asked to wait to eat a yummy marshmallow placed in front of them while left alone in a room for 10 to 15 minutes. If they successfully do so, they get a second marshmallow; if not, they don't. The test has become a useful paradigm for scientists interested in studying the various factors that might influence one's ability to delay gratification, thereby promoting social cooperation. According to a paper published in the journal Royal Society Open Science, one factor is trust: If children are paired in a marshmallow test and one promises not to eat their treat for the specified time, the other is much more likely to also refrain from eating it.

    As previously reported, psychologist Walter Mischel's landmark behavioral study involved 600 kids between the ages of four and six, all culled from Stanford University's Bing Nursery School. He would give each child a marshmallow and give them the option of eating it immediately if they chose. But if they could wait 15 minutes, they would get a second marshmallow as a reward. Then Mischel would leave the room, and a hidden video camera would tape what happened next.

    Some kids just ate the marshmallow right away. Others found a handy distraction: covering their eyes, kicking the desk, or poking at the marshmallow with their fingers. Some smelled it, licked it, or took tiny nibbles around the edges. Roughly one-third of the kids held out long enough to earn a second marshmallow. Several years later, Mischel noticed a strong correlation between the success of some of those kids later in life (better grades, higher self-confidence) and their ability to delay gratification in nursery school. Mischel's follow-up study confirmed the correlation. Mischel himself cautioned against over-interpreting the results, emphasizing that children who simply can't hold out for that second marshmallow are not necessarily doomed to a life of failure.

    A more nuanced picture was offered in a 2018 study that replicated the marshmallow test with preschoolers. It found the same correlation between later achievement and the ability to resist temptation in preschool, but that correlation was much less significant after the researchers factored in such aspects as family background, home environment, and so forth. Attentiveness might be yet another contributing factor, according to a 2019 paper. There have also been several studies examining the effects of social interdependence and similar social contexts on children's ability to delay gratification, using variations of the marshmallow test paradigm. For instance, in 2020, a team of German researchers adapted the classic experimental setup using Oreos and vanilla cookies with German and Kenyan schoolchildren, respectively.
    If both children waited to eat their treat, they received a second cookie as a reward; if one did not wait, neither child received a second cookie. They found that the kids were more likely to delay gratification when they depended on each other, compared to the standard marshmallow test.

    An online paradigm

    Rebecca Koomen, a psychologist now at the University of Manchester, co-authored the 2020 study as well as this latest one, which sought to build on those findings. Koomen et al. structured their experiments similarly, this time recruiting 66 UK children, ages five to six, as subjects. They focused on how promising a partner not to eat a favorite treat could inspire sufficient trust to delay gratification, compared to the social risk of one or both partners breaking that promise. Any parent could tell you that children of this age are really big on the importance of promises, and science largely concurs; a promise has been shown to enhance interdependent cooperation in this age group.

    Koomen and her Manchester colleagues added an extra twist: They conducted their version of the marshmallow test online to test its effectiveness compared to lab-based versions of the experiment. (Prior results from similar online studies have been mixed.) "Given face-to-face testing restrictions during the COVID pandemic, this, to our knowledge, represents the first cooperative marshmallow study to be conducted online, thereby adding to the growing body of literature concerning the validity of remote testing methods," they wrote.

    The type of treat was chosen by each child's parents, ensuring it was a favorite: chocolate, candy, biscuits, and marshmallows, mostly, although three kids loved potato chips, fruit, and nuts, respectively. Parents were asked to set up the experiment in a quiet room with minimal potential distractions, outfitted with a webcam to monitor the experiment. Each child was shown a video of a "confederate child" who either clearly promised not to eat the treat or more ambiguously suggested they might succumb and eat their treat. (The confederate child refrained from eating the treat in both conditions, although the participant child did not know that.)

    Then the scientist running the experiment would leave the Zoom meeting for an undisclosed period of time, after telling the child that if both of them resisted eating the treat (including licking or nibbling at it), they would each receive a second one; if one of them failed, neither would be rewarded. Children could not see or communicate with their paired confederates for the duration of the experiment. The scientist returned after ten minutes to see if the child had managed to delay gratification. Once the experiment had ended, the team rewarded the participant child regardless of the outcome, "to end the study on a positive note."

    To control for unavoidable accidental distractions, the paper includes results from both the full dataset of all 68 participants and a subset of 48 children, excluding those who experienced some type of disruption during the ten-minute experiment. In both cases, children whose confederate clearly promised not to eat their treat waited longer before eating their own treat than those in the more ambiguous "social risk" condition. And younger children were slightly more likely to successfully delay gratification than older children, although this result was not statistically significant.
    The authors suggest this small difference may be due to the fact that older children are more likely to have experienced broken promises, thereby learning "that commitments are not always fulfilled."

    Of course, there are always caveats. For instance, while specific demographic data was not collected, all the children came from predominantly white, middle-class backgrounds, so the results reflect how typical children in northern England behave in such situations. The authors would like to see their online experiment repeated cross-culturally in the future. And the limitation of one-way communication "likely prevented partners from establishing common ground, namely their mutual commitment to fulfilling their respective roles, which is thought to be a key principle of interdependence," the authors wrote.

    DOI: Royal Society Open Science, 2025. 10.1098/rsos.250392

    Jennifer Ouellette is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.