• IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029


    By John P. Mello Jr.
    June 11, 2025 5:00 AM PT

    IBM unveiled its plan to build IBM Quantum Starling, shown in this rendering. Starling is expected to be the first large-scale, fault-tolerant quantum system. (Image Credit: IBM)

    IBM revealed Tuesday its roadmap for bringing a large-scale, fault-tolerant quantum computer, IBM Quantum Starling, online by 2029, which is significantly earlier than many technologists thought possible.
    The company predicts that when its new Starling computer is up and running, it will be capable of performing 20,000 times more operations than today’s quantum computers — a computational state so vast it would require the memory of more than a quindecillion (10⁴⁸) of the world’s most powerful supercomputers to represent.
    “IBM is charting the next frontier in quantum computing,” Big Blue CEO Arvind Krishna said in a statement. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.”
    IBM’s plan to deliver a fault-tolerant quantum system by 2029 is ambitious but not implausible, especially given the rapid pace of its quantum roadmap and past milestones, observed Ensar Seker, CISO at SOCRadar, a threat intelligence company in Newark, Del.
    “They’ve consistently met or exceeded their qubit scaling goals, and their emphasis on modularity and error correction indicates they’re tackling the right challenges,” he told TechNewsWorld. “However, moving from thousands to millions of physical qubits with sufficient fidelity remains a steep climb.”
    A qubit is the fundamental unit of information in quantum computing, capable of representing a zero, a one, or both simultaneously due to quantum superposition. In practice, fault-tolerant quantum computers use clusters of physical qubits working together to form a logical qubit — a more stable unit designed to store quantum information and correct errors in real time.
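    As a rough illustration (plain Python, not tied to any IBM tooling), a single qubit’s measurement statistics can be modeled as a normalized pair of amplitudes:

```python
import math

# A qubit state is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Measuring it yields 0 with probability
# |alpha|^2 and 1 with probability |beta|^2.
alpha = 1 / math.sqrt(2)  # equal superposition of 0 and 1
beta = 1 / math.sqrt(2)

p0 = abs(alpha) ** 2  # probability of reading 0
p1 = abs(beta) ** 2   # probability of reading 1
assert abs(p0 + p1 - 1) < 1e-12  # the state is normalized
```

    With equal amplitudes, each outcome occurs half the time — the “both simultaneously” of superposition shows up only as statistics once the qubit is measured.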
    Realistic Roadmap
    Luke Yang, an equity analyst with Morningstar Research Services in Chicago, believes IBM’s roadmap is realistic. “The exact scale and error correction performance might still change between now and 2029, but overall, the goal is reasonable,” he told TechNewsWorld.
    “Given its reliability and professionalism, IBM’s bold claim should be taken seriously,” said Enrique Solano, co-CEO and co-founder of Kipu Quantum, a quantum algorithm company with offices in Berlin and Karlsruhe, Germany.
    “Of course, it may also fail, especially when considering the unpredictability of hardware complexities involved,” he told TechNewsWorld, “but companies like IBM exist for such challenges, and we should all be positively impressed by its current achievements and promised technological roadmap.”
    Tim Hollebeek, vice president of industry standards at DigiCert, a global digital security company, added: “IBM is a leader in this area, and not normally a company that hypes their news. This is a fast-moving industry, and success is certainly possible.”
    “IBM is attempting to do something that no one has ever done before and will almost certainly run into challenges,” he told TechNewsWorld, “but at this point, it is largely an engineering scaling exercise, not a research project.”
    “IBM has demonstrated consistent progress, has committed $30 billion over five years to quantum computing, and the timeline is within the realm of technical feasibility,” noted John Young, COO of Quantum eMotion, a developer of quantum random number generator technology, in Saint-Laurent, Quebec, Canada.
    “That said,” he told TechNewsWorld, “fault-tolerant in a practical, industrial sense is a very high bar.”
    Solving the Quantum Error Correction Puzzle
    To make a quantum computer fault-tolerant, errors need to be corrected so large workloads can be run without faults. In a quantum computer, errors are reduced by clustering physical qubits to form logical qubits, which have lower error rates than the underlying physical qubits.
    “Error correction is a challenge,” Young said. “Logical qubits require thousands of physical qubits to function reliably. That’s a massive scaling issue.”
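    The intuition behind clustering can be seen in a classical stand-in: a majority-vote repetition code, in which the encoded bit fails far less often than any single noisy bit. This sketch is only an analogy — real quantum codes are far more involved — but the scaling logic is the same:

```python
import random

def logical_error_rate(p, n_copies=3, trials=100_000, seed=1):
    """Estimate the failure rate of a majority-vote 'logical bit'
    built from n_copies physical bits, each of which flips with
    independent probability p (a classical stand-in for a
    quantum repetition code)."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(n_copies))
        if flips > n_copies // 2:  # a majority of copies flipped
            errors += 1
    return errors / trials

p_physical = 0.05
p_logical = logical_error_rate(p_physical)
# The encoded bit fails far less often than a single physical bit.
assert p_logical < p_physical
```

    With a 5% physical error rate, three copies already push the logical rate below 1% — but each extra factor of protection costs more physical bits, which is exactly the overhead problem IBM says it is attacking.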
    IBM explained in its announcement that creating increasing numbers of logical qubits capable of executing quantum circuits with as few physical qubits as possible is critical to quantum computing at scale. Until now, a clear path to building such a fault-tolerant system without unrealistic engineering overhead had not been published.

    Alternative and previous gold-standard, error-correcting codes present fundamental engineering challenges, IBM continued. To scale, they would require an unfeasible number of physical qubits to create enough logical qubits to perform complex operations — necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be implemented beyond small-scale experiments and devices.
    In two research papers released with its roadmap, IBM detailed how it will overcome the challenges of building the large-scale, fault-tolerant architecture needed for a quantum computer.
    One paper outlines the use of quantum low-density parity check (qLDPC) codes to reduce physical qubit overhead. The other describes methods for decoding errors in real time using conventional computing.
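    The parity-check idea behind such codes can be sketched classically: sparse checks compare small subsets of bits, and the pattern of failed checks (the syndrome) localizes an error without reading the data directly. A toy example using the 3-bit repetition code, offered only as an illustration of the principle:

```python
# Toy parity-check decoding. H is the parity-check matrix of the
# classical 3-bit repetition code: each row compares two bits.
# A nonzero syndrome (H @ word mod 2) reveals which bit flipped
# without inspecting the encoded value itself -- the principle
# that qLDPC codes scale up with many sparse checks.
H = [
    [1, 1, 0],  # check 1: bits 0 and 1 should agree
    [0, 1, 1],  # check 2: bits 1 and 2 should agree
]

def syndrome(word):
    return tuple(sum(h * b for h, b in zip(row, word)) % 2 for row in H)

codeword = [1, 1, 1]       # valid codeword: all checks pass
assert syndrome(codeword) == (0, 0)

corrupted = [1, 0, 1]      # bit 1 flipped in transit
assert syndrome(corrupted) == (1, 1)  # both checks fire -> middle bit
```

    The decoder’s job — done here by inspection, and by dedicated conventional hardware in IBM’s real-time scheme — is to map each syndrome back to the most likely error and correct it.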
    According to IBM, a practical fault-tolerant quantum architecture must:

    Suppress enough errors for useful algorithms to succeed
    Prepare and measure logical qubits during computation
    Apply universal instructions to logical qubits
    Decode measurements from logical qubits in real time and guide subsequent operations
    Scale modularly across hundreds or thousands of logical qubits
    Be efficient enough to run meaningful algorithms using realistic energy and infrastructure resources

    Aside from the technological challenges that quantum computer makers are facing, there may also be some market challenges. “Locating suitable use cases for quantum computers could be the biggest challenge,” Morningstar’s Yang maintained.
    “Only certain computing workloads, such as random circuit sampling [RCS], can fully unleash the computing power of quantum computers and show their advantage over the traditional supercomputers we have now,” he said. “However, workloads like RCS are not very commercially useful, and we believe commercial relevance is one of the key factors that determine the total market size for quantum computers.”
    Q-Day Approaching Faster Than Expected
    For years now, organizations have been told they need to prepare for “Q-Day” — the day a quantum computer will be able to crack all the encryption they use to keep their data secure. This IBM announcement suggests the window for action to protect data may be closing faster than many anticipated.
    “This absolutely adds urgency and credibility to the security expert guidance on post-quantum encryption being factored into their planning now,” said Dave Krauthamer, field CTO of QuSecure, maker of quantum-safe security solutions, in San Mateo, Calif.
    “IBM’s move to create a large-scale fault-tolerant quantum computer by 2029 is indicative of the timeline collapsing,” he told TechNewsWorld. “A fault-tolerant quantum computer of this magnitude could be well on the path to crack asymmetric ciphers sooner than anyone thinks.”

    “Security leaders need to take everything connected to post-quantum encryption as a serious measure and work it into their security plans now — not later,” he said.
    Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., pointed out that IBM is just the latest in a surge of quantum companies announcing that computational breakthroughs are coming within just a few years.
    “It leads to the question of whether the U.S. government’s original PQC [post-quantum cryptography] preparation date of 2030 is still a safe date,” he told TechNewsWorld.
    “It’s starting to feel a lot more risky for any company to wait until 2030 to be prepared against quantum attacks. It also flies in the face of the latest cybersecurity EO [Executive Order] that relaxed PQC preparation rules as compared to Biden’s last EO PQC standard order, which told U.S. agencies to transition to PQC ASAP.”
    “Most US companies are doing zero to prepare for Q-Day attacks,” he declared. “The latest executive order seems to tell U.S. agencies — and indirectly, all U.S. businesses — that they have more time to prepare. It’s going to cause even more agencies and businesses to be less prepared during a time when it seems multiple quantum computing companies are making significant progress.”
    “It definitely feels that something is going to give soon,” he said, “and if I were a betting man, and I am, I would bet that most U.S. companies are going to be unprepared for Q-Day on the day Q-Day becomes a reality.”

    John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.

  • CERT Director Greg Touhill: To Lead Is to Serve

    Greg Touhill, director of the Software Engineering Institute’s (SEI) Computer Emergency Response Team (CERT) division, is an atypical technology leader. For one thing, he’s held tech and other leadership positions spanning the US Air Force, the US government, the private sector, and now SEI’s CERT. More importantly, he’s been a major force in the cybersecurity realm, making the world a safer place and even saving lives.
    Touhill earned a bachelor’s degree from the Pennsylvania State University, a master’s degree from the University of Southern California, and a master’s degree from the Air War College; he was a senior executive fellow at the Harvard University Kennedy School of Government and completed executive education studies at the University of North Carolina.
    “I was a student intern at Carnegie Mellon, but I was going to college at Penn State and studying chemical engineering. As an Air Force ROTC scholarship recipient, I knew I was going to become an Air Force officer but soon realized that I didn’t necessarily want to be a chemical engineer in the Air Force,” says Touhill.
    “Because I passed all the mathematics, physics, and engineering courses, I ended up becoming a communications, electronics, and computer systems officer in the Air Force. I spent 30 years, one month and three days on active duty in the United States Air Force, eventually retiring as a brigadier general and having done many different types of jobs that were available to me within and even beyond my career field.”
    Specifically, he was an operational commander at the squadron, group, and wing levels. For example, as a colonel, Touhill served as director of command, control, communications and computers (C4) for the United States Central Command Forces; then he was appointed chief information officer and director, communications and information at Air Mobility Command.
    Later, he served as commander, 81st Training Wing at Keesler Air Force Base, where he was promoted to brigadier general and commanded over 12,500 personnel. After that, he served as the senior defense officer and US defense attaché at the US Embassy in Kuwait, before concluding his military career as the chief information officer and director, C4 systems at the US Transportation Command, one of 10 US combatant commands, where he and his team were awarded the NSA Rowlett Award for the best cybersecurity program in the government.
    While in the Air Force, Touhill received numerous awards and decorations, including the Bronze Star medal and the Air Force Science and Engineering Award. He is the only three-time recipient of the USAF C4 Professionalism Award.
    “I got to serve at major combatant commands, work with coalition partners from many different countries and represented the US as part of a diplomatic mission to Kuwait for two years as the senior defense official at a time when America was withdrawing forces out of Iraq. I also led the negotiation of a new bilateral defense agreement with the Kuwaitis,” says Touhill.
    “Then I was recruited to continue my service and was asked to serve as the deputy assistant secretary of cybersecurity and communications at the Department of Homeland Security, where I ran the operations of what is now known as the Cybersecurity and Infrastructure Security Agency. I was there at a pivotal moment because we were building up the capacity of that organization and setting the stage for it to become its own agency.”
    While at DHS, there were many noteworthy breaches, including the infamous US Office of Personnel Management (OPM) breach. Those events led to Obama’s visit to the National Cybersecurity and Communications Integration Center. “I got to brief the president on the state of cybersecurity, what we had seen with the OPM breach and some other deficiencies,” says Touhill.
“I was on the federal CIO council as the cybersecurity advisor to that since I’d been a federal CIO before and I got to conclude my federal career by being the first United States government chief information security officer. From there, I pivoted to industry, but I also got to return to Carnegie Mellon as a faculty member at Carnegie Mellon’s Heinz College, where I've been teaching since January 2017.” Related:Touhill has been involved in three startups, two of which were successfully acquired. He also served on three Fortune 100 advisory boards and on the Information Systems Audit and Control Association board, eventually becoming its chair for a term during the seven years he served there. Touhill just celebrated his fourth year at CERT, which he considers the pinnacle of the cybersecurity profession and everything he’s done to date. “Over my career I've led teams that have done major software builds in the national security space. I've also been the guy who's pulled cables and set up routers, hubs and switches, and I've been a system administrator. I've done everything that I could do from the keyboard up all the way up to the White House,” says Touhill. “For 40 years, the Software Engineering Institute has been leading the world in secure by design, cybersecurity, software engineering, artificial intelligence and engineering, pioneering best practices, and figuring out how to make the world a safer more secure and trustworthy place. I’ve had a hand in the making of today’s modern military and government information technology environment, beginning as a 22-year-old lieutenant, and hope to inspire the next generation to do even better.” What ‘Success’ Means Many people would be satisfied with their careers as a brigadier general, a tech leader, the White House’s first anything, or working at CERT, let alone running it. 
Touhill has spent his entire career making the world a safer place, so it’s not surprising that he considers his greatest achievement saving lives. “In the Middle East and Iraq, convoys were being attacked with improvised explosive devices. There were also ‘direct fire’ attacks where people are firing weapons at you and indirect fire attacks where you could be in the line of fire,” says Touhill. “The convoys were using SINCGARS line-of-site walkie-talkies for communications that are most effective when the ground is flat, and Iraq is not flat. As a result, our troops were at risk of not having reliable communications while under attack. As my team brainstormed options to remedy the situation, one of my guys found some technology, about the size of an iPhone, that could covert a radio signal, which is basically a waveform, into a digital pulse I could put on a dedicated network to support the convoy missions.” For million, Touhill and his team quickly architected, tested, and fielded the Radio over IP networkthat had a 99% reliability rate anywhere in Iraq. Better still, convoys could communicate over the network using any radios. That solution saved a minimum of six lives. In one case, the hospital doctor said if the patient had arrived five minutes later, he would have died. Sage Advice Anyone who has ever spent time in the military or in a military family knows that soldiers are very well disciplined, or they wash out. Other traits include being physically fit, mentally fit, and achieving balance in life, though that’s difficult to achieve in combat. Still, it’s a necessity. “I served three and a half years down range in combat operations. My experience taught me you could be doing 20-hour days for a year or two on end. If you haven’t built a good foundation of being disciplined and fit, it impacts your ability to maintain presence in times of stress, and CISOs work in stressful situations,” says Touhill. 
“Staying fit also fortifies you for the long haul, so you don’t get burned out as fast.” Another necessary skill is the ability to work well with others.  “Cybersecurity is an interdisciplinary practice. One of the great joys I have as CERT director is the wide range of experts in many different fields that include software engineers, computer engineers, computer scientists, data scientists, mathematicians and physicists,” says Touhill. “I have folks who have business degrees and others who have philosophy degrees. It's really a rich community of interests all coming together towards that common goal of making the world a safer, more secure and more trusted place in the cyber domain. We’re are kind of like the cyber neighborhood watch for the whole world.” He also says that money isn’t everything, having taken a pay cut to go from being an Air Force brigadier general to the deputy assistant secretary of the Department of Homeland Security . “You’ll always do well if you pick the job that matters most. That’s what I did, and I’ve been rewarded every step,” says Touhill.  The biggest challenge he sees is the complexity of cyber systems and software, which can have second, third, and fourth order effects.  “Complexity raises the cost of the attack surface, increases the attack surface, raises the number of vulnerabilities and exploits human weaknesses,” says Touhill. “The No. 1 thing we need to be paying attention to is privacy when it comes to AI because AI can unearth and discover knowledge from data we already have. While it gives us greater insights at greater velocities, we need to be careful that we take precautions to better protect our privacy, civil rights and civil liberties.” 
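Touhill's Radio over IP story describes a simple pipeline: an analog radio waveform is digitized into pulses, framed into packets, and carried over a dedicated IP network, with the receiver reassembling the stream. The article gives no implementation details of Ripper Net, so the following is only a minimal conceptual sketch; the function names, 16-bit PCM quantization, and sequence-numbered framing are illustrative assumptions, not the fielded system's design.

```python
import struct

def digitize(samples):
    """Quantize a normalized analog waveform (-1.0..1.0) to 16-bit PCM bytes."""
    clamped = [max(-1.0, min(1.0, s)) for s in samples]
    return struct.pack(f"<{len(clamped)}h", *(int(s * 32767) for s in clamped))

def packetize(pcm, payload_size=320):
    """Split the PCM stream into fixed-size payloads, each prefixed with a
    4-byte sequence number so the receiver can restore ordering."""
    packets = []
    for seq, start in enumerate(range(0, len(pcm), payload_size)):
        payload = pcm[start:start + payload_size]
        packets.append(struct.pack("<I", seq) + payload)
    return packets

def reassemble(packets):
    """Sort packets by sequence number and strip the headers, recovering
    the original PCM stream even if packets arrived out of order."""
    ordered = sorted(packets, key=lambda p: struct.unpack("<I", p[:4])[0])
    return b"".join(p[4:] for p in ordered)
```

The sequence-number header is the key design point: on a lossy, high-latency battlefield link, datagrams can arrive reordered, and numbering each frame lets the far end play the voice stream back in the right order regardless of arrival order.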
    WWW.INFORMATIONWEEK.COM
    CERT Director Greg Touhill: To Lead Is to Serve