IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029
By John P. Mello Jr.
June 11, 2025 5:00 AM PT
IBM unveiled its plan to build IBM Quantum Starling, shown in this rendering. Starling is expected to be the first large-scale, fault-tolerant quantum system. (Image Credit: IBM)
IBM revealed Tuesday its roadmap for bringing a large-scale, fault-tolerant quantum computer, IBM Quantum Starling, online by 2029, which is significantly earlier than many technologists thought possible.
The company predicts that when its new Starling computer is up and running, it will be capable of performing 20,000 times more operations than today’s quantum computers — a computational state so vast it would require the memory of more than a quindecillion (10⁴⁸) of the world’s most powerful supercomputers to represent.
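To put that number in perspective, here is a back-of-the-envelope sketch. It assumes a dense state-vector representation (2^n complex amplitudes for n qubits, 16 bytes each) and roughly 10 petabytes of memory per leading supercomputer; none of these parameters come from IBM’s announcement.

```python
# Rough arithmetic: classical memory needed to hold an n-qubit quantum state.
# Assumptions (illustrative, not from IBM): a dense state vector with 2**n
# complex amplitudes at 16 bytes each, and ~10 PB of memory per supercomputer.

def supercomputers_needed(n_qubits: int, bytes_per_machine: float = 1e16) -> float:
    """How many ~10 PB machines it would take to store an n-qubit state vector."""
    amplitudes = 2 ** n_qubits      # state-vector size doubles with every qubit
    state_bytes = amplitudes * 16   # complex128: 8 bytes real + 8 bytes imaginary
    return state_bytes / bytes_per_machine

for n in (50, 100, 200):
    print(f"{n} qubits -> ~{supercomputers_needed(n):.1e} machines")
# 200 qubits already lands around 10^45 machines, the same "more memory than any
# conceivable fleet of supercomputers" territory the article describes.
```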
“IBM is charting the next frontier in quantum computing,” Big Blue CEO Arvind Krishna said in a statement. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.”
IBM’s plan to deliver a fault-tolerant quantum system by 2029 is ambitious but not implausible, especially given the rapid pace of its quantum roadmap and past milestones, observed Ensar Seker, CISO at SOCRadar, a threat intelligence company in Newark, Del.
“They’ve consistently met or exceeded their qubit scaling goals, and their emphasis on modularity and error correction indicates they’re tackling the right challenges,” he told TechNewsWorld. “However, moving from thousands to millions of physical qubits with sufficient fidelity remains a steep climb.”
A qubit is the fundamental unit of information in quantum computing, capable of representing a zero, a one, or both simultaneously due to quantum superposition. In practice, fault-tolerant quantum computers use clusters of physical qubits working together to form a logical qubit — a more stable unit designed to store quantum information and correct errors in real time.
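The classical ancestor of that idea is the repetition code: spread one logical bit across several physical bits and recover it by majority vote. The sketch below is only an analogy; real quantum codes must correct errors without directly reading the encoded state, which is far harder.

```python
import random

def encode(bit: int, copies: int = 3) -> list[int]:
    """Encode one 'logical' bit redundantly across several 'physical' bits."""
    return [bit] * copies

def noisy(bits: list[int], flip_prob: float = 0.1) -> list[int]:
    """Flip each physical bit independently with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def decode(bits: list[int]) -> int:
    """Majority vote survives as long as fewer than half the copies flipped."""
    return int(sum(bits) > len(bits) / 2)

random.seed(0)
trials = 100_000
failures = sum(decode(noisy(encode(1))) != 1 for _ in range(trials))
print(f"logical error rate: {failures / trials:.3f} vs. physical rate 0.100")
# Three copies push the error rate from p = 0.1 to about 3p^2 = 0.03 -- the same
# redundancy-buys-reliability trade, at far higher cost, behind logical qubits.
```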
Realistic Roadmap
Luke Yang, an equity analyst with Morningstar Research Services in Chicago, believes IBM’s roadmap is realistic. “The exact scale and error correction performance might still change between now and 2029, but overall, the goal is reasonable,” he told TechNewsWorld.
“Given its reliability and professionalism, IBM’s bold claim should be taken seriously,” said Enrique Solano, co-CEO and co-founder of Kipu Quantum, a quantum algorithm company with offices in Berlin and Karlsruhe, Germany.
“Of course, it may also fail, especially when considering the unpredictability of hardware complexities involved,” he told TechNewsWorld, “but companies like IBM exist for such challenges, and we should all be positively impressed by its current achievements and promised technological roadmap.”
Tim Hollebeek, vice president of industry standards at DigiCert, a global digital security company, added: “IBM is a leader in this area, and not normally a company that hypes their news. This is a fast-moving industry, and success is certainly possible.”
“IBM is attempting to do something that no one has ever done before and will almost certainly run into challenges,” he told TechNewsWorld, “but at this point, it is largely an engineering scaling exercise, not a research project.”
“IBM has demonstrated consistent progress, has committed $30 billion over five years to quantum computing, and the timeline is within the realm of technical feasibility,” noted John Young, COO of Quantum eMotion, a developer of quantum random number generator technology, in Saint-Laurent, Quebec, Canada.
“That said,” he told TechNewsWorld, “fault-tolerant in a practical, industrial sense is a very high bar.”
Solving the Quantum Error Correction Puzzle
To make a quantum computer fault-tolerant, errors need to be corrected so large workloads can be run without faults. In a quantum computer, errors are reduced by clustering physical qubits to form logical qubits, which have lower error rates than the underlying physical qubits.
“Error correction is a challenge,” Young said. “Logical qubits require thousands of physical qubits to function reliably. That’s a massive scaling issue.”
IBM explained in its announcement that creating increasing numbers of logical qubits capable of executing quantum circuits with as few physical qubits as possible is critical to quantum computing at scale. Until now, a clear path to building such a fault-tolerant system without unrealistic engineering overhead had not been published.
Alternative and previous gold-standard error-correcting codes present fundamental engineering challenges, IBM continued. To scale, they would require an unfeasible number of physical qubits to create enough logical qubits to perform complex operations — necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be implemented beyond small-scale experiments and devices.
In two research papers released with its roadmap, IBM detailed how it will overcome the challenges of building the large-scale, fault-tolerant architecture needed for a quantum computer.
One paper outlines the use of quantum low-density parity check (qLDPC) codes to reduce physical qubit overhead. The other describes methods for decoding errors in real time using conventional computing.
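In broad strokes, codes of this family work by repeatedly measuring parity checks and handing the resulting “syndrome” to a classical decoder that infers which qubits were disturbed. A minimal classical sketch of that loop, using the tiny three-bit repetition code rather than a real qLDPC code:

```python
import numpy as np

# Parity-check matrix for the 3-bit repetition code: each row compares one pair
# of bits. A real qLDPC code uses a much larger -- but still sparse -- matrix.
H = np.array([[1, 1, 0],
              [0, 1, 1]])

def syndrome(word: np.ndarray) -> tuple:
    """Evaluate the parity checks; a nonzero result flags violated constraints."""
    return tuple(H @ word % 2)

# The decoder's job: map each syndrome to the most likely error. Real-time
# decoders do this continuously, fast enough to keep up with the computation.
errors = [np.array(e) for e in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
decode_table = {syndrome(e): e for e in errors}

received = np.array([1, 0, 0])               # codeword (0,0,0) with bit 0 flipped
correction = decode_table[syndrome(received)]
print(received ^ correction)                 # -> [0 0 0]: the error is undone
```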
According to IBM, a practical fault-tolerant quantum architecture must:
Suppress enough errors for useful algorithms to succeed
Prepare and measure logical qubits during computation
Apply universal instructions to logical qubits
Decode measurements from logical qubits in real time and guide subsequent operations
Scale modularly across hundreds or thousands of logical qubits
Be efficient enough to run meaningful algorithms using realistic energy and infrastructure resources
Aside from the technological challenges that quantum computer makers are facing, there may also be some market challenges. “Locating suitable use cases for quantum computers could be the biggest challenge,” Morningstar’s Yang maintained.
“Only certain computing workloads, such as random circuit sampling [RCS], can fully unleash the computing power of quantum computers and show their advantage over the traditional supercomputers we have now,” he said. “However, workloads like RCS are not very commercially useful, and we believe commercial relevance is one of the key factors that determine the total market size for quantum computers.”
Q-Day Approaching Faster Than Expected
For years now, organizations have been told they need to prepare for “Q-Day” — the day a quantum computer will be able to crack all the encryption they use to keep their data secure. This IBM announcement suggests the window for action to protect data may be closing faster than many anticipated.
“This absolutely adds urgency and credibility to the security expert guidance on post-quantum encryption being factored into their planning now,” said Dave Krauthamer, field CTO of QuSecure, maker of quantum-safe security solutions, in San Mateo, Calif.
“IBM’s move to create a large-scale fault-tolerant quantum computer by 2029 is indicative of the timeline collapsing,” he told TechNewsWorld. “A fault-tolerant quantum computer of this magnitude could be well on the path to crack asymmetric ciphers sooner than anyone thinks.”
“Security leaders need to take everything connected to post-quantum encryption as a serious measure and work it into their security plans now — not later,” he said.
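A common first step in that planning is simply inventorying the cryptography in use. Below is a minimal sketch using only Python’s standard library; note that the ssl module reports the negotiated TLS version and cipher suite but not the key-exchange group, so a real PQC audit needs deeper tooling.

```python
import socket
import ssl

def tls_summary(host: str, port: int = 443) -> tuple[str, str]:
    """Report the TLS version and cipher suite a server negotiates with us."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cipher_name, tls_version, _secret_bits = tls.cipher()
            return tls_version, cipher_name

# Example inventory pass over endpoints an organization depends on
# (hostnames are placeholders).
for host in ("www.example.com", "api.example.com"):
    try:
        version, cipher = tls_summary(host)
        print(f"{host}: {version}, {cipher}")
    except OSError as err:
        print(f"{host}: unreachable ({err})")
```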
Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., pointed out that IBM is just the latest in a surge of quantum companies announcing computational breakthroughs expected within a few years.
“It leads to the question of whether the U.S. government’s original PQC [post-quantum cryptography] preparation date of 2030 is still a safe date,” he told TechNewsWorld.
“It’s starting to feel a lot more risky for any company to wait until 2030 to be prepared against quantum attacks. It also flies in the face of the latest cybersecurity EO [executive order] that relaxed PQC preparation rules as compared to Biden’s last EO PQC standard order, which told U.S. agencies to transition to PQC ASAP.”
“Most U.S. companies are doing zero to prepare for Q-Day attacks,” he declared. “The latest executive order seems to tell U.S. agencies — and indirectly, all U.S. businesses — that they have more time to prepare. It’s going to cause even more agencies and businesses to be less prepared during a time when it seems multiple quantum computing companies are making significant progress.”
“It definitely feels that something is going to give soon,” he said, “and if I were a betting man, and I am, I would bet that most U.S. companies are going to be unprepared for Q-Day on the day Q-Day becomes a reality.”
John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.
From Networks to Business Models, AI Is Rewiring Telecom
Artificial intelligence is already rewriting the rules of wireless and telecom — powering predictive maintenance, streamlining network operations, and enabling more innovative services.
As AI scales, the disruption will be faster, deeper, and harder to reverse than any prior shift in the industry.
Compared to the sweeping changes AI is set to unleash, past telecom innovations look incremental.
AI is redefining how networks operate, services are delivered, and data is secured — across every device and digital touchpoint.
AI Is Reshaping Wireless Networks Already
Artificial intelligence is already transforming wireless through smarter private networks, fixed wireless access, and intelligent automation across the stack.
AI detects and resolves network issues before they impact service, improving uptime and customer satisfaction. It’s also opening the door to entirely new revenue streams and business models.
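As a concrete illustration of that early-warning idea, here is a minimal anomaly detector: a rolling z-score over latency samples. It is a toy stand-in; production systems use far richer models and telemetry, and nothing here reflects any specific vendor’s implementation.

```python
from collections import deque
from statistics import mean, stdev

def latency_alerts(samples, window: int = 30, threshold: float = 3.0):
    """Yield (index, value) when a sample deviates sharply from the recent
    baseline -- a toy stand-in for AI-driven predictive maintenance."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) >= 10:  # wait for a minimal baseline
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value
        history.append(value)

# Synthetic example: steady ~20 ms latency, then one link starts degrading.
samples = [20.0 + 0.1 * (i % 3) for i in range(40)] + [21.0, 35.0, 60.0]
for idx, val in latency_alerts(samples):
    print(f"sample {idx}: {val} ms deviates from baseline")
```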
Each wireless generation brought new capabilities. AI, however, marks a more profound shift — networks that think, respond, and evolve in real time.
AI Acceleration Will Outpace Past Tech Shifts
Many may underestimate the speed and magnitude of AI-driven change.
The shift from traditional voice and data systems to AI-driven network intelligence is already underway.
Although predictions abound, the true scope remains unclear.
It’s tempting to assume we understand AI’s trajectory, but history suggests otherwise.
Today, AI is already automating maintenance and optimizing performance without user disruption. The technologies we’ll rely on in the near future may still be on the drawing board.
Few predicted that smartphones would emerge from analog beginnings — a reminder of how quickly foundational technologies can be reimagined.
History shows that disruptive technologies rarely follow predictable paths — and AI is no exception. It’s already upending business models across industries.
Technological shifts bring both new opportunities and complex trade-offs.
AI Disruption Will Move Faster Than Ever
The same cycle of reinvention is happening now — but with AI, it’s moving at unprecedented speed.
Despite all the discussion, many still treat AI as a future concern — yet the shift is already well underway.
As with every major technological leap, there will be gains and losses. The AI transition brings clear trade-offs: efficiency and innovation on one side, job displacement and privacy erosion on the other.
Unlike past tech waves that unfolded over decades, the AI shift will reshape industries in just a few years — and that wave of change shows no sign of slowing.
AI Will Reshape All Sectors and Companies
This shift will unfold faster than most organizations or individuals are prepared to handle.
Today’s industries will likely look very different tomorrow. Entirely new sectors will emerge as legacy models become obsolete — redefining market leadership across industries.
Telecom’s past holds a clear warning: market dominance can vanish quickly when companies ignore disruption.
After the 1984 breakup of the Bell System, the regional Baby Bells eventually moved into long-distance service, while AT&T remained barred from selling local access — undermining its advantage.
As the market shifted and competitors gained ground, AT&T lost its dominance and became vulnerable enough that SBC, a former regional Bell, acquired it and took on its name.
It’s a case study of how incumbents fall when they fail to adapt — precisely the kind of pressure AI is now exerting across industries.
SBC’s acquisition of AT&T flipped the power dynamic — proof that size doesn’t protect against disruption.
The once-crowded telecom field has consolidated into just a few dominant players — each facing new threats from AI-native challengers.
Legacy telecom models are being steadily displaced by faster, more flexible wireless, broadband, and streaming alternatives.
No Industry Is Immune From AI Disruption
AI will accelerate the next wave of industrial evolution — bringing innovations and consequences we’re only beginning to grasp.
New winners will emerge as past leaders struggle to hang on — a shift that will also reshape the investment landscape. Startups leveraging AI will likely redefine leadership in sectors where incumbents have grown complacent.
Nvidia’s rise is part of a broader trend: the next market leaders will emerge wherever AI creates a clear competitive advantage — whether in chips, code, or entirely new markets.
The AI-driven future is arriving faster than most organizations are ready for. Adapting to this accelerating wave of change is no longer optional — it’s essential. Companies that act decisively today will define the winners of tomorrow.
Drones Set To Deliver Benefits for Labor-Intensive Industries: Forrester
By John P. Mello Jr.
June 3, 2025 5:00 AM PT
Aerial drones are rapidly assuming a key role in the physical automation of business operations, according to a new report by Forrester Research.
Aerial drones power airborne physical automation by addressing operational challenges in labor-intensive industries, delivering efficiency, intelligence, and experience, explained the report written by Principal Analyst Charlie Dai with Frederic Giron, Merritt Maxim, Arjun Kalra, and Bill Nagel.
Some sectors, like the public sector, are already reaping benefits, it continued. The report predicted that drones will deliver benefits more broadly within the next two years as technologies and regulations mature.
It noted that drones can help organizations grapple with operational challenges that exacerbate risks and inefficiencies, such as overreliance on outdated, manual processes, fragmented data collection, geographic barriers, and insufficient infrastructure.
Overreliance on outdated manual processes worsens inefficiencies in resource allocation and amplifies safety risks in dangerous work environments, increasing operational costs and liability, the report maintained.
“Drones can do things more safely, at least from the standpoint of human risk, than humans,” said Rob Enderle, president and principal analyst at the Enderle Group, an advisory services firm, in Bend, Ore.
“They can enter dangerous, exposed, very high-risk and even toxic environments without putting their operators at risk,” he told TechNewsWorld. “They can be made very small to go into areas where people can’t physically go. And a single operator can operate several AI-driven drones operating autonomously, keeping staffing levels down.”
Sensor Magic
“The magic of the drone is really in the sensor, while the drone itself is just the vehicle that holds the sensor wherever it needs to be,” explained DaCoda Bartels, senior vice president of operations with FlyGuys, a drone services provider, in Lafayette, La.
“In doing so, it removes all human risk exposure because the pilot is somewhere safe on the ground, sending this sensor, which is, in most cases, more high-resolution than even a human eye,” he told TechNewsWorld. “In essence, it’s a better data collection tool than if you used 100 people. Instead, you deploy one drone around in all these different areas, which is safer, faster, and higher resolution.”
Akash Kadam, a mechanical engineer with Caterpillar, maker of construction and mining equipment, based in Decatur, Ill., explained that drones have evolved into highly functional tools that directly respond to key inefficiencies and threats to labor-intensive industries. “Within the manufacturing and supply chains, drones are central to optimizing resource allocation and reducing the exposure of humans to high-risk duties,” he told TechNewsWorld.
“Drones can be used in factory environments to automatically inspect overhead cranes, rooftops, and tight spaces — spaces previously requiring scaffolding or shutdowns, which carry both safety and cost risks,” he said. “A reduction in downtime, along with no requirement for manual intervention in hazardous areas, is provided through this aerial inspection by drones.”
“In terms of resource usage, drones mounted with thermal cameras and tools for acquiring real-time data can spot bottlenecks, equipment failure, or energy leakage on the production floor,” he continued. “This can facilitate predictive maintenance processes and [optimal] usage of energy, which are an integral part of lean manufacturing principles.”
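A toy version of that inspection-to-maintenance loop might look like the following; the asset names, baseline temperatures, and 15% margin are illustrative assumptions, and real systems fuse imagery, history, and physics models.

```python
# Triage drone thermal readings against per-asset baselines (all values are
# hypothetical; a real pipeline would ingest radiometric imagery, not numbers).
baselines_c = {"crane-01": 45.0, "motor-07": 60.0, "panel-22": 38.0}
readings_c = {"crane-01": 47.2, "motor-07": 81.5, "panel-22": 39.0}

MARGIN = 0.15  # flag anything 15% above its normal operating temperature

for asset, temp in readings_c.items():
    baseline = baselines_c[asset]
    if temp > baseline * (1 + MARGIN):
        print(f"{asset}: {temp:.1f} C vs. baseline {baseline:.1f} C -> schedule maintenance")
    else:
        print(f"{asset}: normal")
```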
Kadam added that drones provide accurate field mapping and multispectral imaging in agriculture, enabling the monitoring of crop health, soil quality, and irrigation distribution. “Besides the reduction in manual scouting, it ensures more effective input management, which leads to more yield while saving resources,” he observed.
Better Data Collection
The Forrester report also noted that drones can address problems with fragmented data collection and outdated monitoring systems.
“Drones use cameras and sensors to get clear, up-to-date info,” said Daniel Kagan, quality manager at Rogers-O’Brien Construction, a general contractor in Dallas. “Some drones even make 3D maps or heat maps,” he told TechNewsWorld. “This helps farmers see where crops need more water, stores check roof damage after a storm, and builders track progress and find delays.”
“The drone collects all this data in one flight, and it’s ready to view in minutes and not days,” he added.
Dean Bezlov, global head of business development at MYX Robotics, a visualization technology company headquartered in Sofia, Bulgaria, added that drones are the most cost and time-efficient way to collect large amounts of visual data. “We are talking about two to three images per second with precision and speed unmatched by human-held cameras,” he told TechNewsWorld.
“As such, drones are an excellent tool for ‘digital twins’ — timestamps of the real world with high accuracy, which is useful in industries with physical assets such as roads, rail, oil and gas, telecom, renewables and agriculture, where the drone provides a far superior way of looking at the assets as a whole,” he said.
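The data volumes behind such digital twins follow from basic photogrammetry. The sketch below applies the standard ground-sample-distance formula; the camera parameters are illustrative, loosely typical of a small mapping drone, not taken from MYX Robotics.

```python
def gsd_cm(altitude_m: float, sensor_mm: float, focal_mm: float, width_px: int) -> float:
    """Ground sample distance: how much ground one image pixel covers.
    Standard formula: GSD = altitude * sensor_width / (focal_length * image_width)."""
    return altitude_m * (sensor_mm / focal_mm) / width_px * 100

# Illustrative parameters: 100 m altitude, 1-inch-class sensor, 5472 px wide.
pixel_cm = gsd_cm(altitude_m=100.0, sensor_mm=13.2, focal_mm=8.8, width_px=5472)
footprint_m = pixel_cm / 100 * 5472          # ground width covered by one image
print(f"GSD ~{pixel_cm:.1f} cm/px, image footprint ~{footprint_m:.0f} m wide")

# At the 2-3 images per second quoted above, a 20-minute flight yields thousands
# of frames to stitch into the twin:
print(f"frames per 20-minute flight at 2.5 fps: {2.5 * 20 * 60:.0f}")
```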
Drone Adoption Faces Regulatory Hurdles
While drones have great potential for many organizations, they will need to overcome some challenges and barriers. For example, Forrester pointed out that insurers deploy drones to evaluate asset risks but face evolving privacy regulations and gaps in data standardization.
Media firms use drones to take cost-effective, cinematic aerial footage but face strict regulations, it added, while urban use cases like drone taxis and cargo transport remain experimental due to certification delays and airspace management complexities.
“Regulatory frameworks, particularly in the U.S., remain complex, bureaucratic, and fragmented,” said Mark N. Vena, president and principal analyst with SmartTech Research in Las Vegas. “The FAA’s rules around drone operations — especially for flying beyond visual line of sight — are evolving but still limit many high-value use cases.”
“Privacy concerns also persist, especially in urban areas and sectors handling sensitive data,” he told TechNewsWorld.
“For almost 20 years, we’ve been able to fly drones from a shipping container in one country, in a whole other country, halfway across the world,” said FlyGuys’ Bartels. “What’s limiting the technology from being adopted on a large scale is regulatory hurdles over everything.”
Enderle added that innovation could also be a hangup for organizations. “This technology is advancing very quickly, making buying something that isn’t instantly obsolete very difficult,” he said. “In addition, there are a lot of drone choices, raising the risk you’ll pick one that isn’t ideal for your use case.”
“We are still at the beginning of this trend,” he noted. “Robotic autonomous drones are starting to come to market, which will reduce dramatically the need for drone pilots. I expect that within 10 years, we’ll have drones doing many, if not most, of the dangerous jobs currently being done by humans, as robotics, in general, will displace much of the labor force.”
John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.
Leave a Comment
Click here to cancel reply.
Please sign in to post or reply to a comment. New users create a free account.
Related Stories
More by John P. Mello Jr.
view all
More in Emerging Tech
#drones #set #deliver #benefits #laborintensiveDrones Set To Deliver Benefits for Labor-Intensive Industries: ForresterDrones Set To Deliver Benefits for Labor-Intensive Industries: Forrester By John P. Mello Jr. June 3, 2025 5:00 AM PT ADVERTISEMENT Quality Leads That Turn Into Deals Full-service marketing programs from TechNewsWorld deliver sales-ready leads. Segment by geography, industry, company size, job title, and more. Get Started Now. Aerial drones are rapidly assuming a key role in the physical automation of business operations, according to a new report by Forrester Research. Aerial drones power airborne physical automation by addressing operational challenges in labor-intensive industries, delivering efficiency, intelligence, and experience, explained the report written by Principal Analyst Charlie Dai with Frederic Giron, Merritt Maxim, Arjun Kalra, and Bill Nagel. Some industries, like the public sector, are already reaping benefits, it continued. The report predicted that drones will deliver benefits within the next two years as technologies and regulations mature. It noted that drones can help organizations grapple with operational challenges that exacerbate risks and inefficiencies, such as overreliance on outdated, manual processes, fragmented data collection, geographic barriers, and insufficient infrastructure. Overreliance on outdated manual processes worsens inefficiencies in resource allocation and amplifies safety risks in dangerous work environments, increasing operational costs and liability, the report maintained. “Drones can do things more safely, at least from the standpoint of human risk, than humans,” said Rob Enderle, president and principal analyst at the Enderle Group, an advisory services firm, in Bend, Ore. “They can enter dangerous, exposed, very high-risk and even toxic environments without putting their operators at risk,” he told TechNewsWorld. “They can be made very small to go into areas where people can’t physically go. And a single operator can operate several AI-driven drones operating autonomously, keeping staffing levels down.” Sensor Magic “The magic of the drone is really in the sensor, while the drone itself is just the vehicle that holds the sensor wherever it needs to be,” explained DaCoda Bartels, senior vice president of operations with FlyGuys, a drone services provider, in Lafayette, La. “In doing so, it removes all human risk exposure because the pilot is somewhere safe on the ground, sending this sensor, which is, in most cases, more high-resolution than even a human eye,” he told TechNewsWorld. “In essence, it’s a better data collection tool than if you used 100 people. Instead, you deploy one drone around in all these different areas, which is safer, faster, and higher resolution.” Akash Kadam, a mechanical engineer with Caterpillar, maker of construction and mining equipment, based in Decatur, Ill., explained that drones have evolved into highly functional tools that directly respond to key inefficiencies and threats to labor-intensive industries. “Within the manufacturing and supply chains, drones are central to optimizing resource allocation and reducing the exposure of humans to high-risk duties,” he told TechNewsWorld. “Drones can be used in factory environments to automatically inspect overhead cranes, rooftops, and tight spaces — spaces previously requiring scaffolding or shutdowns, which carry both safety and cost risks,” he said. 
“A reduction in downtime, along with no requirement for manual intervention in hazardous areas, is provided through this aerial inspection by drones.” “In terms of resource usage, drones mounted with thermal cameras and tools for acquiring real-time data can spot bottlenecks, equipment failure, or energy leakage on the production floor,” he continued. “This can facilitate predictive maintenance processes andusage of energy, which are an integral part of lean manufacturing principles.” Kadam added that drones provide accurate field mapping and multispectral imaging in agriculture, enabling the monitoring of crop health, soil quality, and irrigation distribution. “Besides the reduction in manual scouting, it ensures more effective input management, which leads to more yield while saving resources,” he observed. Better Data Collection The Forrester report also noted that drones can address problems with fragmented data collection and outdated monitoring systems. “Drones use cameras and sensors to get clear, up-to-date info,” said Daniel Kagan, quality manager at Rogers-O’Brien Construction, a general contractor in Dallas. “Some drones even make 3D maps or heat maps,” he told TechNewsWorld. “This helps farmers see where crops need more water, stores check roof damage after a storm, and builders track progress and find delays.” “The drone collects all this data in one flight, and it’s ready to view in minutes and not days,” he added. Dean Bezlov, global head of business development at MYX Robotics, a visualization technology company headquartered in Sofia, Bulgaria, added that drones are the most cost and time-efficient way to collect large amounts of visual data. “We are talking about two to three images per second with precision and speed unmatched by human-held cameras,” he told TechNewsWorld. “As such, drones are an excellent tool for ‘digital twins’ — timestamps of the real world with high accuracy which is useful in industries with physical assets such as roads, rail, oil and gas, telecom, renewables and agriculture, where the drone provides a far superior way of looking at the assets as a whole,” he said. Drone Adoption Faces Regulatory Hurdles While drones have great potential for many organizations, they will need to overcome some challenges and barriers. For example, Forrester pointed out that insurers deploy drones to evaluate asset risks but face evolving privacy regulations and gaps in data standardization. Media firms use drones to take cost-effective, cinematic aerial footage, but face strict regulations, it added, while in urban use cases like drone taxis and cargo transport remain experimental due to certification delays and airspace management complexities. “Regulatory frameworks, particularly in the U.S., remain complex, bureaucratic, and fragmented,” said Mark N. Vena, president and principal analyst with SmartTech Research in Las Vegas. “The FAA’s rules around drone operations — especially for flying beyond visual line of sight— are evolving but still limit many high-value use cases.” “Privacy concerns also persist, especially in urban areas and sectors handling sensitive data,” he told TechNewsWorld. “For almost 20 years, we’ve been able to fly drones from a shipping container in one country, in a whole other country, halfway across the world,” said FlyGuys’ Bartels. “What’s limiting the technology from being adopted on a large scale is regulatory hurdles over everything.” Enderle added that innovation could also be a hangup for organizations. 
“This technology is advancing very quickly, making buying something that isn’t instantly obsolete very difficult,” he said. “In addition, there are a lot of drone choices, raising the risk you’ll pick one that isn’t ideal for your use case.”
“We are still at the beginning of this trend,” he noted. “Robotic autonomous drones are starting to come to market, which will reduce dramatically the need for drone pilots. I expect that within 10 years, we’ll have drones doing many, if not most, of the dangerous jobs currently being done by humans, as robotics, in general, will displace much of the labor force.”
John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.
IT Pros ‘Extremely Worried’ About Shadow AI: Report
By John P. Mello Jr.
June 4, 2025 5:00 AM PT
Shadow AI — the use of AI tools under the radar of IT departments — has information technology directors and executives worried, according to a report released Tuesday.
The report, based on a survey of 200 IT directors and executives at U.S. enterprise organizations of 1,000 employees or more, found nearly half the IT pros (46%) were “extremely worried” about shadow AI, and almost all of them (90%) were concerned about it from a privacy and security viewpoint.
“As our survey found, shadow AI is resulting in palpable, concerning outcomes, with nearly 80% of IT leaders saying it has resulted in negative incidents such as sensitive data leakage to Gen AI tools, false or inaccurate results, and legal risks of using copyrighted information,” said Krishna Subramanian, co-founder of Campbell, Calif.-based Komprise, the unstructured data management company that produced the report.
“Alarmingly, 13% say that shadow AI has caused financial or reputational harm to their organizations,” she told TechNewsWorld.
Subramanian added that shadow AI poses a much greater problem than shadow IT, which primarily focuses on departmental power users purchasing cloud instances or SaaS tools without obtaining IT approval.
“Now we’ve got an unlimited number of employees using tools like ChatGPT or Claude AI to get work done, but not understanding the potential risk they are putting their organizations at by inadvertently submitting company secrets or customer data into the chat prompt,” she explained.
“The data risk is large and growing in still unforeseen ways because of the pace of AI development and adoption and the fact that there is a lot we don’t know about how AI works,” she continued. “It is becoming more humanistic all the time and capable of making decisions independently.”
Shadow AI Introduces Security Blind Spots
Shadow AI is the next step after shadow IT and is a growing risk, noted James McQuiggan, security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.
“Users use AI tools for content, images, or applications and to process sensitive data or company information without proper security checks,” he told TechNewsWorld. “Most organizations will have privacy, compliance, and data protection policies, and shadow AI introduces blind spots in the organization’s data loss prevention.”
“The biggest risk with shadow AI is that the AI application has not passed through a security analysis as approved AI tools may have been,” explained Melissa Ruzzi, director of AI at AppOmni, a SaaS security management software company, in San Mateo, Calif.
“Some AI applications may be training models using your data, may not adhere to relevant regulations that your company is required to follow, and may not even have the data storage security level you deem necessary to keep your data from being exposed,” she told TechNewsWorld. “Those risks are blind spots of potential security vulnerabilities in shadow AI.”
Krishna Vishnubhotla, vice president of product strategy at Zimperium, a mobile security company based in Dallas, noted that shadow AI extends beyond unapproved applications and involves embedded AI components that can process and disseminate sensitive data in unpredictable ways.
“Unlike traditional shadow IT, which may be limited to unauthorized software or hardware, shadow AI can run on employee mobile devices outside the organization’s perimeter and control,” he told TechNewsWorld. “This creates new security and compliance risks that are harder to track and mitigate.”
Vishnubhotla added that the financial impact of shadow AI varies, but unauthorized AI tools can lead to significant regulatory fines, data breaches, and loss of intellectual property. “Depending on the scale of the agency and the sensitivity of the data exposed, the costs could range from millions to potentially billions in damages due to compliance violations, remediation efforts, and reputational harm,” he said.
“Federal agencies handling vast amounts of sensitive or classified information, financial institutions, and health care organizations are particularly vulnerable,” he said. “These sectors collect and analyze vast amounts of high-value data, making AI tools attractive. But without proper vetting, these tools could be easily exploited.”
Shadow AI Everywhere and Easy To Use
Nicole Carignan, SVP for security and AI strategy at Darktrace, a global cybersecurity AI company, predicts an explosion of tools that utilize AI and generative AI within enterprises and on devices used by employees.
“In addition to managing AI tools that are built in-house, security teams will see a surge in the volume of existing tools that have new AI features and capabilities embedded, as well as a rise in shadow AI,” she told TechNewsWorld. “If the surge remains unchecked, this raises serious questions and concerns about data loss prevention, as well as compliance concerns as new regulations start to take effect.”
“That will drive an increasing need for AI asset discovery — the ability for companies to identify and track the use of AI systems throughout the enterprise,” she said. “It is imperative that CIOs and CISOs dig deep into new AI security solutions, asking comprehensive questions about data access and visibility.”
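The asset discovery Carignan describes does not have to start with exotic tooling. As a purely illustrative sketch, assuming a CSV export of web-proxy logs with "user" and "domain" columns and a hand-maintained list of Gen AI domains (both assumptions of mine, not anything from the report), a few lines of Python can produce a first inventory of who is reaching which AI services:
    # Illustrative only: inventory outbound requests to known Gen AI services.
    # The log format and domain list are assumptions, not a vendor standard.
    import csv
    from collections import Counter

    GENAI_DOMAINS = {
        "chat.openai.com", "chatgpt.com", "claude.ai",
        "gemini.google.com", "perplexity.ai",
    }

    def scan_proxy_log(path):
        """Count requests per user to known Gen AI domains."""
        hits = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):  # expects "user" and "domain" columns
                if row["domain"].lower().strip() in GENAI_DOMAINS:
                    hits[row["user"]] += 1
        return hits

    for user, count in scan_proxy_log("proxy_log.csv").most_common():
        print(f"{user}: {count} requests to Gen AI services")
A real deployment would feed the same idea from DNS or CASB telemetry and keep the domain list current, but even a crude pass like this surfaces the browser-based usage the experts describe.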
Shadow AI has become so rampant because it is everywhere and easy to access through free tools, maintained Komprise’s Subramanian. “All you need is a web browser,” she said. “Enterprise users can inadvertently share company code snippets or corporate data when using these Gen AI tools, which could create data leakage.”
“These tools are growing and changing exponentially,” she continued. “It’s really hard to keep up. As the IT leader, how do you track this and determine the risk? Managers might be looking the other way because their teams are getting more done. You may need fewer contractors and full-time employees. But I think the risk of the tools is not well understood.”
“The low, or in some cases non-existent, learning curve associated with using Gen AI services has led to rapid adoption, regardless of prior experience with these services,” added Satyam Sinha, CEO and co-founder of Acuvity, a provider of runtime Gen AI security and governance solutions, in Sunnyvale, Calif.
“Whereas shadow IT focused on addressing a specific challenge for particular employees or departments, shadow AI addresses multiple challenges for multiple employees and departments. Hence, the greater appeal,” he said. “The abundance and rapid development of Gen AI services also means employees can find the right solution. Of course, all these traits have direct security implications.”
Banning AI Tools Backfires
To support innovation while minimizing the threat of shadow AI, enterprises must take a three-pronged approach, asserted Kris Bondi, CEO and co-founder of Mimoto, a threat detection and response company in San Francisco. They must educate employees on the dangers of unsupported, unmonitored AI tools, create company protocols for what is not acceptable use of unauthorized AI tools, and, most importantly, provide AI tools that are sanctioned.
“Explaining why one tool is sanctioned and another isn’t greatly increases compliance,” she told TechNewsWorld. “It does not work for a company to have a zero-use mandate. In fact, this results in an increase in stealth use of shadow AI.”
In the very near future, more and more applications will be leveraging AI in different forms, so the reality of shadow AI will be present more than ever, added AppOmni’s Ruzzi. “The best strategy here is employee training and AI usage monitoring,” she said.
“It will become crucial to have in place a powerful SaaS security tool that can go beyond detecting direct AI usage of chatbots to detect AI usage connected to other applications,” she continued, “allowing for early discovery, proper risk assessment, and containment to minimize possible negative consequences.”
“Shadow AI is just the beginning,” KnowBe4’s McQuiggan added. “As more teams use AI, the risks grow.”
He recommended that companies start small, identify what’s being used, and build from there. They should also get legal, HR, and compliance involved.
“Make AI governance part of your broader security program,” he said. “The sooner you start, the better you can manage what comes next.”
John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.
Security Is Not Privacy, Part 1: The Mobile Target
In technical fields like information technology, definitions are fundamental. They are the building blocks for constructing useful applications and systems. Yet, despite this, it’s easy to assume a term’s definition and wield it confidently before discovering its true meaning. The two closely related cases that stand out to me are “security” and “privacy.”
I say this with full awareness that, in my many writings on information security, I never adequately distinguished these two concepts. It was only after observing enough conflation of these terms that I resolved to examine my own casual treatment of them.
So, with the aim of solidifying my own understanding, let’s properly differentiate “information security” and “information privacy.”
Security vs. Privacy: Definitions That Matter
In the context of information technology, what exactly are security and privacy?
Security is the property of preventing unauthorized parties from accessing or altering your data.
Privacy is the property of preventing observation of your activities by any third party to whom you have not expressly consented.
As you can see, these principles are related, which is one reason they’re commonly interchanged. The distinction becomes clearer with examples.
Let’s start with an instance where security applies, but privacy does not.
Spotify uses digital rights management (DRM) software to keep its media secure but not private. DRM is a whole topic of its own, but it essentially uses cryptography to enforce copyright. In Spotify’s case, it’s what constitutes streaming rather than just downloading: the song’s file is present on your device (at least temporarily) just as if you’d downloaded it, but Spotify’s DRM cryptography prevents you from opening the file without the Spotify application.
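The mechanism can be sketched in a few lines. This is emphatically not Spotify's implementation, just a toy illustration in Python, using the Fernet cipher from the third-party "cryptography" package, of how a cached file can be useless to everything except the app holding the key:
    # Toy DRM-style gating: cached content is ciphertext, and only the
    # licensed client, which holds the key, can turn it back into media.
    from cryptography.fernet import Fernet, InvalidToken

    app_key = Fernet.generate_key()      # embedded in the licensed app
    licensed_client = Fernet(app_key)

    # "Streaming" caches the track locally, but only in encrypted form.
    cached_track = licensed_client.encrypt(b"raw audio bytes")

    print(licensed_client.decrypt(cached_track))   # the app can play it

    # Any other program, lacking the key, gets an error instead of audio.
    try:
        Fernet(Fernet.generate_key()).decrypt(cached_track)
    except InvalidToken:
        print("unreadable without the app's key")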
The data on Spotify (audio files) are secure because only users of the application can stream audio, and streamed content can’t be retained, opened, or transmitted to non-users. However, Spotify’s data is not private because nearly anyone with an email address can be a user. Thus, in practice, the company cannot control who exactly can access its data.
A more complex example of security without privacy is social media.
When you sign up for a social media platform, you accept an end-user license agreement (EULA) authorizing the platform to share your data with its partners and affiliates. Your data stored with “authorized parties” on servers controlled by the platform and its affiliates would be considered secure, provided all these entities successfully defend your data against theft by unauthorized parties.
In other words, if everyone who is allowed (by agreement) to have your data encrypts it in transit and at rest, insulates and segments their networks, etc., then your data is secure no matter how many affiliates receive it. In practice, the more parties that have your data, the more likely it is that any one of them is breached, but in theory, they could all defend your data.
On the other hand, any data you fork over to the social network is not private because you can’t control who uses your data and how. As soon as your data lands on the platform’s servers, you can’t restrict what they do with it, including sharing your data with other entities, which you also can’t control.
Both examples illustrate security without privacy. That’s because privacy entails security, but not the reverse. All squares are rectangles, but not all rectangles are squares. If you have privacy, meaning you can completely enforce how any party uses your data (or doesn’t), it is secure by definition because only authorized parties can access your data.
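One way to pin down that entailment (my own notation, not a standard one): write A(p) for "party p can access the data," Z(p) for "p is authorized," and C(p) for "you consented to p." Then:
    \text{Security:}\quad \forall p,\; A(p) \Rightarrow Z(p)
    \text{Privacy:}\quad \forall p,\; A(p) \Rightarrow C(p), \qquad \text{with } C(p) \Rightarrow Z(p)
Since consent implies authorization, privacy gives you security for free; the converse fails because a platform can authorize affiliates you never consented to.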
Mobile Devices: Secure but Not Private
Casually mixing security and privacy can lead people to misunderstand the information security properties that apply to their data in any given scenario. By reevaluating for ourselves whether a given technology affords us security and privacy, we can have a more accurate understanding of how accessible our data really is.
One significant misconception I’ve noticed concerns mobile devices. I get the impression that the digital privacy content sphere regards mobile devices as not secure because they aren’t private. But while mobile is designed not to be private, it is specifically designed to be secure.
Why is that?
Because the value of data is in keeping it in your hands and out of your competitor’s. If you collect data but anyone else can grab your copy, you are not only at no advantage but also at a disadvantage since you’re the only party that spent time and money to collect it from the source.
With modest scrutiny, we’ll find that every element of a mobile OS that might be marketed as a privacy feature is, in fact, strictly a security feature.
Cybersecurity professionals have hailed application permissions as a major stride in privacy. But whom are they designed to help? These menus apply to applications that request access to certain hardware, from microphones and cameras to flash memory storage and wireless radios. This access restriction feature serves the OS developer by letting users lock out as much of their competition as possible from taking their data. The mobile OS developer controls the OS with un-auditable compiled code. For all you know, permission controls on all the OS’s native apps could be ignored.
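Abstracted away from any particular mobile OS, the enforcement pattern looks something like the following Python toy (all names hypothetical): hardware access runs through a broker that consults a grant table, and nothing in the model constrains the broker itself.
    # Toy model of runtime permission gating; names are illustrative only.
    class PermissionDenied(Exception):
        pass

    GRANTS = {("camera_app", "camera")}   # what the user has approved

    def require(app, permission):
        """The OS-side check; apps cannot bypass or inspect it."""
        if (app, permission) not in GRANTS:
            raise PermissionDenied(f"{app} lacks '{permission}'")

    def read_microphone(app):
        require(app, "microphone")        # enforced by the broker, not the app
        return b"\x00" * 16               # stand-in for an audio frame

    try:
        read_microphone("weather_app")    # third-party app is locked out
    except PermissionDenied as err:
        print(err)
The broker sees every request and can grant its own processes whatever it likes, which is precisely the point about who the feature serves.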
However, even if we assume that the OS developer doesn’t thwart your restrictions on their own apps, the first-party apps still enjoy pride of place. There are more of them; they are preinstalled on your device, facilitate core mobile device features, require more permissions, and often lose core functions when those permissions are denied.
Mobile OSes also sandbox every application, forcing each to run in an isolated software environment, oblivious to other applications and the underlying operating system. This, too, benefits the OS vendor. Like the app permission settings, this functionality makes it harder for third parties to grab the same data the OS effortlessly ingests. The OS relies on its own background processes to obtain the most valuable data and walls off every other app from those processes.
Mobile Security Isn’t Designed With You in Mind
The most powerful mobile security control is the denial of root privileges to all applications and users (besides, again, the OS itself). While it goes a long way toward keeping the user’s data safe, it is just as effective at subjecting everything and everyone using the device to the dictates of the OS. The security advantage is undeniable: if your user account can’t use root, then any malware that compromises it can’t either.
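Mobile platforms enforce this far more aggressively, but the basic effect is visible on any Unix-like desktop. A quick Python illustration (run as a normal user):
    # An unprivileged account (and any malware running as it) is refused
    # writes to system-owned files; only root, in effect the OS, is not.
    import os

    print("effective uid:", os.geteuid())     # nonzero for a normal user

    try:
        with open("/etc/hosts", "a") as f:    # root-owned on most systems
            f.write("# tampering attempt\n")
    except PermissionError as err:
        print("blocked:", err)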
By the same token, because you don’t have complete control over the OS, you are unable to reconfigure your device for privacy from the OS vendor.
I’m not disparaging any of these security controls. All of them reinforce the protection of your data. I’m saying that they are not done primarily for the user’s benefit; that is secondary.
Those of you familiar with my work might see the scroll bar near the bottom of this page and wonder why I haven’t mentioned Linux yet. The answer is that desktop operating systems, my preferred kind of Linux OS, benefit from their own examination. In a follow-up to this piece, I will discuss the paradox of desktop security and privacy.
Please stay tuned.
DexCare AI Platform Tackles Health Care Access, Cost Crisis
Care management platform DexCare is applying artificial intelligence (AI) in an innovative way to fix health care access issues. Its AI-driven platform helps health systems overcome rising costs, limited capacity, and fragmented digital infrastructure.
As Americans face worsening health outcomes and soaring costs, DexCare Co-founder Derek Streat sees opportunity in the crisis and is leading a push to apply AI and machine learning (ML) to health care’s toughest operational challenges — from overcrowded emergency rooms to disconnected digital systems.
No stranger to using AI to solve health care issues, Streat is guiding DexCare as it leverages AI and ML to confront the industry’s most persistent pain points: spiraling costs, resource constraints, and the impossible task of doing more with less. Its platform helps liberate data silos to orchestrate care better and deliver a “shoppable” experience.
The combination unlocks patient access to care and optimizes health care resources. DexCare enables health systems to see 40% more patients with existing clinical resources.
Streat readily admits that some advanced companies use AI to enhance clinical and medical research. However, advanced AI tools such as conversational generative AI are less common in the health care access space. DexCare addresses that service gap.
“Access is broken, and our fundamental belief is that there haven’t been enough solutions to balance patient, provider, and health system needs and objectives,” he told TechNewsWorld.
Improving Patient Access With Predictive AI
Achieving that balance depends on the underlying information drawn from health care providers’ neural networks, ML models, classification systems, and advancements in generative AI. These elements build on one another.
Derek Streat, Co-founder of DexCare
With the goal of a better customer experience (CX), DexCare’s platform helps care providers optimize the algorithm so everyone benefits. The focus is on ensuring patients get what matches their intent and motivations while respecting the providers’ capacity and needs, explained Streat.
He describes the platform’s technology as a foundational pyramid based on data that AI optimizes and manages. Those components ensure high-fidelity outcome predictions for recommended care options.
“It could be a doctor in a clinic or a nurse in a virtual care system,” he suggested. “I’m not talking about clinical outcomes. I’m talking about what you’re looking for.”
Ultimately, that managed balance keeps providers from burning out and makes the service a sustainable business line for the health system.
From Providence Prototype to Scalable Solution
Streat defined DexCare as an access optimization company. He shared that the platform originated from a ground-floor build within the Providence Health System.
After four years of development and validation, he launched the technology for broader use across the health care industry.
“It’s well tested and very effective in what it does. That allowed us to have something scalable across organizations as well. Our expansion makes health care more discoverable to consumers and patients and more sustainable for medical providers and the health systems we serve,” he said.
Digital Marquee for Consumers, Service Management for Providers
DexCare’s AI works on multiple levels. For a health system or medical facility, it acts as a contact center that attracts and curates audiences, consumers, and patients. Its digital assets can be websites, landing pages, or screening kiosks.
Another part of the platform intelligently navigates patients to the safest and best care option. This process engages the accumulated data and automatically allocates the health system’s resources.
“It manages schedules and available staff and facilities and automatically allocates them when and where they can be most productively employed,” explained Streat.
The platform excels at load balancing, using AI to rationalize all those components. Its decision engine ensures that the selected resources match the needed services so treatment can be delivered as efficiently and effectively as possible for both the patient and the organization.
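DexCare has not published how its decision engine actually works, but the load-balancing idea described here can be sketched as a toy heuristic: filter to care options with open capacity, then pick the one that best fits the patient’s intent, breaking ties toward spare capacity. Every name, field, and weight below is illustrative, not DexCare’s logic.

```python
from dataclasses import dataclass

@dataclass
class CareOption:
    name: str           # e.g., "virtual nurse visit"
    capacity_left: int  # open slots remaining
    fit: float          # 0..1, how well the option matches the patient's intent

def route(options: list[CareOption]) -> CareOption:
    """Pick the open option with the best fit, breaking ties toward spare capacity."""
    available = [o for o in options if o.capacity_left > 0]
    if not available:
        raise RuntimeError("No capacity anywhere; escalate to manual scheduling.")
    return max(available, key=lambda o: (o.fit, o.capacity_left))

options = [
    CareOption("in-clinic physician", capacity_left=1, fit=0.9),
    CareOption("virtual nurse visit", capacity_left=12, fit=0.9),
]
print(route(options).name)  # "virtual nurse visit" -- equal fit, more spare capacity
```

A production system would weigh far more signals (urgency, travel time, payer rules), but the shape of the decision — score candidates, respect capacity, route — is the same.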
How DexCare Integrates With CRM Platforms
According to Streat, DexCare is not customer relationship management (CRM) software. Instead, the platform ties its AI tools and data services into other platforms, such as Salesforce and Oracle.
“We make it as flexible as we can. It is pretty scalable to the point where now we can touch about 20% of the U.S. population through our health system partners,” he offered.
Patients do not realize they are interacting with the DexCare-powered experience console under brands such as Kaiser, Providence, and SSM Health, some of the health systems that use the DexCare platform. The platform is flexible and adapts to the needs of various health agencies.
For instance, fulfillment technologies book appointments and supply synchronous virtual solutions.
“Whatever the modality or setting is, we can either connect with whatever you’re using as a health system, or you can use your own underlying pieces as well,” said Streat.
He noted that the intelligent data acquisition built into the DexCare platform accesses the electronic medical record (EMR), which includes patients’ demographics, medical history, diagnoses, medications, allergies, immunization records, lab results, and treatment plans.
“The application programming interface [API] gives us real-time availability, allows us to predict a certain provider’s capacity, and maintains EMR as a source of truth,” said Streat.
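Streat does not specify the interface, so purely as a hypothetical sketch, an availability lookup against an EMR-backed scheduling API might look like the following. The endpoint, parameters, and response shape are invented for illustration; a real integration would follow the vendor’s documented (often FHIR-based) API and authentication.

```python
import requests

# Hypothetical endpoint and response shape -- invented for illustration only.
BASE_URL = "https://emr.example-health.org/api/v1"

def provider_open_slots(provider_id: str, date: str) -> list[dict]:
    """Fetch open appointment slots for one provider on one day."""
    resp = requests.get(
        f"{BASE_URL}/providers/{provider_id}/slots",
        params={"date": date, "status": "open"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["slots"]  # assumed field name

slots = provider_open_slots("dr-123", "2025-06-12")
print(f"{len(slots)} open slots")
```

Keeping the EMR as the source of truth, as Streat describes, means the platform queries availability rather than maintaining its own copy of the schedule.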
AI’s Long-Term Role in Health Care Access
Health care management by conversational generative AI provides insights into where organizations struggle, need to adjust their operations, or reassign staff to manage patient flow. That all takes place on the platform’s back end.
According to Streat, the front-end value proposition is pretty simple. It helps get 20% to 30% more patients into the health system. Organizations generate nine times the initial visit value in downstream revenue for additional services, Streat said.
He said the other part of the value proposition is a lower marginal cost of delivering each visit. That results from matching resources with patients in a way that balances the load across the organization’s network.
“That depends on the specific use case, but we find up to a 40% additional capacity within the health system without hiring additional resources,” he said.
How? That is where the underlying AI data comes into play. It helps practitioners make more informed decisions about which patients should be matched with which providers.
“Not everybody needs to see an expensive doctor in a clinic,” Streat contended. “Sometimes, a nurse in a virtual visit or educational information will be just fine.”
Financial metrics aside, patients simply want medical treatment so they can move on, which, he surmised, is really what the game is here.
Why Generative AI Lags in Health Care
Streat noted the rapidly developing sophistication of generative AI, which includes conversational interfaces, analytical capability, and predictive mastery. He lamented that while these technologies are being applied throughout other industries and businesses, they are not yet widely adopted in health care systems.
He indicated that part of the lag stems from health care access needs being different and less suited to conversational AI solutions hastily layered onto legacy systems. Ultimately, changing health care requires delivering things at scale.
“Within a health system, its infrastructure, and the plumbing required to respect the systems of records, it’s just a different world,” he said.
Streat sees AI making it possible for us to move away from searching through a long list of doctors online to booking through a robot operator with a pleasant accent.
“We will focus on the back-end intelligence and continue to apply it to these lower-friction ways for people to interact with the health system. That’s incredibly exciting to me,” he concluded.
AMD at Computex 2025: Making the Case for an AI Powerhouse
With sweeping product announcements across GPUs, CPUs, and AI PCs, AMD is signaling that its transformation from a high-performance computing stalwart to a full-spectrum AI leader is well underway.
Cell Phone Satisfaction Tumbles to 10-Year Low in Latest ACSI Survey
By John P. Mello Jr.
May 21, 2025 5:00 AM PT
What a difference a year makes. Twelve months ago, cell phone satisfaction was riding high in the American Customer Satisfaction Index (ACSI), which surveys U.S. consumers. This year, it has hit a 10-year low.
The ACSI, a national economic indicator for over 25 years, reported Tuesday that after reaching an all-time high in 2024, cell phone satisfaction fell to its lowest point in a decade, scoring 78 on a scale of 100.
“Brands keep racing to add new capabilities, yet customers still judge smartphones by the fundamentals,” Forrest Morgeson, an associate professor of marketing at Michigan State University and Director of Research Emeritus at the ACSI, said in a statement.
“Only when companies strengthen the essentials — battery life, call reliability, and ease of use — does innovation truly deliver lasting satisfaction,” he continued.
“I totally agree,” added Tim Bajarin, president of Creative Strategies, a technology advisory firm in San Jose, Calif.
“Battery life is the number one issue we see in our smartphone surveys,” he told TechNewsWorld. “And call reliability is always a concern because dropped calls or disconnects during social media sessions are frustrating.”
People are still getting excited about new features, but they also want longer battery life and phones that are easier to use than before, countered Bryan Cohen, CEO of Opn Communication, a telecommunications agency based in Sheridan, Wyo.
“Take my father. He’s 72 years old, and he wanted an iPhone 16,” he told TechNewsWorld. “I finally went out and got it for him. He got really excited about AI, but then he gets frustrated with it because it’s not easy to use, and he gets mad at the phone.”
Phone Makers Take a Hit
Dissatisfaction with cell phones affected manufacturers’ ratings, too, according to the ACSI study, which was based on 27,494 completed surveys. Both Apple’s and Samsung’s ratings slipped a point to 81, although Samsung had a slight edge over Apple in the 5G phone category. Both, however, had significant leads in satisfaction compared to their nearest rivals, Google and Motorola, which slid three points to 75.
The ACSI researchers also found a widening gap in satisfaction between owners of 5G and non-5G phones. Satisfaction with 5G phones fell two points but still posted a respectable score of 80. Meanwhile, satisfaction with phones using legacy technology plummeted seven points to 68.
“It’s very important to understand that the mobile networks in the U.S. use different spectrum bands,” explained John Strand of Denmark-based Strand Consulting, a consulting firm with a focus on global telecom.
“If you have an old phone, it may not run so well on all spectrum bands,” he told TechNewsWorld. “It certainly won’t work as well as a new phone with a newer chipset.”
The dissatisfaction can also be due to a technology misunderstanding, added Opn Comm’s Cohen. “People will have a phone for four or five years and not understand their phone might not have been built for 5G,” he explained.
“People expect their LTE phones to automatically go to the next generation,” he continued. “That’s not necessarily the case. Their phone might not be 5G compatible, just like some phones still are not eSIM compatible.”
ISPs See Modest Satisfaction Improvements
On the plus side, the study found that satisfaction with ISPs, including fiber and non-fiber services, ticked up a point to 72. Satisfaction with fiber declined by one point, to 75, the study noted, while non-fiber jumped three points, to 70.
The improved satisfaction rating can be attributed to new investments by the carriers, said Creative Strategies’ Bajarin. “They are gaining new technologies that boost their signal, including some redundancy technologies to make their lines more stable,” he explained.
The study noted that AT&T Fiber is leading the fiber segment in satisfaction, scoring a 78 on the index despite a three-point drop. Hot on the heels of AT&T are Google Fiber and Verizon FiOS, at 76, and Xfinity Fiber, at 75.
A big gainer in the fiber segment was Optimum, which jumped eight points to 71. The ACSI researchers explained that Optimum’s satisfaction burst was driven primarily by its efforts to add value by strengthening the quality of its customer service.
The remaining group of smaller ISPs didn’t fare as well. They dropped nine points to 70. The study noted that “all elements of the fiber customer experience have worsened over the past year, with notable decreases in measures relating to the quality of internet service.”
In the non-fiber segment, T-Mobile gained three points to tie leader AT&T at 78. According to the study, T-Mobile has been successful in improving the consistency of its non-fiber service while adding value through improved customer service and plan options. Not far behind the leaders is Verizon, which saw its satisfaction score jump four points to 77.
Kinetic by Windstream was a big gainer in the non-fiber segment. It surged 11 points to 62. “By making significant improvements in practical service metrics, Windstream drives customer perceptions of the value of its Kinetic service higher,” the study explained.
Wireless Service Satisfaction Slips
Declining satisfaction afflicted the wireless phone service industry, according to the ACSI. Overall, the industry dropped a point to 75. Its segments also saw satisfaction declines: value mobile virtual network operators (MVNOs) slid three points to 78; mobile network operators (MNOs) fell one point to 75; and full-service MVNOs slipped three points to 74.
Individual MNO players in the market experienced similar declines, with T-Mobile dropping one point to 76, AT&T falling five points to 74, and UScellular losing three points to 72. Verizon was the only gainer in the top four, with a one-point increase to 75.
The ACSI researchers explained that in addition to measuring satisfaction with operators, the study measures satisfaction with call quality and network capability. Over the last year, AT&T suffered the largest decrease in both, dropping six points to 77 for call quality and eight points to 76 for network capability.
A new feature of this year’s telecommunication and cell phone report is the addition of smartwatches. The study found that Samsung, with a score of 83, edged Apple Watch, which scored 80 in satisfaction. Fitbit finished third with a score of 72.
John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.
Democratic AI Revolution: Power to the People and Code to the Masses
In my town of Bend, Oregon, where the spirit of independence and community thrives, the concept of “Democratic AI” resonates in a uniquely powerful way. In a world increasingly shaped by algorithms and artificial intelligence, the notion of democratizing its creation, access, and governance offers a powerful counterpoint to the centralized control often associated with Big Tech.
But what exactly is Democratic AI, and why might it be the best, perhaps even the only truly sustainable path forward for this transformative technology? Buckle up, because we’re about to dive into the digital town hall of the future.
Then, we’ll close with my Product of the Week: Slate’s new pickup truck, backed by Jeff Bezos, which could transform the EV market.
What Democratic AI Really Means
At its core, Democratic AI is a philosophy and set of practices aimed at distributing the power of AI more broadly. It encompasses several key principles:
Openness and Transparency: The underlying code, data, and models are often open-source or readily accessible for scrutiny and modification. Think of it as the difference between a proprietary black box and a transparent, well-documented library.
Decentralization of Development: Instead of being solely the domain of large corporations with vast resources, Democratic AI encourages contributions from a diverse range of individuals, researchers, smaller organizations, and even governments. It’s the digital equivalent of a community barn-raising.
Participatory Governance: The ethical guidelines, development priorities, and deployment strategies are shaped through broader stakeholder involvement rather than top-down mandates. Imagine citizens having a say in how AI is used in their communities.
Accessibility and Affordability: The tools and resources needed to develop and utilize AI are made as widely available and affordable as possible, breaking down barriers to entry. It’s about leveling the playing field so that innovation isn’t limited by deep pockets.
Data Sovereignty and Privacy: Individuals and communities retain greater control over their data, and privacy is prioritized when developing and deploying AI systems. It’s about ensuring AI serves people, not the other way around.
Democratic AI May Be the Best Path
So, why is this open and collaborative approach potentially superior to more traditional, often proprietary, AI models? Here are several advantages it brings to the table:
Faster and More Diverse Innovation: When you open the floodgates to contributions from a global community, the pace of innovation explodes. Diverse perspectives and skill sets lead to more creative solutions and the exploration of a wider range of ideas, outpacing what any single organization could accomplish. It’s like having a thousand brilliant minds tackling a problem instead of just a handful.
Increased Trust and Accountability: Transparency in code and data allows for greater scrutiny, making it easier to identify and address biases, errors, and potential security vulnerabilities. When the workings of an AI are open for all to see, there’s a greater sense of trust and accountability. It’s harder to hide digital shenanigans in broad daylight.
Reduced Vendor Lock-In and Monopoly Risk: By promoting open standards and interoperability, Democratic AI reduces reliance on proprietary platforms, fostering a more competitive landscape and mitigating the risks associated with the dominance of a few powerful AI providers. It’s about avoiding digital monopolies where a few companies control core AI resources.
Alignment with Public Good: With broader participation in governance and ethical considerations, Democratic AI is more likely to be aligned with the public good and societal values rather than solely driven by corporate profits or narrow interests. It’s about building AI that serves humanity, not just shareholders.
Empowerment and Skill Development: Democratizing AI empowers individuals and smaller organizations to become creators and innovators, fostering a broader understanding of the technology and driving the development of local expertise. It’s about turning passive consumers into active participants in the AI revolution.
Governments and Companies Advancing Democratic AI
While the concept is still evolving, several governments and companies are dipping their toes, or even diving headfirst, into the waters of Democratic AI:
The European Union: With its emphasis on digital sovereignty and open-source initiatives, the EU actively promotes a more democratic and human-centric approach to AI development and regulation.
Various Open-Source AI Initiatives: Projects like Hugging Face, with its open platform for models and datasets, and initiatives around open data for AI training, embody the spirit of Democratic AI; the short sketch after this list shows just how low the barrier to entry has become.
Decentralized AI Platforms: Emerging projects are exploring blockchain and other decentralized technologies to create more open and community-governed AI infrastructure.
Government-Backed Open AI Research: Some governments are supporting open research efforts to promote collaboration and transparency in AI development. For example, Canada funds its CIFAR AI Chairs program, and the U.K. advances similar goals through the Alan Turing Institute.
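To make the accessibility point concrete, here is a minimal sketch of pulling an openly licensed model from the Hugging Face Hub with the transformers library. The model choice and prompt are arbitrary; the point is that a few lines of code and an ordinary laptop are enough to start experimenting with open AI.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# Any openly licensed model on the Hub can be pulled by name;
# "distilgpt2" is a small, permissively licensed example.
generator = pipeline("text-generation", model="distilgpt2")

result = generator("Democratic AI means", max_new_tokens=25)
print(result[0]["generated_text"])
```

That the same few lines work for thousands of community-contributed models is precisely the openness and interoperability the movement is after.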
Benefits: A Brighter Algorithmic Future
Aggressively pursuing Democratic AI could deliver transformative results:
More Ethical and Fair AI Systems: Open scrutiny and diverse participation can help mitigate biases embedded in data and algorithms, leading to fairer and more equitable AI outcomes.
AI Tailored to Diverse Needs: A decentralized and collaborative approach can foster the development of AI solutions that address the specific needs and contexts of diverse communities and cultures.
Greater Public Trust in AI: Transparency and participatory governance can build greater public trust in AI systems, fostering wider adoption and acceptance.
Accelerated Solutions to Global Challenges: By harnessing the collective intelligence of a global community, Democratic AI can accelerate the development of solutions to pressing global challenges, from climate change to health care.
Where Democratic AI Stands Today
The concept of Democratic AI is still in its relatively early stages. While the principles of open source have a long history in software development, applying them comprehensively to the complex world of AI — including data, models, and governance — is a more recent endeavor.
We are likely in the “seed” or early “sapling” phase of this movement. While there are promising initiatives and growing awareness, widespread adoption and the establishment of robust Democratic AI ecosystems will take time, research, and concerted effort from individuals, organizations, and governments.
Wrapping Up: A People-Led AI Future
Democratic AI offers a compelling vision for the future of artificial intelligence, one where power is distributed, innovation is accelerated through collaboration, and the technology serves humanity’s broader interests. While the path to realizing this vision is still unfolding, the principles of openness, transparency, and participatory governance hold immense promise.
As we navigate the transformative power of AI, embracing a democratic approach might not just be the best way forward; it might be the only way to ensure that this powerful technology truly benefits all of us, here in Bend and across the interconnected world. The seeds of a people’s AI are being planted, and it’s up to us to cultivate a future where everyone shares its fruits.
Slate Electric Pickup – Bezos-Backed Bargain With a Twist
The buzz around the Slate electric pickup truck has been palpable, and after its recent unveiling, it’s clear why.
This no-nonsense EV, with ties to Amazon founder Jeff Bezos through his investment in the parent company Re:Build Manufacturing, is making waves with an incredibly aggressive starting price. In a market where electric trucks often flirt with six-figure sums, Slate is aiming for the heart of the value-conscious buyer.
What makes the Slate EV truly intriguing is its innovative modular construction. Reportedly, the pickup can be converted into a compact SUV with a relatively inexpensive kit. This transformative capability addresses the reality of how many pickup owners actually use their vehicles.
For the vast majority, the truck bed often sits empty or carries lighter loads, while the need for passenger space and enclosed cargo for family and everyday life is more frequent. Slate’s ability to morph into an SUV offers unparalleled versatility, essentially providing two vehicles in one at an exceptionally low total cost of ownership.
Slate’s price point makes it especially appealing to buyers focused on practicality and value. At $25,000, it’s positioned to be one of the most affordable EVs on the market, let alone an electric truck. This price point opens electric mobility to buyers who were previously priced out of the EV market.
Considering that most pickup truck owners primarily use their vehicles for commuting, errands, and occasional light hauling, the Slate EV’s core functionality likely meets their needs without the excess capacity and exorbitant cost of larger, more powerful trucks.
Adding to its affordability, Slate’s truck is reportedly still eligible for the U.S. federal tax credit for electric vehicles, currently as much as $7,500. For those who qualify, this effectively brings the starting price down to a mere $17,500, making it an absolute steal and a compelling alternative to many gasoline-powered used vehicles.
While specific performance figures are still emerging, early reports suggest a respectable range suitable for daily driving and a powertrain adequate for typical truck duties, even with the added weight of its modular components. Its focus on affordability likely means it won’t boast the blistering acceleration of high-end EVs, but it promises a practical and efficient driving experience.
The Slate electric pickup, with its Bezos connection, groundbreaking modularity, and incredibly low price, could very well be the value king of the electric truck revolution. The clear value of this Slate pickup — and the fact that it’s likely giving Elon Musk nightmares — makes it my Product of the Week.
Credit: The Slate pickup truck images are courtesy of Slate.
Apple Adds Brain-to-Computer Protocol to Its Accessibility Repertoire
By John P. Mello Jr.
May 14, 2025 5:00 AM PT
ADVERTISEMENT
Rubrik Forward 2025: The Future of Cyber Resilience is here
When an attacker comes for your business, will you be ready? Chart your path to cyber resilience and keep your business running. June 4 | Virtual Event | Register Now
Among a raft of upcoming accessibility tools revealed Tuesday, Apple announced a new protocol for brain-to-computer interfaces (BCIs) within its Switch Control feature. The protocol allows iOS, iPadOS, and visionOS devices to support an emerging technology that enables users to control their digital hardware without physical movement.
One of the first companies to take advantage of the new protocol will be New York-based Synchron. “This marks a major milestone in accessibility and neurotechnology, where users implanted with Synchron’s BCI can control iPhone, iPad, and Apple Vision Pro directly with their thoughts without the need for physical movement or voice commands,” the company said in a statement.
It added that Synchron’s BCI system will seamlessly integrate with Apple’s built-in accessibility features, including Switch Control, giving users an intuitive way to use their devices and laying the foundation for a new generation of cognitive input technologies.
“This marks a defining moment for human-device interaction,” Synchron CEO and Co-Founder Tom Oxley said in a statement. “BCI is more than an accessibility tool, it’s a next-generation interface layer.”
“Apple is helping to pioneer a new interface paradigm, where brain signals are formally recognized alongside touch, voice, and typing,” he continued. “With BCI recognized as a native input for Apple devices, there are new possibilities for people living with paralysis and beyond.”
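Apple has not published the protocol’s technical details, so the following Python sketch is purely conceptual; every name in it is hypothetical. It illustrates the general idea Switch Control builds on: any signal source that can be collapsed into a discrete “switch” event, including a BCI decoder’s confidence score, can drive the same scanning interface a physical button does.

```python
# Conceptual illustration only -- not Apple's API. All names are hypothetical.
import random
import time

SELECT_THRESHOLD = 0.8  # decoder confidence treated as an intentional "press"

def read_decoder_confidence() -> float:
    """Hypothetical stand-in for a BCI decoder's output: the estimated
    probability that the user intends a selection right now."""
    return random.random()  # a real system would read the implant's signal stack

def poll_for_switch_event() -> bool:
    """Collapse a continuous neural signal into the binary event that
    switch-based accessibility scanning expects."""
    return read_decoder_confidence() >= SELECT_THRESHOLD

# Toy scanning loop: the UI highlights items in turn; a "switch" selects one.
for item in ["Messages", "Safari", "Music"]:
    print(f"Highlighting {item}...")
    time.sleep(0.5)
    if poll_for_switch_event():
        print(f"Selected {item} via switch event")
        break
```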
BCI Validation
Tetiana Aleksandrova, CEO of Subsense, a biotechnology company in Covina, Calif., specializing in non-surgical bidirectional brain-computer interfaces, maintained that Apple’s announcement sends a powerful signal about the evolution of BCI. “I see it as Apple throwing open the gates — a single-stroke move that invites clinically-validated BCIs like Synchron’s Stentrode to plug straight into a billion-device ecosystem,” she told TechNewsWorld.
“For patients, it means ‘mind-to-message’ control without middleware,” she said. “For the BCI industry, it’s a public stamp that neural input is ready for prime time — and yes, that’s a thrilling milestone for all of us building the next generation of non-surgical systems. It’s a shift, moving BCI from a nascent technology to more mainstream applications.”
Aleksandrova maintained that BCI fits nicely into Apple’s overall accessibility strategy.
“Apple’s playbook is to solve an extreme edge case, polish the UX until it’s invisible, then let the rest of the world adopt it,” she explained. “VoiceOver paved the way for Siri. Switch Control turned into Face Gestures. BCI support is the natural next rung on that ladder. Accessibility isn’t a side quest for Apple — it’s the R and D lab that future-proofs their core UI.”
“Apple devices put unlimited information at users’ fingertips,” she added, “but for people with disabilities from TBI or ALS, full access can be out of reach. BCI technology helps bridge that gap, giving them a way to control and interact with their devices using only their brain activity.”
Analysts See BCI as Long-Term Technology
Apple’s embrace of BCI is significant, but its impact still lies in the future, noted Will Kerwin, technology equity analyst with Morningstar Research Services in Chicago. “While a particularly cool announcement, we think this type of feature is a long way away from full commercialization and not material to investors in Apple at this point,” he told TechNewsWorld.
Kerwin pointed out that Synchron’s Stentrode BCI currently only has a sample size of 10 people.
“Long-term, yes, this technology could have huge implications for how humans interact with technology, and we see it as an adjacency to AI, where generative AI could help improve the interface and ability for humans to communicate via the implant,” he said. “But again, we see this as an extremely long-term journey in its nascent days.”
According to the Wall Street Journal, FDA approval of Synchron’s Stentrode technology is years away. The procedure involves implanting a stent-mounted electrode array into a blood vessel in the brain, so there’s no need for open brain surgery.
“Some companies in the BCI space are focused on cortical control of prosthetics, others on cognitive enhancement or memory restoration,” Synchron spokesperson Kimberly Ha told TechNewsWorld. “What sets us apart is our focus on scalability and safety. By implanting via the blood vessels, we avoid open brain surgery, making our approach more feasible for potentially broader medical use.”
Solving the BCI Scalability Problem
Ha acknowledged that there are significant challenges to the broad adoption of BCI. “Scalability is one of the biggest,” she said.
“Historically, many BCI systems have required open brain surgery, which presents serious risks and limits to who can access the technology,” she explained. “It’s simply not scalable for widespread clinical or consumer use.”
“Synchron takes a fundamentally different approach,” she continued. “Our Stentrode device is implanted via the blood vessels, similar to a heart stent, avoiding the need to open the skull or directly penetrate brain tissue. This makes the procedure far less invasive, more accessible to patients, and better suited to real-world clinical deployment.”
There are also challenges to developing the BCI apps themselves. “The biggest challenge in developing BCI applications is the trade-off between signal quality and accessibility,” Aleksandrova explained. “While a directly implanted BCI offers strong brain signals, surgery is risky. With non-invasive systems, the resolution is poor.”
Her company, Subsense, is trying to offer the best of both worlds through the use of nanoparticles, which can provide bidirectional communication by crossing the blood-brain barrier to interact with neurons and transmit signals.
Thought-Driven Interfaces for New Use Cases
Ha noted that in addition to medical applications, BCI could be used for hands-free device control across all types of digital platforms, neuroadaptive gaming, or immersive XR experiences.
“BCI opens doors to applications in mental wellness, communication tools, and cognitive enhancement,” Aleksandrova added.
“You’ll fire off texts, browse AR screens, or write code just by thinking,” she said. “You’ll slip seamlessly into a drone or handle a surgical robot as though it were your own hand, and nudge your smart home with a silent impulse that dims the lights when focus peaks.”
“Entertainment will read the room inside your head — dialing a game’s difficulty or a film’s plot to match your mood — while always-on neural metrics warn of fatigue, migraines, or anxiety hours before you notice and even surface names or ideas when your memory stalls,” she predicted. “Your unique brainwave ‘fingerprint’ will replace passwords, and researchers are already sketching ways to preserve those patterns so our minds can outlast failing bodies.”
“I’m genuinely proud of Synchron and Apple for opening this door,” she said.
John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.
AI Is Rewriting the Rules of Brand Management
By John P. Mello Jr.
May 13, 2025 5:00 AM PT
ADVERTISEMENT
Build HubSpot Apps, Faster
New developer products preview the future of app building on HubSpot, including deeper extensibility, flexible UI, modern prototyping tools, and more. Learn More.
Although building trust through a carefully crafted brand message is still important, artificial intelligence may be undermining its traditional influence.
“AI isn’t just helping businesses create content or automate tasks; it’s empowering individuals to become instant digital detectives,” Mike Allton, chief storyteller at Agorapulse, a social media management platform for businesses, wrote Monday on LinkedIn.
What that means, he explained, is a company’s entire digital history — reviews, articles, social media sentiment, even employee feedback — is now more transparent and instantly “queryable” than ever before. “The carefully crafted brand message? It’s still important, but AI can now cross-reference it with raw, aggregated public data in seconds,” he noted.
Edwin Miller, CEO of Marchex, a conversation intelligence platform maker headquartered in Seattle, explained that the rise of large language models and real-time data analytics has effectively turned a company’s full digital footprint into a searchable, easy-to-interpret, and evaluative source of truth.
“We’re entering a world where a company’s entire identity, how it treats customers, how it responds to criticism, what employees really think, and how well it delivers on its promises, can be surfaced instantly by AI,” he told TechNewsWorld. “And not just by researchers or journalists, but by consumers, investors, and competitors.”
“This means companies no longer control the brand narrative the way they used to,” he said. “The narrative is now co-authored by customers, employees, and digital observers, with AI acting as a kind of omnipresent interpreter. That changes the playing field for brand management entirely.”
AI Shrinks Trust-Building to Milliseconds
Mark N. Vena, president and principal analyst for SmartTech Research in Las Vegas, argued that brand management is a “huge deal” in the AI age. “Brand management is no longer just about campaigns — it’s about constantly monitoring and reacting to a living, breathing digital footprint,” he told TechNewsWorld.
“Every customer interaction, review, or leaked internal memo can instantly shape public perception,” he said. “That means brand managers must be part storyteller, part crisis manager, and fully agile. The brand isn’t what you say it is — it’s what the internet says it is.”
Allton noted that AI’s capability to “vet” or “audit” is a powerful reminder that, as AI is integrated into businesses, they must also consider how the external AI ecosystem perceives them. “It’s no longer enough to say you’re trustworthy; the data must reflect it because that data is now incredibly accessible and interpretable by AI,” he wrote.
“Trust used to be built over years and could be lost in moments,” added Lizi Sprague, co-founder of Songue PR, a public relations agency in San Francisco. “Now, with AI, trust can be verified in milliseconds. Every interaction, review, and employee comment becomes part of your permanent trust score.”
She told TechNewsWorld: “AI isn’t replacing reputation managers or comms people; it’s making them more crucial than ever. In an AI-driven world, reputation management evolves from damage control to proactive narrative architecture.”
Proactive Transparency
Brand managers will also need to be more proactive, paying close attention to how their brand is represented in the most popular AI tools.
“Brands should be conducting searches that test the way their reputation is represented or conveyed in those tools, and they should be paying attention to the sources that are referenced by AI tools,” said Damian Rollison, director of market insights at SOCi, a marketing solutions company in San Diego.
“If a company focuses a lot on local marketing, they should be paying attention to reviews of a business in Google, Yelp, or TripAdvisor — those kinds of sources — all of which are heavily cited by AI,” he told TechNewsWorld.
“If they’re not paying attention to those reviews and taking action to respond when consumers offer feedback — apologizing if they had a bad experience, offering some kind of remedy, thanking customers when they give you positive feedback — then they have even more reason than ever to pay attention to those reviews and respond to them now.”
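One low-effort way to run the kind of audit Rollison describes is to query a popular AI tool programmatically and review what it says. A minimal sketch, assuming the official openai Python client, an API key in the environment, and a placeholder brand name; a real audit would repeat the exercise across several AI tools, phrasings, and dates.

```python
# Hedged sketch of an "AI reputation audit": ask an LLM what it associates
# with a brand and log the answers for human review.
# Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

BRAND = "Example Coffee Co."  # hypothetical brand name

questions = [
    f"What is {BRAND} known for, and how do customers rate it?",
    f"What complaints or controversies are associated with {BRAND}?",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any available chat model would do
        messages=[{"role": "user", "content": question}],
    )
    print(question)
    print("->", response.choices[0].message.content, "\n")
```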
Dev Nag, CEO and founder of QueryPal, a customer support chatbot based in San Francisco, explained that an AI-searchable landscape will create persistent accountability. “Every ethical lapse, broken promise, and controversial statement lives on in digital archives, ready to be surfaced by AI at any moment,” he told TechNewsWorld.
“Companies can leverage this AI-scrutinized environment by embracing proactive transparency,” he said. “Organizations should use AI tools to continuously monitor customer sentiment across vast data streams, gaining early warning of reputation risks and identifying improvement areas before issues escalate into crises.”
New Era of AI-Driven Accountability
Nag recommends conducting regular AI reputation audits, doubling down on authenticity, pursuing strong media coverage in respected outlets, empowering employees as reputation ambassadors, implementing AI monitoring with rapid response protocols, and preparing for AI-driven crises, including misinformation attacks.
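The monitoring piece of that checklist can start small. A minimal sketch, assuming reviews have already been collected from platforms like Google or Yelp into a list; it uses an open-source Hugging Face sentiment pipeline rather than any particular vendor’s monitoring product.

```python
# Hedged sketch of sentiment monitoring with rapid-response flagging.
# Assumes: pip install transformers torch
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default open model

reviews = [  # hypothetical examples standing in for a live review stream
    "Support resolved my issue in minutes. Impressed.",
    "Third broken promise this quarter. I'm done with this brand.",
]

for review in reviews:
    verdict = classifier(review)[0]  # {"label": "POSITIVE"/"NEGATIVE", "score": ...}
    if verdict["label"] == "NEGATIVE" and verdict["score"] > 0.9:
        print("ALERT for rapid-response team:", review)
```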
Transparency without controls, though, can harm a brand. “Doing reputation management well requires a tight focus on the behavior of those who can affect the appearance of the related firm,” said Rob Enderle, president and principal analyst of the Enderle Group, an advisory services firm in Bend, Ore.
“If more transparency is created without these controls and training in place, coupled with strong execution, monitoring, and a strong crisis team, the outcome is likely to be catastrophic,” he told TechNewsWorld.
“AI is now part of the reputation equation,” added Matthew A. Gilbert, a marketing lecturer at Coastal Carolina University.
“It monitors everything, from customer reviews to employee comments,” he told TechNewsWorld. “Brands should treat it as an early warning system and act before issues escalate.”
AI in Branding Demands Action, Not Panic
Allton argued that the rise of AI as a reputation manager isn’t a cause for alarm but a cause for action. However, it does make some demands on businesses. They include:
Non-Negotiable Radical Authenticity
If there are inconsistencies between what your brand promises and what the public data reflects, AI-powered searches will likely highlight them. Your operations must genuinely align with your messaging.

“Authenticity is no longer a decision made by brands regarding which cards to reveal; instead, it has become an inevitable force driven by the public, as everything will eventually come to light,” said Reilly Newman, founder and brand strategist at Motif Brands, a brand transformation company in Paso Robles, Calif. “Authenticity is not merely a new initiative for brands,” he told TechNewsWorld. “It is a necessity and an expected element of any company.”
The “AI Response” Is Your New First Impression
For many, the first true understanding of your business might come from an AI-generated summary, Allton noted. What story is the collective data telling about you?

Kseniya Melnikova, a marketing strategist with Melk PR, a marketing agency in Sioux Falls, S.D., recalled a client who believed their low engagement was due to a lack of clear marketing materials.
“Using AI to analyze their community feedback, we discovered the real issue was that customers misunderstood who they were,” she told TechNewsWorld. “They were perceived as a retailer when, in fact, they were an insurance fulfillment service. With this insight, we produced fewer — but clearer — materials that corrected the misunderstanding and improved customer outcomes.”
Human Values Still Drive the Core Code
While AIs process the data, the data itself reflects human experiences and actions, Allton explained. Building a trustworthy business rooted in solid ethical practices provides the best input for any AI assessment.

Brand Basics
Businesses that stick to fundamentals, though, shouldn’t have to worry about the new unofficial reputation manager. “Companies need to deliver great products and services and back them up with strong support,” asserted Greg Sterling, co-founder of Near Media, a market research firm in San Francisco.
“Marketing is a separate thing, but their core business and the way they treat their customers need to be very solid and reliable,” he told TechNewsWorld. “Marketing and brand campaigns can then be built on top of that fundamental authenticity and ethical conduct, which will be reflected in AI results.”
“I think people get very confused about what makes a successful business, and they’re focused on tips and tricks and marketing manipulation,” he said. “Great marketing is built on great products and services. Great brands are built by delivering great products and services, being consistent, and treating customers well. That’s the core proposition that everything else flows out of.”
John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.
-
AI and the Algorithmic Muse: Entertainment’s Next Act
Hold onto your popcorn, folks, because artificial intelligence (AI) is about to take our beloved entertainment for a joyride, and it probably won’t ask for directions. We’re talking about a future where AI isn’t just fetching your slippers (it might do that too; who knows?) but is actively involved in crafting stories that make us laugh, cry, or hide behind the sofa. From eBooks that read you back, to movies that might just star you (unintentionally or otherwise), the digital script is being rewritten. So, grab your (soon-to-be AI-recommended) snack of choice, and let’s peek at what the future of fun, powered by our increasingly clever silicon sidekicks, might look like. We’ll close with my product of the week, the new Violet (well, really Purple) Surface Pro from Microsoft.
AI Personalization and Entertainment Preferences
Right now, AI in entertainment is like that friend who thinks they know your taste because you watched one cat video, and now your feed is an endless feline festival. Platforms like Netflix and Spotify are doing their best with AI-powered recommendations, trying to guess what movie or song will tickle your fancy. But bless their digital hearts, it’s often a bit of a crapshoot.
Future AI, however, aims to be your entertainment soulmate. It won’t just know you like sci-fi; it’ll understand your particular craving for “1950s-style alien invasion comedies but with surprisingly good character development for the pug.” It’ll be like having a tiny, all-knowing librarian or film critic living in your phone, unearthing obscure indie eBooks, or that one bizarre international TV show that speaks directly to your quirky soul. Finally, a way to find stories so niche you’ll wonder if the AI just wrote them for you on the spot.
AI-Driven Adaptation in Books and Movies
Get ready for stories less set in stone and more like Silly Putty. For eBooks, imagine a novel that senses you’re doom-scrolling instead of reading. With your say-so, the AI might subtly tweak the plot: maybe that boring council meeting gets a surprise dragon attack, or that side character you secretly adore suddenly gets a heroic monologue. It’s not about the AI ghostwriting a new ending because it didn’t like the original, but more like offering a director’s cut tailored to your brainwaves.
TV shows and streamed movies will get the same treatment. Is the family movie night hitting a snag because little Timmy is terrified of the friendly ghost? The AI might gently dim the spooky sound effects or add a reassuring voiceover from a cartoon squirrel. Or if you, a connoisseur of crime dramas, are ten steps ahead of the detective, the AI could weave in a red herring so fiendishly clever you’ll tip your fedora. Such AI-powered storytelling will craft narratives so adaptive they’ll feel like they were made just for your Tuesday night.
AI Personalization in Film and TV Experiences
Cinema night is about to get a whole lot more personal. First up, AI could sprinkle in some local flavor. Imagine watching a superhero movie, and the obligatory coffee shop scene features a digital version of your town’s beloved, slightly sticky doughnut emporium. But why stop there? The real fun begins with you in the picture. With your permission (and a waiver, probably), your face could appear digitally as an extra in the crowd scene, or your name might be subtly dropped by a minor character ordering a latte. “Table for Smith, party of four? Your intergalactic overlord will see you now.”
For the truly adventurous, picture this: the AI gauges the audience’s collective gasps, laughs, or even synchronized eye-rolls and nudges the plot. “Hmm, they didn’t buy that plot twist; deploy alternate ending B!” This is way beyond just yelling at the screen; it’s the screen (politely) listening back, an extension of how AI already tries to understand viewer preferences and tailor content. Every showing could be a fresh adventure!
Smarter NPCs and Adaptive Game Design
Video games are where AI is really flexing its muscles, and the future looks hilariously immersive.
Human-Like NPCs: Forget those Non-Player Characters who repeat the same three lines of dialogue even after you’ve saved their village from a meteor. Future NPCs will have memories. They’ll remember you “accidentally” sold their prize-winning turnip. They’ll hold grudges. They’ll form opinions. They might even start a book club without you. We’re talking NPCs with their own sitcom-worthy lives.
Hyper-Realistic Avatars: Your in-game avatar won’t just look like a slightly shinier version of you; it’ll emote like you. Imagine trying to bluff your way through a space poker game when your avatar keeps nervously sweating digital bullets. AI will drive character customization to uncanny levels.
Dynamically Altered Content: Is that boss battle making you want to throw your controller through the screen? A benevolent AI might subtly suggest a new strategy through an “ancient cryptic clue” (that it just generated) or maybe even trip the boss on a conveniently placed banana peel. The goal is to make games so adaptive that they practically read your mind, keeping you hooked instead of ready to throw your controller.
Observer Influence: Even watching games could become interactive. Imagine your Twitch chat collectively voting to unleash a flock of slightly confused pigeons into your friend’s meticulously planned stealth mission. Harmless chaos, managed by AI!
AI Collaboration and Audience Contributions
If your brilliant, shouted-at-the-screen suggestion for how the detective could finally solve the case makes its way into the dynamically adapting TV show and boosts ratings, should you get a cut? It’s a thought! As AI lets audiences nudge stories or even contribute game-changing ideas, we might see new ways to reward those golden nuggets of accidental genius. Forget royalties; maybe you get a “Narrative Nudge” bonus or your name in the AI-generated “Special Thanks to that Shouter in Row G” credits. Navigating IP and copyright will be a hilarious legal tangle. Still, a future where your brilliant shower thoughts actually improve a blockbuster isn’t totally off the table, ushering in a truly collaborative (and potentially litigious) entertainment ecosystem.
Transmedia Storytelling
Now, let’s get really wild. Imagine your day starts with an eBook whose plot thickens based on your morning mood — as determined by how aggressively you hit the snooze button. At a key moment, the eBook suggests, “Want to see how this daring escape really goes down?” Bam! You’re playing a short, intense video game sequence on your phone. Later, that game’s outcome subtly alters the character dynamics in the TV show adaptation you watch that evening. Your personal “entertainment AI” — let’s call it “Hal” for kicks; what could go wrong? — learns your tastes across everything. It then weaves a grand, personalized narrative that flows between your Kindle, smart TV, game console, and even that interactive movie you saw last week. Characters you love could show up in unexpected places, their storylines intertwining across platforms, all shaped by your choices and the AI’s increasingly insightful (or hilariously off-base) understanding of what makes you tick. This transmedia storytelling, supercharged by AI, would be less like watching a story and more like living in your own, ever-changing, AI-fueled epic.
Wrapping Up: AI’s Entertainment Future
So, is AI coming for our entertainment? You bet your algorithm it is! It promises a future where stories are more personal, more interactive, and possibly more delightfully bizarre than ever before. There will be ethical quandaries, creative debates, and probably a few AI-generated movie scripts that are so bad they’re good. One thing’s for sure: the future of entertainment looks less like a pre-recorded show and more like an improvisational comedy act where we all, willingly or not, get to be part of the cast. Now, if you’ll excuse me, my AI just recommended a documentary about the history of sporks. Wish me luck.
The New Microsoft Surface Pro: Purple Reign
Surface Pro 12-inch, Violet (Image Credit: Microsoft)
My wife and I have a thing for purple. Not just any purple, but that deep, vibrant, makes-you-smile kind of purple. So, imagine our royal delight when Microsoft unveiled its latest Surface Pro — the brand new 12-inch Surface Pro Copilot+ PC — and lo and behold, among the standard Platinum and a rather fetching ‘Ocean’ blue-green, sits a glorious Violet option. After years of Surface devices playing it safe with silvers and blacks, this one feels like our Surface.
A Brief History Lesson
Let’s be honest: the Surface line was born largely because the iPad was eating everyone’s lunch back in 2012. Microsoft’s vision was ambitious: create a tablet that could truly replace your laptop, running full-fat Windows. It was a noble quest, but the path was rocky. Remember Windows RT? Shudder. For years, Surface devices felt like brilliant ideas slightly hampered by compromises: battery life, app compatibility (especially on early Arm versions), or trying to convince people they weren’t just chunkier iPads. Powerful and versatile, yes — but not always fun.
Beyond a Color Refresh, It’s a True Upgrade
This new 12-inch Surface Pro feels different. It’s incredibly thin and light, rocking a new Arm-based Snapdragon X Plus processor. Now, Arm on Windows has had a checkered past, but this is part of the new “Copilot+ PC” wave, promising significant performance leaps and killer battery life (up to 16 hours claimed!), plus integrated AI features like Recall that might actually be useful, or slightly creepy. The jury’s still out. Coupled with the excellent (and still sold separately, sigh) Surface Pro Keyboard, this finally feels like the ultra-portable, do-anything Windows machine we’ve been waiting for — and did I mention it comes in Violet? It just ties the whole elegant package together.
Who Should Consider the New Surface Pro?
Okay, besides people like us with impeccable taste in colors, who is this for? It’s ideal for mobile professionals who need the full power of Windows desktop apps in a super-portable form factor. Students who need something light for lectures but can run serious software will love it. Creatives who prefer Windows-based tools over iPadOS limitations should definitely take a look. If you want the ultimate blend of tablet portability and laptop capability and the Apple ecosystem isn’t your jam, this Surface Pro makes a compelling case.
Cost and Availability
Here’s the catch: it’s not cheap, especially once you add the essential keyboard. The new 12-inch Surface Pro Copilot+ PC starts at $799 (tablet only). If you want that gorgeous Violet (or Ocean), you need to step up to the 512GB storage model, which is $899. Pre-orders are open now, with devices shipping starting May 20.
Despite the separate keyboard cost, this sleek, powerful, and finally purple machine hits a sweet spot. Microsoft’s new 12-inch Surface Pro Copilot+ PC in Violet is, without a doubt, my Product of the Week. Microsoft, you finally did it!
-
Matter and Infineon Redefine Smart Home Security Standards
In the rapidly expanding world of smart homes, the spotlight has long shone on convenience, connectivity, and compatibility. But as major ecosystems like Amazon Alexa, Google Home, and Apple HomeKit compete for dominance, one initiative — Matter, backed by the Connectivity Standards Alliance (CSA) — is quietly transforming the smart home not only through interoperability but also through an equally vital and underappreciated pillar: security.
For Infineon Technologies, Matter’s emphasis on built-in, standardized security isn’t just a feature — it’s central to the company’s long-term strategy. In a recent interview, Infineon’s Steve Hanna, a veteran in IoT security and a leading voice within the Matter initiative, explained why this often “unsexy” aspect of smart home design may be its most important.
Infineon’s Global Strategy and Role in IoT
Infineon Technologies is a leading global semiconductor company headquartered in Germany, known for delivering secure, energy-efficient solutions across automotive, industrial, and consumer markets. The company plays a pivotal role in the Internet of Things (IoT) ecosystem by providing embedded security, power management, and connectivity technologies. Infineon’s broad portfolio supports smart home, smart factory, and edge computing applications, enabling reliable and scalable IoT deployments. Its long-term commitment to security and innovation has made it a trusted partner for manufacturers building the next generation of connected devices.
Infineon’s overarching corporate strategy is built around two global megatrends: decarbonization and digitalization. The smart home lives squarely at the intersection of these two forces. “Everything is becoming smart,” Hanna said, “not just in the home, but in the workplace, factories, farms — everywhere.”
That connectivity can help optimize energy use, reduce waste, and streamline operations. However, it also creates new entry points for cyberattacks, especially as devices become more connected and embedded in daily life. From Infineon’s perspective, it was clear early on that the success of a smart home standard would depend not just on ease of use but on security from day one. So, when the then-Zigbee Alliance began developing a new open standard, Infineon committed to a seat at the table. “When we saw Apple, Amazon, Google, Samsung, and others get involved, we knew this would be big,” Hanna said. “We saw it as the TCP/IP moment for the smart home.”
Matter Built Security Into the Standard From Day One
Unlike past attempts at unifying the smart home — many of which faltered due to proprietary lock-in or lack of coordination — Matter was built from the ground up on four core principles, one of which is security. That foundational design choice makes Matter fundamentally different. Hanna explained that Matter’s security standards are mandatory, baked in, and non-optional — a significant departure from previous device ecosystems. “Just like your browser won’t access an unsecured site anymore, Matter devices won’t operate without encryption and authentication,” he said.
In practical terms, Matter 1.0 includes 10 baseline security features, including secure onboarding, device attestation, encrypted communication, automatic software updates, and more. These aren’t value-adds or premium options — they’re standard requirements for certification. And crucially, manufacturers can’t disable them.
Why Security Matters, Even If Consumers Don’t See It
Security may not sell smart speakers, but it protects what matters most: consumer trust. Hanna framed the issue in stark terms: “Any connected device becomes a potential attack surface. Lightbulbs, thermostats, locks — if they’re online, they must be secure.”
A common vulnerability? Firmware that doesn’t get updated. “Many consumers don’t update their devices,” Hanna noted, “and many manufacturers avoid automatic updates to reduce support costs.” That creates real risk, as attackers routinely target unpatched devices to build botnets or gain access to home networks.
Matter solves this with mandatory, secure, seamless, and standardized over-the-air (OTA) updates. The process is transparent to the end user but critical for long-term protection. Even better, the CSA recently introduced a rapid recertification process so manufacturers can deploy critical patches quickly, bypassing long delays that might otherwise expose homes.
Certification and Compliance Build Consumer Confidence
To carry the Matter logo, a device must pass rigorous automated testing. Updates must be re-certified by an authorized testing lab or, for trained manufacturers, through an internal automated process that ensures compliance. This framework gives consumers peace of mind but also relieves manufacturers of the burden of customizing devices for each ecosystem. “In the past, developers had to create four different versions of a device to work with Apple, Google, Amazon, and Samsung,” Hanna said. “Matter consolidates that into one universal standard.” That consolidation doesn’t just simplify development — it also ensures a consistent security posture across all certified devices.
Infineon’s Role: Security at the Silicon Level
As a hardware company, Infineon brings deep expertise in embedded security to the Matter ecosystem. Its secure microcontrollers and cryptographic modules are explicitly designed to support Matter-compliant features, from onboarding to key storage and device attestation. “We’ve been working on IoT security for over 30 years,” said Hanna. “We’re not just implementing standards — we’re helping to shape them.” Infineon’s secure elements are now used in smart locks, lighting, appliances, and more, helping manufacturers implement Matter with minimal friction. Importantly, Infineon products also support advanced features like device identity attestation, enabling ecosystems to detect and block counterfeit or cloned devices — an increasingly serious threat as smart home adoption grows.
AI and Revocation Advance Smart Home Security
Hanna emphasized that Matter isn’t static. The standard continuously evolves, with new capabilities rolling out each release — a recent example being device revocation. If a cloned or compromised device is detected in the wild, Matter-compliant ecosystems can now block it — a critical protection against supply chain attacks and spoofed devices.
Looking ahead, AI adds both complexity and opportunity. “The bad guys are already using AI to find vulnerabilities,” Hanna said. “So, the good guys need to use AI to detect and respond to them faster.” Additionally, as AI becomes embedded in more devices — think learning thermostats or voice-controlled assistants — ensuring the AI is secure becomes another layer of responsibility.
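Device attestation of the kind Hanna describes reduces to standard public-key verification: before admitting a device, the ecosystem checks that its attestation certificate is current and chains back to a trusted root. Below is a simplified Python sketch using the widely available cryptography package (version 42 or later for the UTC validity properties); it illustrates the general idea only, not Matter’s full Device Attestation procedure, which also involves intermediate certificates, certification declarations, and revocation checks. The PEM inputs are assumed to be supplied by the caller.

```python
# Simplified sketch of certificate-based device attestation. Illustration of
# the general idea only, not Matter's full attestation procedure. Assumes the
# "cryptography" package (>= 42) and caller-supplied PEM certificates.
from datetime import datetime, timezone

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec


def device_appears_genuine(device_cert_pem: bytes, root_cert_pem: bytes) -> bool:
    """Check that a device attestation certificate is currently valid and
    was signed by the trusted root (Matter uses ECDSA on NIST P-256)."""
    device_cert = x509.load_pem_x509_certificate(device_cert_pem)
    root_cert = x509.load_pem_x509_certificate(root_cert_pem)

    # Reject expired or not-yet-valid certificates.
    now = datetime.now(timezone.utc)
    if not (device_cert.not_valid_before_utc <= now <= device_cert.not_valid_after_utc):
        return False

    # Verify the root's signature over the device certificate's contents.
    try:
        root_cert.public_key().verify(
            device_cert.signature,
            device_cert.tbs_certificate_bytes,
            ec.ECDSA(device_cert.signature_hash_algorithm),
        )
        return True
    except Exception:  # InvalidSignature, wrong key type, etc.
        return False
```

In the real protocol the chain is longer and the check happens during commissioning, but the trust decision has the same shape: no valid chain, no admission.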
Conclusion: Security Is the Killer Feature You Don’t See
Matter’s built-in security may be its most disruptive and lasting contribution in a market dominated by hype over compatibility and convenience. It provides the invisible foundation on which trust, privacy, and longevity depend. For companies like Infineon, that’s not just a compliance issue — it’s a competitive advantage. “We believe Matter will do for the smart home what Bluetooth and Wi-Fi did for wireless,” Hanna said. “But it has to be secure from the beginning — and stay secure. That’s what Infineon is committed to delivering.”
So, while consumers may never ask if their smart bulb or doorbell has secure firmware updates or cryptographic identity, they’ll benefit from it whenever they turn on the lights, lock the door, or adjust the thermostat. Behind the scenes, Infineon — and the Matter initiative — will ensure those simple moments stay safe, reliable, and smart.
-
Waymo Builds Arizona Factory To Grow Robotaxi Fleet
By John P. Mello Jr.
May 7, 2025 5:00 AM PT
The 5th-generation Waymo Driver atop an all-electric Jaguar I-Pace SUV, part of Waymo’s current robotaxi fleet. (Source: Waymo)
Waymo, the autonomous vehicle arm of Google’s parent company, Alphabet, revealed Monday that it plans to expand its manufacturing capacity in Arizona to add 2,000 more robotaxis to its fleet. The company explained in a blog that it’s partnering with Magna International, a Canadian auto parts maker, to build a 239,000-square-foot facility in Mesa to retrofit more than 2,000 Jaguar I-Pace electric SUVs with Waymo’s autonomous driving technology.
The vehicles will be used to expand Waymo’s robotaxi services to three new metro areas in 2026: Atlanta, Miami, and Washington, D.C. Currently, Waymo has robotaxi operations in Phoenix, San Francisco, Los Angeles, and Austin, Texas.
According to Waymo, the new facility’s flexible design will enable it to integrate the 6th-generation Waymo Driver on new vehicle platforms, beginning with the Zeekr RT model later this year.
Waymo’s next-gen rider-first robotaxi platform, developed with Zeekr, will debut the 6th-generation Waymo Driver. (Source: Waymo)
Waymo explained that the plant will introduce an automated assembly line and other efficiencies over time because it needs to build multiple platforms simultaneously and at higher volumes. It noted that when the facility is operating at full capacity, it will be capable of building tens of thousands of fully autonomous Waymo vehicles per year.
“We have made great strides in steadily expanding Waymo One ride-hailing service to multiple commercial markets, and now is the time to scale our technology further to support existing and future deployments,” Waymo Product Communications Manager Chris Bonelli told TechNewsWorld.
Significant Announcement
“Waymo’s announcement is a significant milestone for the autonomous driving industry,” said Kathleen Rizk, senior director of user experience benchmarking and technology at J.D. Power, a consumer research, data, and analytics firm, in Troy, Mich. “It demonstrates their unwavering commitment to advancing this technology, even as other companies have withdrawn from the market due to various challenges,” she told TechNewsWorld.
Edward Sanchez, a senior analyst in the automotive practice of TechInsights, a global technology intelligence company, added that the move has market benefits for Waymo. “This announcement solidifies Waymo’s current position as the leading autonomous ride-hailing provider in the U.S. and signifies the company’s growth ambitions,” he told TechNewsWorld.
The commitment to manufacturing could also have broad market benefits, contended Seth Goldstein, an equity strategist and chair of the Electric Vehicle Committee at Morningstar Research Services in Chicago. “Any announcement by a company to build a dedicated robotaxi facility is positive for the broader industry, as it will help accelerate AV [autonomous vehicle] adoption,” he told TechNewsWorld.
Sam Abuelsamid, vice president of market research at Telemetry, a Detroit-based transportation research and advisory company, acknowledged that adding additional capacity to up-fit vehicles with the Waymo Driver is a good sign for Waymo’s progress in deploying their robotaxis, but argued it doesn’t really say anything about the rest of the industry at this stage.
“Waymo is at least confident enough in the safety of its system and customer acceptance that it is willing to invest in significantly expanding its fleet to support operations in more cities,” he told TechNewsWorld. “Whether anyone else is able to do this anytime soon remains to be seen, but Zoox is likely to be next with its purpose-built robotaxis.”
Zoox, which was acquired by Amazon in 2020, offers robotaxi services in the San Francisco Bay Area, Seattle, and Las Vegas. It uses vehicles built explicitly for autonomous ride-hailing rather than retrofitting existing vehicles for that purpose.
Market Leader
Abuelsamid noted that in North America, Waymo is the clear leader in developing robust automated driving systems (ADS) and the back-end services required to run an operation. They’ve also developed fairly healthy relationships with local authorities to integrate the systems with first responders, get curb access, and overcome other challenges.
“Zoox is likely to be next in line to deploy commercially,” he said. “Tesla aims to launch a service in Austin next month, but I don’t see evidence that it’s close to being able to safely operate without supervision yet. My guess is they will be operating with a remote driver supervisor on each car to be ready to disengage at any moment, so it would technically still be a Level 2 system, like what they have today.”
A Level 2 autonomous driving vehicle is equipped with advanced driver assistance systems (ADAS) that simultaneously control both steering and acceleration/deceleration. However, the human driver must remain engaged and ready to take over at any time.
Morningstar technology equity analyst Malik Ahmed Khan noted that Waymo is quite well-positioned in the competitive autonomous vehicle market. “It is the only paid AV taxi service right now doing more than 250,000 rides a week,” he told TechNewsWorld. Khan explained that Waymo’s business model is quite scalable, especially as the firm’s Hyundai/Zeekr vehicles help drive down costs.
Interior view of Waymo’s rider-first vehicle platform, designed for autonomous ride-hailing in partnership with Zeekr. (Source: Waymo)
Safety Still a Concern for AV Adoption
It also differentiates itself from competitors in a number of ways, he added. “Certainly, there is a first-mover/brand awareness differentiation,” Khan observed. Waymo also has deep pockets. “Having a well-capitalized backer such as Alphabet helps, especially when it comes to differentiation against smaller companies with limited access to capital,” Khan said. “Word of mouth and perceptions of safety matter, and Waymo’s strong penetration into its markets shows that an increasing number of riders are on board Waymo from a safety perspective,” he added.
Rizk noted that according to J.D. Power’s 2024 U.S. Robotaxi Experience Study, safety concerns are a significant obstacle to consumer acceptance of autonomous driving. “Consumers want to see federal and state regulations to support this industry,” she said. “Furthermore,” she continued, “safety concerns impact consumer comfort with fully automated self-driving vehicles, and our study findings indicate that 83% of consumers desire safety statistics about the technology before using autonomous vehicles. These safety statistics should not only include data but also demonstrate a proven track record over time, validating consistent performance.”
She added that consumers also have concerns that automated vehicle technology may negatively affect data privacy and be susceptible to hacking, as over 80% want to know what measures are being taken to prevent the hacking of AVs.
Mainstream AV Use Still a Decade Away
With its investment in new manufacturing capacity, Waymo clearly has its eyes focused on a mainstream future. However, that future may be slow to develop. Vehicle deployment, for instance, will be spotty. “The pace of deployment for autonomous vehicles varies widely by region,” TechInsights’ Sanchez said. “In the U.S., it is largely a state-by-state proposition in terms of approval, whereas in markets such as China with more centralized control, approval can be granted on a country-wide basis.” “Even in the EU,” he added, “there are still regional restrictions, although the European Commission is looking to streamline the approval process to allow for multi-country operation of autonomous vehicles.”
Morningstar forecasts autonomous vehicles will see rapid adoption growth over the next decade and make up 50% of all ride-hailing rides by 2035 in the U.S. and Canada. “We expect AVs will roll out in a city-by-city approach, the same way Uber and Lyft did last decade,” Goldstein said. “Our bullish outlook is driven by our view that AVs will be cheaper than traditional ride-hailing rides, leading to higher adoption,” he continued. “We also see barriers to adoption falling in the coming years to drive higher growth.”
Telemetry’s Abuelsamid also doesn’t expect autonomous vehicles to become common before the end of the decade. “While strides have been made on the hardware cost, the operational costs of a robotaxi system are still very high, and not even Waymo is close to profitability,” he explained. “There is also the problem of utilization because robotaxi demand is uneven during the day, so finding a way to use the vehicles for tasks like deliveries during low-demand periods will be essential,” he said. “I think applications like robo-shuttles to support and expand transit services, long-haul trucking, and middle-mile deliveries make more sense from a business perspective in the near term.”
-
Chatbots Having Minimal Impact on Search Engine Traffic: Study
By John P. Mello Jr.
May 6, 2025 5:00 AM PT
AI chatbots have barely made a dent in traffic to popular search engine sites over the past two years, according to a study by SEO and backlink services firm OneLittleWeb. The study analyzed global web traffic from April 2023 to March 2025. In the most recent year, chatbot sites accounted for just 2.96% of the visits received by search engines.
Between April 2024 and March 2025, search engine traffic declined only slightly — down 0.51% to 1.86 trillion visits — while chatbots saw an 80.92% year-over-year spike in traffic. The modest drop in search traffic suggests that, despite explosive growth, AI chatbots are not yet displacing traditional search behavior in any meaningful way.
“Even with ChatGPT’s massive growth, it still sees approximately 26 times fewer daily visits than Google,” wrote the author of the study, Sujan Sarkar, founder of OneLittleWeb. “While AI chatbots are growing fast, search engines continue to hold a dominant position in daily user engagement,” he added.
Sarkar also noted that search engines like Google and Microsoft Bing have leveraged AI features like AI Overviews and Search Generative Experience (SGE) to boost traffic in early 2025. “Despite a temporary dip, these integrations have helped revive interest and usage,” he wrote.
Why Search Still Outpaces AI Chatbots
The length of the study period could be distorting the current chatbot versus search engine picture, contended Rob Enderle, president and principal analyst of the Enderle Group, an advisory services firm in Bend, Ore. “Using AI for search is relatively new, and people haven’t learned how to properly prompt yet, so a survey looking back over two years of data would reflect a world mostly pre-AI search and one where the tools were just starting to be used at the end of the survey period, biasing the results toward poor AI use,” he told TechNewsWorld. “With emerging technologies, surveys need to be more real-time, as the old data won’t reflect current reality,” he said.
Search engines also have advantages for drumming up traffic. “Search engines benefit from decades of user trust and deeply ingrained habits,” explained Mark N. Vena, president and principal analyst at SmartTech Research in Las Vegas. “They offer a fast, comprehensive way to find information, products, news, and more — across virtually every category,” he told TechNewsWorld. “Their massive web indexing and ad-supported business models fund constant innovation.” He added, “Integration with browsers and mobile devices as the default search method reinforces daily usage.”
Evolution, Not Dissolution
The study also maintained that search engines are evolving rather than fading, integrating AI tools to offer a richer, more personalized user experience. At the same time, chatbots are carving out their niche in tasks requiring direct, customized responses. “Search engines are not disappearing; they’re adapting by embedding AI to improve response quality and personalization,” Vena said. “Meanwhile, chatbots thrive in scenarios requiring synthesis, dialogue, or creative output.”
“The tools serve different intents,” he continued. “Search is broad and navigational. Chatbots are task-focused and conversational. Their coexistence reflects a diversification of how users seek and process information.”
Enderle, though, argued that AI providers are struggling to carve out their niche. “This will change over time, but I’d have thought it would happen sooner than it is happening. I blame the lack of promotion as the cause for the slow pickup,” he said.
“There’s some convergence of AI and search happening, as Google integrates more AI and ChatGPT emphasizes and improves search,” added Greg Sterling, co-founder of Near Media, a market research firm in San Francisco. “Search engines aren’t dying, but the old user experience is, with implications for publishers and advertisers,” he told TechNewsWorld.
Chatbots Expand Into Companionship, Therapy
Alex Ambrose, a policy analyst with the Information Technology and Innovation Foundation, a research and public policy organization in Washington, D.C., agreed that chatbots are attracting users with personalized experiences. “Users are increasingly turning to chatbots not just for tasks like rebooking flights or automating workflows but for companionship and therapeutic applications,” she told TechNewsWorld. “These chatbots — or AI companions — are taking the technology a step further by providing users with extremely personal experiences in the shape of providing emotional support and digging into deeper personal questions,” she said.
The study also ranked chatbots by visits. ChatGPT was at the top of the list, followed by DeepSeek, Gemini, Perplexity, Claude, Microsoft Copilot, Blackbox AI, Grok, Monica, and Meta AI. It noted the fastest-growing chatbots were DeepSeek and Grok. DeepSeek experienced a staggering surge in traffic, with total visits jumping from 1.5 million to 1.7 billion during the two-year study period, an increase of 113,007%. Grok’s growth was 353,787%, increasing from 61,200 visits to 216.5 million.
“Grok and DeepSeek are growing fast due to strong backing and strategic positioning,” Vena said. “Grok leverages its integration into X — formerly Twitter — giving it immediate reach and visibility,” he explained. “DeepSeek appeals to users looking for more technical, research-oriented AI capabilities. Both benefit from user curiosity about alternatives to ChatGPT and differentiated conversational experiences.”
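As a sanity check, the study’s headline figures are simple ratios over the raw visit counts quoted above, reproducible in a few lines of Python. The implied chatbot visit total is a derived estimate rather than a number from the study, and the small differences from the study’s growth percentages come from the rounded inputs.

```python
# Sanity-checking the study's ratios from the visit counts quoted above.
search_visits = 1.86e12          # search engine visits, Apr 2024 - Mar 2025

# Chatbots received 2.96% as many visits as search engines in that year.
chatbot_visits = search_visits * 0.0296
print(f"Implied chatbot visits: {chatbot_visits:.2e}")  # ~5.5e10 (~55 billion)


# Year-over-year growth, computed the usual way: (new - old) / old * 100.
def yoy_growth_pct(old: float, new: float) -> float:
    return (new - old) / old * 100


print(f"DeepSeek: {yoy_growth_pct(1.5e6, 1.7e9):,.0f}%")    # ~113,233% (study: 113,007%)
print(f"Grok:     {yoy_growth_pct(61_200, 216.5e6):,.0f}%")  # ~353,658% (study: 353,787%)
```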
Chatbots vs. Search: The Battle for User Intent
Vena contended that the real contest isn’t just about traffic. “It’s about controlling the user’s starting point when they have a question or goal,” he said. “Chatbots may win in productivity or assistance, while search engines still dominate for broad exploration and commerce. Integration and default positioning will shape the future more than features alone. The next wave may involve blended experiences that merge the strengths of both.”
Sterling agreed that the simple traffic analysis approach doesn’t tell the whole story about how usage is changing. “As people become more sophisticated about AI, they’re being more discriminating about how to use it versus search,” he noted. “The idea that people either use AI or search is false. Both are being used, but the ways that AI and search are used are evolving.”
“More and more research will migrate to AI unless Google co-opts that,” he continued. “Google will probably hold on to most navigational, shopping, and commercial queries.” “Having said that,” Sterling added, “the market is now moving very fast and evolving quickly. Any prediction made today may turn out to be wrong tomorrow.”
Enderle pointed out that the market is at the very beginning of this trend. “I expect by 2030 kids will look back at non-AI search engines like they now look back at dial phones, asking how anyone lived in these dark times,” he predicted.
-
Meta Llama 2025: The Open-Source AI Tsunami
A wave of disruption is sweeping through AI. Meta’s recent unveiling at LlamaCon 2025 of the roadmap for its Llama family of large language models (LLMs) paints a compelling picture, one where open source isn’t just a preference, but the very engine driving AI’s future. If Meta’s vision comes to fruition, we’re not just looking at incremental improvements; we’re facing an AI tsunami powered by collaboration and accessibility, threatening to wash away the walled gardens of proprietary models.
Llama 4: Faster, Multilingual, Vast Context
The headline act, Llama 4, promises a quantum leap in capabilities. Speed is paramount, and Meta claims significant acceleration, making interactions feel more fluid and less like waiting for a digital oracle to deliver its pronouncements. But the true game-changer appears to be its multilingual prowess, boasting fluency in a staggering 200 languages. Imagine a world where language barriers in AI interactions become a quaint historical footnote. This level of inclusivity has the potential to democratize access to AI on a truly global scale, connecting individuals regardless of their native tongue.
Furthermore, Llama 4 is set to tackle one of the persistent challenges of LLMs: context window limitations. The ability to feed vast amounts of information into the model is crucial for complex tasks, and Meta’s claim of a context window potentially as large as the entire U.S. tax code is mind-boggling. Think of the possibilities for nuanced understanding and comprehensive analysis.
The dreaded “needle in a haystack” problem — retrieving specific information from a large document — is also reportedly seeing significant performance improvements, with Meta actively focused on making it even more efficient. This enhanced ability to process and recall information accurately will be critical for real-world applications.
Scalability Across Hardware
Meta’s strategy isn’t just about building behemoth models; it’s also about making AI accessible across a range of hardware. The Llama 4 family is designed with scalability in mind. “Scout,” the smallest variant, is reportedly capable of running on a single Nvidia H100 GPU, making powerful AI more attainable for individual researchers and smaller organizations. “Maverick,” the mid-sized model, will also operate on a single GPU host, striking a balance between power and accessibility.
While the aptly named “Behemoth” will undoubtedly be a massive undertaking, emphasizing smaller yet highly capable models signals a pragmatic approach to widespread adoption. Crucially, Meta touts a very low cost-per-token and performance that often exceeds other leading models, directly addressing the economic barriers to AI adoption.
Llama in Real Life: Diverse Applications
Llama’s reach extends beyond earthly confines. Its deployment on the International Space Station, providing critical answers without a live connection to Earth, highlights the model’s robustness and reliability in extreme conditions.
Back on our planet, real-world applications are already transformative. Sofya, a medical application leveraging Llama, is substantially reducing doctor time and effort, promising to alleviate burdens on healthcare professionals. Kavak, a used car marketplace, is using Llama to provide more informed guidance to buyers, enhancing the consumer experience. Even AT&T is utilizing Llama to prioritize tasks for its internal developers, boosting efficiency within a major corporation.
A partnership between Box and IBM, built on Llama, further assures both performance and the crucial element of security for enterprise users.
Open, Low-Cost, User-Centric AI
Meta aims to make Llama fast, affordable, and open — giving users control over their data and AI future. The release of an API to improve usability is a significant step towards this goal, lowering the barrier to entry for developers. The Llama 4 API promises an incredibly user-friendly experience, allowing users to upload their training data, receive status updates, and generate custom fine-tuned models that can then be run on their preferred AI platform. This level of flexibility and control is a direct challenge to the closed-off nature of some proprietary AI offerings.
Tech Upgrades and Community Enhancements
Technological advancements are furthering Llama’s capabilities. Implementing speculative decoding reportedly improves token generation speed by around 1.5x, making the models even more efficient (a toy sketch of the idea follows below). Because Llama is open, the broader AI community is actively contributing to its optimization, with companies like Cerebras and Groq developing their own hardware-specific enhancements.
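For readers unfamiliar with speculative decoding, here is a toy sketch of the general idea, not Meta's implementation: a small, cheap "draft" model proposes a batch of tokens, and the large "target" model verifies them, keeping the prefix it agrees with. Accepted draft tokens come almost for free, which is where speedups of the reported magnitude come from. The two stand-in "models" below are hypothetical placeholders; production systems accept or reject draft tokens probabilistically rather than by exact match, and verify the whole batch in one pass.

```python
# Toy sketch of speculative decoding. The "models" are deterministic
# stand-ins chosen so they agree most of the time, like a real
# draft/target pair.

VOCAB = list("abcdefgh")

def draft_model(ctx):
    # Cheap, approximate next-token predictor.
    return VOCAB[(len(ctx) * 3) % len(VOCAB)]

def target_model(ctx):
    # Expensive, authoritative predictor; here it disagrees with the
    # draft whenever the position is a multiple of 5.
    return VOCAB[(len(ctx) * 3 + (len(ctx) % 5 == 0)) % len(VOCAB)]

def speculative_decode(prompt, new_tokens=16, k=4):
    out = list(prompt)
    while len(out) - len(prompt) < new_tokens:
        # 1. Draft model cheaply proposes k tokens.
        ctx, proposal = list(out), []
        for _ in range(k):
            tok = draft_model(ctx)
            proposal.append(tok)
            ctx.append(tok)
        # 2. Target model verifies the proposal (one batched pass in
        #    real systems), keeping the agreed prefix and substituting
        #    its own token at the first disagreement.
        for tok in proposal:
            if tok == target_model(out):
                out.append(tok)                # accepted nearly for free
            else:
                out.append(target_model(out))  # target's correction
                break
    return "".join(out)

print(speculative_decode("ab"))
```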
Llama Adds Powerful Visual AI Tools
The future of AI, according to Meta, is increasingly visual. The announcement of Locate 3D — a tool that identifies objects from text queries — and continued development of the Segment Anything Model (SAM) — a one-click tool for object segmentation, identification, and tracking — signal a shift toward AI that can truly “see” and understand the world around it. SAM 3, launching this summer with AWS as the initial host, promises even more advanced visual understanding. One highlighted application is the ability to automatically identify all the potholes in a city, showcasing the potential for AI to address real-world urban challenges.
Conversational AI in Action
Llama’s user-friendly design is already translating into meaningful real-world applications. Comments from Mark Zuckerberg and Ali Ghodsi of Databricks reinforced the shift toward smaller yet more powerful models, accelerated by rapid innovation. Even traditionally complex tools like Bloomberg terminals now respond to natural language queries, eliminating the need for specialized coding. The real-world impact is already evident: the Crisis Text Line uses Llama to assess risk levels in incoming messages — potentially saving lives.
Open Source Advantages and Future Challenges
Ali Ghodsi emphasized Databricks’ belief in open source, citing its ability to foster innovation, reduce costs, and drive adoption. He also highlighted the growing success of smaller, distilled models that increasingly rival their larger counterparts in performance. The anticipated release of “Little Llama” — an even more compact version than Scout — further underscores the momentum behind this trend.
Looking ahead, the focus shifts to safe and secure model distillation — ensuring smaller models don’t inherit vulnerabilities from their larger predecessors. Tools like Llama Guard are early steps in addressing these risks, but more work is needed to maintain quality and security across a growing range of models.
One emerging concern is objectivity: open models may recommend a competitor’s product if it’s genuinely the best fit, potentially leading to more honest and user-centric AI. Ultimately, while AI capabilities are advancing rapidly, the real competitive edge lies in data. Encouragingly, as models become more capable, the skills needed to work with them are becoming more accessible.
Wrapping Up: Open Source AI’s Rising Power
Meta’s Llama 2025 roadmap signals a decisive shift towards open source as the dominant paradigm in AI development. With faster, more multilingual models, a focus on accessibility across various hardware, and a commitment to user control, Meta is unleashing an AI tsunami that promises to democratize the technology and drive unprecedented innovation across industries. The emphasis on real-world applications, from healthcare to education to everyday interactions, underscores the transformative potential of this open and collaborative future of artificial intelligence.
-
The AI–5G Convergence Is Shaping the Future of Telecom
Over several decades as an industry analyst, I have followed the evolution of telecom, wireless, broadband, pay TV, and more. Today, a new transformation is taking shape. Artificial intelligence and 5G wireless — both part of the broader telecom ecosystem — are now converging to create intelligent, self-optimizing networks that are reshaping industries from telecommunications to health care.
Innovation has always driven change across every industry, but the pace is accelerating with the rise of technologies like artificial intelligence. Different forms of AI are proliferating, just as 5G and wireless networks have already transformed how we live and work. Now, these forces are beginning to converge — and the impact will be profound.
Understanding how AI and 5G are coming together and what this means for customers, workers, investors, and executives is critical as we move into a new era of accelerated change. Let’s pull the camera back for a longer-term, historical perspective on the forces rewriting everything around us — and why companies must move quickly to keep up.
AI, 5G, Wireless, and Telecom Are Merging
The blending of AI, 5G, wireless, and telecom is already reshaping industries — but what exactly does it mean, and what changes are occurring today and coming next? While different forms of artificial intelligence have existed for decades, the sector has advanced rapidly over the past two years, accelerated by the launch of OpenAI’s ChatGPT and other generative AI technologies.
It’s important to understand that AI is not just one type of technology. It is an umbrella term covering different categories and applications. This is where much of the confusion begins. Think of AI as a top-level catchphrase. Underneath it are different types, such as Narrow AI (task-specific intelligence), the still-theoretical General AI (broad human-like intelligence), and the future concept of Super AI (intelligence beyond human capability). Within Narrow AI, we find applications like ChatGPT and other forms of Generative AI, which are rapidly transforming business and consumer experiences.
AI’s Growing Impact Across Connectivity Sectors
Artificial intelligence is not a single technology but a broad term covering many types, each affecting different industries and companies in distinct ways. Understanding these differences is critical because AI is evolving rapidly and rewriting the rules across every sector it touches.
Some companies are acting quickly to integrate AI. These early movers gain a first-to-market advantage. Others are taking a wait-and-see approach, positioning themselves as fast followers — ready to act once the industry’s direction becomes clearer. Both strategies — leading and fast following — can succeed, but they represent very different paths to growth.
We’ve seen a similar dynamic before in the wireless industry. AT&T, for example, is often first to market with new technologies — sometimes right, sometimes wrong. Verizon tends to move in once the path is better defined, succeeding as a fast follower.
AI Is Driving New Growth Opportunities
Artificial intelligence gives companies a new opportunity to reverse a decline and drive growth. AI is already transforming network operations in wireless, telecom, and broadband. Traditionally, service disruptions triggered a scramble to locate, isolate, and fix the problem before customers noticed.
Today, AI can detect potential issues early, reroute traffic automatically, and keep networks running smoothly, often without noticeable disruption. This shift is more than a technical upgrade; it’s a competitive advantage. Companies integrating AI into their operations can deliver better service, improve customer satisfaction, and position themselves for renewed growth.
While we are still in the early stages of the AI revolution, the companies that move first will be better positioned to shape the next phase of industry transformation.
Leaders Emerging in the AI and 5G Race
The race to lead the next wave of AI and 5G innovation is already underway — and it’s crowded. Major telecom carriers like AT&T, T-Mobile, Verizon, and Comcast are racing to integrate AI and 5G into their networks. Infrastructure providers like Cisco, Nokia, Ericsson, and Qualcomm are building the technology backbone to support these shifts. Meanwhile, device makers such as Apple, Google, Samsung, and Netgear are embedding AI into products that will drive consumer adoption.
As the landscape evolves, we can expect a mix of strategies. Some companies will remain independent, while others will form partnerships or pursue mergers and acquisitions to strengthen their positions. However, leadership in the AI era won’t be limited to the largest companies. Even smaller players are finding ways to lead by moving quickly with AI-driven innovation.
For instance, RedChip recently introduced RedChat, an AI-based investment service designed to help investors evaluate and select small-cap stocks. This offers a fresh example of how AI is opening new growth paths across industries, not just in telecom and wireless.
In this rapidly shifting environment, size alone won’t determine success. The next generation of winners will be defined by agility, innovation, and strategic execution.
The Urgent Race to Master AI
The AI landscape is changing rapidly — and no one fully understands the entire picture. Most companies specialize in one aspect of AI, but few grasp the full scope of what’s unfolding. It’s a confusing but opportunity-filled moment for those who choose the right path forward.
Just two years ago, even senior executives at leading companies underestimated how quickly generative AI tools like ChatGPT would reshape the technology landscape — and some even blocked employees from using them.
We know far more today than we did two years ago — and far less than we will tomorrow. That’s the excitement and the risk AI brings: new opportunities, new dangers, and a dramatically faster pace of change. Executives who move quickly will define the future — those who hesitate risk being left behind.
-
The Algorithmic Tightrope and the Perils of Big Tech’s Dominance in AI
The rapid proliferation of artificial intelligence is both exhilarating and deeply concerning. The sheer power unleashed by these algorithms, largely concentrated within the coffers and control of a handful of tech behemoths — you know, the usual suspects, the ones who probably know what you had for breakfast — has ignited a global debate about the future of innovation, fairness, and even societal well-being.
The ongoing scrutiny and the looming specter of regulatory intervention are not merely bureaucratic hurdles; they are a necessary reckoning with the profound risks inherent in unchecked AI dominance. It’s like we’ve given a few toddlers the keys to a nuclear-powered Lego set, and now we’re all nervously watching to see what they build (or break).
Let’s talk about how AI algorithms are reshaping society, who controls them, and why the stakes are far higher than most people realize. Then, we’ll close with my Product of the Week: a new Wacom tablet I use to put my real signature on digital documents.
Bias Risks in AI: Intentional and Unintentional
The concentration of AI development and deployment within a few powerful tech companies creates a fertile ground for the insidious growth of both intentional and unintentional bias.
Intentional bias, though perhaps less overt (think of it as a subtle nudge in the algorithm’s elbow), can creep into the design and training of AI models when the creators’ perspectives or agendas, whether conscious or subconscious, shape the data and algorithms. This can manifest in subtle ways, prioritizing certain demographics or viewpoints while marginalizing others. For instance, if the teams building these models lack diversity, their lived experiences and perspectives might inadvertently lead to skewed outcomes. It’s like asking a room full of cats to design the perfect dog toy.
However, the more pervasive and perhaps more dangerous threat lies in unintentional bias. AI models learn from the data they are fed. If that data reflects existing societal inequalities (because humanity has a history of not being entirely fair), AI will inevitably perpetuate and even amplify those biases. Facial recognition software, notoriously less accurate for individuals with darker skin tones, is a stark example of how historical and societal biases embedded in training data can lead to discriminatory outcomes in real-world applications, from law enforcement to everyday convenience.
The sheer scale at which these dominant tech companies deploy their AI systems means these biases can have far-reaching and detrimental consequences, impacting access to opportunities, fair treatment, and even fundamental rights. It’s like teaching a parrot to repeat all the worst things you’ve ever heard.
Haste Makes Waste, Especially When Algorithms Are Involved
Adding to these concerns is the relentless pressure within these tech giants to prioritize productivity and rapid deployment over the crucial considerations of quality and accuracy. In the competitive race to be the first to market with the latest AI-powered feature or service (because who wants to be the Blockbuster of the AI era?), the rigorous testing, validation, and refinement processes essential to ensuring reliable and trustworthy AI are often sidelined.
The “move fast and break things” ethos, while perhaps acceptable in earlier stages of software development, carries significantly higher stakes when applied to AI systems that increasingly influence critical aspects of our lives. It’s like releasing a self-driving car that’s only been tested in a parking lot.
The consequences of prioritizing speed over accuracy can be severe. Imagine an AI-powered medical diagnosis tool that misdiagnoses patients due to insufficient training on diverse datasets or inadequate validation, leading to delayed or incorrect treatment. Or consider an AI-powered hiring algorithm that, optimized for speed and volume, systematically filters out qualified candidates from underrepresented groups based on biased training data.
The drive for increased productivity, fueled by the immense resources and market pressure these dominant tech companies face, risks creating an ecosystem of AI that is efficient but fundamentally flawed and potentially harmful. It’s like trying to win a race with a car that has square wheels.
Ethical Oversight Lags in AI Governance
Perhaps the most alarming aspect of the current AI landscape is the relative lack of robust ethical oversight within these powerful tech organizations. While many companies espouse ethical AI principles (usually found somewhere on page 78 of their terms of service), implementing and enforcing these principles often lag far behind the rapid advancements in the technology itself. The decision-making processes within these companies regarding the development, deployment, and governance of AI systems are often opaque, lacking independent scrutiny or clear mechanisms for accountability.
The absence of strong ethical frameworks and independent oversight creates a vacuum where potentially harmful AI applications can be developed and deployed without adequately considering their societal impact. The pressure to innovate and monetize AI can easily overshadow ethical considerations, allowing harmful outcomes — such as bias, privacy violations, or erosion of human autonomy — to go unaddressed until after damage is already done.
The sheer scale and influence of these dominant tech companies necessitate a far more rigorous and transparent approach to ethical AI governance. It’s like letting a toddler paint the Mona Lisa. The results are likely to be abstract and possibly involve glitter.
Building a Responsible AI Future
The risks inherent in the unchecked dominance of AI by a few large tech companies are too significant to ignore. A multi-pronged approach is needed to foster a more responsible and equitable AI ecosystem.
Stronger regulation is a critical starting point. Governments must move beyond aspirational guidelines and establish clear, enforceable rules that directly address the risks posed by AI — bias, opacity, and harm among them. High-stakes systems should face rigorous validation, and companies must be held accountable for the consequences of flawed or discriminatory algorithms. Much like the GDPR shaped data privacy norms, new legislation — call it AI-PRL, for AI Principles and Rights Legislation — should enshrine basic protections in algorithmic decision-making.
Open-source AI development is another key pillar. Encouraging community-driven innovation through platforms like AMD’s ROCm helps break the grip of closed ecosystems.
With the proper support, open AI projects can democratize development, enhance transparency, and broaden who gets a say in AI’s direction — like opening the recipe book to every cook in the kitchen.
Fostering independent ethical oversight is paramount. Creating ethics boards with the authority to audit and advise on AI deployment — particularly at dominant firms — can introduce meaningful checks. Drawing from diverse disciplines, these bodies would help companies uphold ethical standards rather than self-regulate in the shadows. Think of them as the conscience of the industry.
Mandating transparency and explainability in AI algorithms is essential for building trust and enabling accountability. Users and regulators alike need to understand how AI systems arrive at their decisions, particularly in high-stakes contexts. Requiring companies to provide clear and accessible explanations of their algorithms while protecting legitimate trade secrets can help identify and address potential biases and errors. It’s like asking the Magic 8 Ball to show its workings.
Finally, investing in AI literacy and public education is crucial for empowering individuals to understand AI’s capabilities and limitations, as well as its potential risks and benefits. A more informed public will be better equipped to engage in the societal debates surrounding AI and demand greater accountability from the companies that develop and deploy these powerful technologies.
Wrapping Up: Charting a Course for Responsible AI
The algorithmic tightrope we are currently walking demands careful and deliberate steps. The immense potential of AI must be harnessed responsibly, with a keen awareness of the risks inherent in unchecked power. By implementing robust regulations, fostering open-source alternatives, mandating ethical oversight and transparency, and investing in public education, we can strive towards an AI ecosystem that benefits all of society, rather than exacerbating existing inequalities and concentrating power in the hands of a few.
The future of AI, and indeed a significant part of our own future, depends on our collective willingness to navigate this algorithmic tightrope with wisdom, foresight, and a commitment to ethical innovation.
One by Wacom
(Image Credit: Wacom)
In a world increasingly dominated by digital documents, the simple act of a handwritten signature can feel like a quaint relic. Enter the One by Wacom small graphics tablet, a surprisingly affordable portal to bridge the analog and digital, especially for those of us (read: me) tired of fumbling with a mouse to create a digital scrawl that vaguely resembles our John Hancock.
Priced at a tempting $39.94 on Amazon (for the wired version), this little slate offers a far more natural way to “sign here” without resorting to pre-saved images that lack that personal touch.
For my primary use case — imprinting my actual, legible (virtually none of the time) signature onto digital contracts and forms — One by Wacom is a genuine game-changer. Gone are the jagged lines and shaky approximations of my name. Instead, the pressure-sensitive pen glides smoothly across the tablet’s surface, translating my familiar loops and flourishes onto the screen with surprising accuracy.
Interestingly, Wacom offers this petite powerhouse in wired and Bluetooth wireless flavors. The wireless version, while liberating from the tyranny of cables, will set you back a heftier $79.94.
While the freedom of a wireless setup is alluring, especially for cluttered desks (guilty!), the wired version arguably presents a better value proposition. No charging anxieties, no pairing woes — just plug it in and get signing. For a tool primarily used for quick tasks like signatures, the tethered existence feels like a small price to pay for perpetual power.
But the One by Wacom is more than a fancy digital autograph machine. This versatile gadget opens up a surprising array of creative possibilities. Aspiring digital artists can use it for basic sketching and drawing, enjoying a more intuitive experience than a mouse allows. Photo editors can leverage the pen’s pressure sensitivity for more precise retouching and masking. Even navigating your computer can become a slightly more artistic endeavor, though perhaps less efficient than a traditional mouse for everyday tasks. Think of it as adding a touch of flair to the mundane.
While the “small” in its product description is accurate, the active drawing area is perfectly adequate for signatures and basic creative work. It’s portable enough to tuck into a laptop bag, making it a handy tool for on-the-go professionals who need to sign documents remotely or sketch ideas wherever inspiration strikes.
The One by Wacom small tablet is a surprisingly capable and affordable tool. For anyone seeking a more natural way to sign digital documents, the wired version at under $40 is a no-brainer. While it might not replace a dedicated graphics tablet for serious artists, its versatility extends to basic drawing and photo editing, offering a fun and intuitive alternative to the humble mouse. It’s a small investment that can make a big difference in your digital workflow — finally allowing your signature to have the personality it deserves, even in the cold, hard world of cyberspace — making the One by Wacom tablet my Product of the Week.
-
SMBs Face Costly, Complex Barriers to Cybersecurity
Cybersecurity threats are escalating, yet many small and medium-sized businesses (SMBs) remain dangerously unprotected. Experts point to a combination of high costs, technical complexity, and a shortage of skilled cybersecurity professionals as key reasons why SMBs often delay or avoid adopting critical protections. Even businesses willing to invest face challenges navigating a crowded and confusing marketplace of tools and services. This lack of preparedness has made SMBs increasingly attractive targets for cybercriminals seeking easy access to valuable data and networks.
In September, the FBI, with intelligence support from cybersecurity researchers, disrupted a major Chinese botnet that infected more than 200,000 consumer devices worldwide — a reminder of how rapidly cyberthreats are evolving. Attacks against businesses of all sizes, including SMBs, have surged in recent years, increasing the stakes for those without strong defenses.
Several factors have contributed to the growing cybersecurity risks facing SMBs. The swift transition to remote work expanded the number of attack surfaces. Digital transformation efforts and large-scale cloud service adoption by smaller firms have also created larger security gaps. Increased reliance on third-party vendors and supply chains has further amplified these vulnerabilities, observed Jerry Chen, former Cisco engineer and co-founder of cybersecurity firm Firewalla.
“The uptick in attacks against SMBs is part of a broader trend where cybercriminals focus on relatively unprotected devices to build large-scale botnets capable of launching distributed denial-of-service [DDoS] attacks,” he told TechNewsWorld.
Cyberattacks Against SMBs Surge
Chen noted that small business networks are desirable targets for attackers because they are especially vulnerable. They often lack the budget to protect themselves with effective security tools and the resources to hire appropriate cybersecurity talent.
Protection tradeoffs are a necessary reality for SMBs, he noted. Smaller businesses can’t have the same cybersecurity measures as enterprises. Those systems are costly and cannot be operated without an IT team to manage them.
SMBs need cost-effective solutions that bring visibility to their networks. That means a solution that can always alert them to how many devices are on their network, what they are doing, and whether they are transferring any sensitive data. “Maintaining this kind of vigilance is the only way to know if there is any risk,” Chen asserted.
Steve Garrison, SVP of marketing at Stellar Cyber, agreed that hackers realize there are millions of companies in the SMB and mid-market small and medium-sized enterprise (SME) space. “There is less sophistication in terms of cyber awareness in the SMB space. So bad people are just starting to cash in on it,” he told TechNewsWorld.
It’s a numbers game, and it’s easier than ever for hackers to use your devices as a door to get your or the company’s personal information. Nobody provides a desktop phone anymore, Garrison noted.
SMBs Struggle With Outdated Cyber Tools
When businesses struggle with cyber resources, they often adopt outdated technology or basic security measures that leave them vulnerable. Most small business operators do not know how to implement best practices.
“This can lead to misconfigurations, weak passwords, unpatched software, or poor incident response planning, all of which increase the risk of a breach,” said Chen.
This situation plays directly into hackers’ hands. Cybercriminals are conscious of all these factors and target small businesses precisely because they are seen as easy targets.
“Small businesses may not believe they are at risk. The ‘it won’t happen to me’ mindset makes them less likely to adopt strong defenses or even implement basic security measures,” Chen offered. “For them, the best solution is making use of tools that are within their budget, effective, and also very simple to work with.”
For example, SMBs can use a firewall device that provides comprehensive visibility of the devices operating within their network and their continuously updated activity, and gives them the ability to manage and group those devices. Far from an outdated cyber tool, a firewall with those features can be a good stopgap measure.
Basic Cybersecurity Practices for SMBs
Chen added that budgetary deficiencies and absent IT workers do not mean companies are defenseless. A system scan tool can investigate the security of an SMB’s network for commonly used ports and vulnerabilities (a minimal sketch of the idea appears at the end of this section).
“An effective scan tool will detect issues such as services that do not have password protection or services that may have a default password or a common password that would be simple for a hacker to guess,” he said.
Many SMBs fail to take other basic security steps. On the network visibility side, they should set up controls or rules within their network, such as segmenting essential devices, like work laptops and security cameras, from less critical ones, like personal phones and guest devices. They should also consider isolating IoT devices, which are common points of entry for hackers, to reduce the risk of security breaches, or applying an extra layer of protection to guest devices. Other steps include blocking traffic from certain countries, geographic regions, and dangerous websites from accessing the network.
Two basic systems help with these tasks:
- Intrusion Detection Systems (IDS) monitor network traffic and alert administrators to suspicious activity, vulnerabilities, or policy violations, but they do not block threats directly.
- Intrusion Prevention Systems (IPS) detect and automatically block malicious traffic in real time, preventing harmful connections from reaching the network or causing damage.
Combining IDS and IPS strengthens network security by pairing real-time alerts with automatic threat blocking, minimizing risks without heavy hands-on management.
“On the device side, SMBs should always make sure that they have the latest firmware updates and fixes ready to implement and use devices that are actively supported by their manufacturers,” Chen urged.
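As a concrete illustration of the scan-tool idea Chen describes, here is a minimal, hypothetical Python sketch that checks a single host for commonly targeted open TCP ports. It is a starting point under stated assumptions, not a substitute for a real vulnerability scanner, which would also probe for default credentials and known vulnerabilities; run it only against hosts you are authorized to test.

```python
import socket

# Ports that attackers commonly probe; a real tool would use a much
# larger list and also fingerprint the services behind them.
COMMON_PORTS = {
    21: "FTP", 22: "SSH", 23: "Telnet", 80: "HTTP",
    443: "HTTPS", 445: "SMB", 3389: "RDP", 8080: "HTTP-alt",
}

def scan_host(host: str, timeout: float = 0.5) -> list[int]:
    """Report which common TCP ports accept connections on a host."""
    open_ports = []
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                print(f"  open: {port} ({service})")
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Hypothetical example address; point this at your own gateway or server.
    scan_host("192.168.1.1")
```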
Hidden Cybersecurity Features SMBs Can Leverage
Some cybersecurity features are built into platforms but may not be evident to buyers. Communications and networking platforms may already have built-in cyber protections, eliminating the need to add third-party software, noted Stellar Cyber’s Garrison. Its Open XDR platform is a good example. The company does not market to individual SMBs. It delivers cyber protection through platform providers.
XDR (Extended Detection and Response) systems are evolving technologies that unify threat prevention, detection, and response capabilities. According to Garrison, Stellar Cyber’s Open XDR emphasizes open functionality rather than relying solely on open-source components. It uses a combination of open hooks, webhooks, APIs, connectors, and parsers to link tools the customer has already purchased, ingest companies’ data, and fill in the rest of the attack surface for what they do not have.
“It’s a one-size-fits-all platform,” he added. “It is a different strategy than your larger [cyber] companies would offer. So it’s another way we go to market to make it more amenable to adopt the technology.”
Affordable Next-Gen Firewalls for SMBs
Firewalla offers a series of enhanced firewall and router devices designed to protect networks and devices at home and at work. One of the newer models, the Firewalla Gold Pro, can route and inspect network traffic at 10-gigabit speeds and supports Wi-Fi 7, making it well-suited for the fastest networks in small businesses and homes.
The Firewalla Gold Pro is a multi-gigabit firewall that is easy to install, simple to use, and requires no monthly fees. It is operated through a smartphone app with various features, such as a vulnerability scanner, an IDS/IPS system to detect and block unauthorized access attempts, and tools for managing and grouping connected devices across the network.
Chen said the Firewalla Gold Pro provides the same high-quality performance as enterprise-level security solutions but offers more value due to its cost-effectiveness and technical accessibility.
“It is ideal for not only SMBs looking to gain basic visibility, control, and protection of their networks but also for SMBs upgrading their infrastructures and adopting next-generation networks like Wi-Fi 7 for faster speeds,” he added.
The Firewalla Gold Pro device is powered by a quad-core Intel processor and 8 gigabytes of RAM, allowing it to scale with growing network demands. Its 10-gigabit ports can be configured for wide-area networks (WANs) or local area networks (LANs). Users can segment traffic using virtual local area networks (VLANs), separating devices into groups for better management and security, all operating at full 10-gigabit speeds. One port can connect to a 10-gigabit Wi-Fi 7 access point and another to a high-speed switch.
-
Google AI Overviews Hurting Click-Through Rates: Studies
By John P. Mello Jr.
April 23, 2025 5:00 AM PT
Studies show Google’s AI Overviews reduce clicks to websites by answering queries directly, raising concerns over traffic and visibility.
Two studies released last week indicate that Google’s AI Overviews are having a negative impact on click-through rates from online searches, which could ultimately reduce traffic to original sources and affect the quality of content on the internet.
An analysis of 300,000 keywords by Xibeijia Guan, a data scientist at SEO tool provider Ahrefs, found that the presence of an AI Overview in search results correlated to a 34.5% lower average click-through rate (CTR) for the top-ranking page compared to keywords without an Overview. (A short note at the end of this article illustrates what a relative decline of that size means.)
“This isn’t surprising. I have seen anecdata suggesting that some websites have seen clicks reduce by 20% to 40% since the rollout of AI overviews,” Ryan Law wrote in an Ahrefs blog on April 17.
He explained that AI Overviews function in a similar way to Featured Snippets. They try to resolve the searcher’s query directly, which likely contributes to more zero-click searches.
“And although AI Overviews often contain citation links, there can be many of these links cited, making it less likely for any single link to earn the lion’s share of clicks,” he wrote.
“Assuming AI Overviews stay in this current form, this is also likely the highest the CTR will be. As the novelty wears off and the law of shitty click-throughs kicks in, I would expect to see clicks reduce further,” he noted.
Brand Queries See CTR Lift
Meanwhile, in a study of 700,000 keywords, performance agency Amsive found keywords that triggered an Overview had an average click-through decline of 15.49%.
The study also noted that while only a small percentage (4.79%) of branded keywords generated an Overview, those that did had an average click-through rate increase of 18.68%. By contrast, non-branded keywords that generated an Overview had an average click-through rate drop of 19.98%.
“It’s no surprise that branded keywords are getting more clicks than non-branded keywords,” said Greg Sterling, co-founder of Near Media, a market research firm in San Francisco.
“The vast majority of AI Overviews trigger when the user does an ‘informational’ search — people looking for general information,” he told TechNewsWorld. “When somebody uses a branded keyword, there’s a higher degree of intent and thus a higher CTR. There’s nothing new in this.”
Ben James, founder of 404, Bittensor Subnet 17, an online 3D content creation company, explained that non-branded keywords typically drive discovery and surface diverse viewpoints.
“If AI Overviews are disproportionately reducing CTR for those terms, it reinforces concerns that Overviews consolidate traffic around known brands or Google’s own properties — shrinking the opportunity space for independent publishers and startups,” he told TechNewsWorld.
However, the researchers’ findings seemed counterintuitive to JD Harriman, a partner at the Foundation Law Group in Burbank, Calif.
“It is not clear what searchers are doing when they see an AI Overview of a branded word,” he told TechNewsWorld. “It is likely that the searcher wanted to get to the branded site in the first place, and goes ahead and clicks through, possibly ignoring the content of the Overview and just getting where they wanted to be anyway.”
Fewer but Higher Quality Clicks?
“There have been several studies done by different companies that show the same thing, so this was not a surprise to me at all,” said Chris Ferris, senior vice president of digital strategy at Pierpont Communications, a public relations agency in Houston.
“When AI overviews are present on search engine results pages, the click-through rate for the organic stuff falls anywhere from 35% to 70%,” he told TechNewsWorld.
“This makes sense because Google has been increasingly stuffing stuff at the top of their results pages, which is pushing down the organic results, which means fewer people see them, which is depressing the click-through rate,” he explained.
While click-through rates might decline on pages with Overviews, Google contends that the quality of the clicks on the page improves.
“We see the clicks are of higher quality because they’re not clicking on a web page, realizing it wasn’t what they want and immediately bailing. So, they spend more time on those sites,” Head of Google Search Elizabeth Reid told the Financial Times in an interview published April 14.
“The studies don’t directly disprove Google’s claim about ‘high-quality clicks,’ but they show that AI Overviews likely reduce visibility and traffic for many sites, especially on non-branded, informational queries,” said Danny Goodwin, editorial director of Search Engine Land & SMX, a digital marketing and advertising technology publication.
Goodwin pointed out some deficiencies in the studies. “Neither study examined whether the pages cited in AI Overviews got more clicks than they would have in a traditional search listing,” he told TechNewsWorld.
He added that Google’s concept of “high-quality clicks” is vague and unverifiable outside of Google itself.
A Google spokesperson was not immediately available to comment on this story.
Antitrust Red Flags
For some tech watchers, the studies’ results reinforce the notion that Overviews are harmful to quality content on the web.
“By answering specific queries within the search interface, AI Overviews reduce the incentive to click through to original sources, diminishing traffic and weakening the feedback loop that sustains quality content creation,” said Mark N. Vena, president and principal analyst at SmartTech Research in Las Vegas.
“If original content creators lose traffic, and I do fear this, monetization becomes harder, which may lower the quantity and quality of web content,” he told TechNewsWorld. “Long-term, this could degrade the richness of information available on the open web and increase dependence on platform-controlled summaries.”
Rob Enderle, president and principal analyst with the Enderle Group, an advisory services firm in Bend, Ore., added that it’s intuitively obvious why Overviews will reduce the quality of web content.
“Few people who buy CliffsNotes read the source material because the notes fulfill the core needs of getting the gist of the content,” he told TechNewsWorld. “In reports, most only read the executive summary, as well.
If you don’t need to review the full detail, why bother unless you have a unique need for a deeper understanding?”
Study findings on the impact of Overviews on website traffic could play a role in antitrust actions against Google.
“Google has been under scrutiny for a while now for self-preferencing its own products and services in search,” Goodwin said. “AI Overviews are another big and bold example of Google preferencing itself, many times at the expense of content creators.”
“I, and many others, would love to see actual data about how much traffic Google is actually taking for itself via AI Overviews compared to the open web,” he noted.
Sterling agreed. “I do think that AI Overviews will have an impact on antitrust considerations,” he said. “AI Overviews can be seen as a form of Google self-preferencing, which will create problems for Google in Europe under the Digital Markets Act. In the U.S., the decline of clicks will reinforce perceptions of Google as a monopolist hoarding traffic.”
AI Overviews Reflect Market Pressure
However, Jennifer Huddleston, a technology policy research fellow at the Cato Institute, a Washington, D.C. think tank, argued that companies, including Google, continue to innovate to respond to changing consumer expectations and demands, including better ways to display search content such as AI Overviews.
“The use of AI Overviews illustrates how AI is changing the nature of search, and consumers may be increasingly expecting more generative AI-type results to their queries,” she told TechNewsWorld.
“While the court rejected the emergence of AI as changing the market in this case,” she noted, “the continued improvements of the product indicate how the company must respond to pressures from other market leaders and consumers, a behavior that would not be necessary in a true monopoly situation.”
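A note on reading the CTR figures above: the declines reported by Ahrefs and Amsive are relative changes, not percentage-point drops. A two-line sketch with made-up numbers (the study excerpts above do not give baseline CTRs) makes the distinction concrete:

```python
# Hypothetical baseline: suppose the top organic result normally earns a 20% CTR.
# A 34.5% *relative* decline is then a fall of about 6.9 percentage points.
baseline_ctr = 0.20
with_overview = baseline_ctr * (1 - 0.345)
print(f"{with_overview:.1%}")  # 13.1% -- i.e., 20% minus 6.9 points
```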