


ECT News Network delivers e-commerce and technology news, reviews, and analyses on The E-Commerce Times, TechNewsWorld, CRM Buyer, and LinuxInsider. Stay informed by subscribing to our newsletters: https://www.ectnews.com/about/newsletters
IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029
By John P. Mello Jr.
June 11, 2025 5:00 AM PT
IBM unveiled its plan to build IBM Quantum Starling, shown in this rendering. Starling is expected to be the first large-scale, fault-tolerant quantum system.
IBM revealed Tuesday its roadmap for bringing a large-scale, fault-tolerant quantum computer, IBM Quantum Starling, online by 2029, which is significantly earlier than many technologists thought possible.
The company predicts that when its new Starling computer is up and running, it will be capable of performing 20,000 times more operations than today’s quantum computers — a computational state so vast it would require the memory of more than a quindecillion of the world’s most powerful supercomputers to represent.
“IBM is charting the next frontier in quantum computing,” Big Blue CEO Arvind Krishna said in a statement. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.”
IBM’s plan to deliver a fault-tolerant quantum system by 2029 is ambitious but not implausible, especially given the rapid pace of its quantum roadmap and past milestones, observed Ensar Seker, CISO at SOCRadar, a threat intelligence company in Newark, Del.
“They’ve consistently met or exceeded their qubit scaling goals, and their emphasis on modularity and error correction indicates they’re tackling the right challenges,” he told TechNewsWorld. “However, moving from thousands to millions of physical qubits with sufficient fidelity remains a steep climb.”
A qubit is the fundamental unit of information in quantum computing, capable of representing a zero, a one, or both simultaneously due to quantum superposition. In practice, fault-tolerant quantum computers use clusters of physical qubits working together to form a logical qubit — a more stable unit designed to store quantum information and correct errors in real time.
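The payoff of clustering physical qubits can be sketched with a classical analogy: a 3-bit repetition code, in which majority voting corrects any single flipped bit. Real quantum error-correcting codes are far more involved (errors must be corrected without directly measuring the stored state), but the principle is the same: the logical error rate falls well below the physical one. The error probability used here is an arbitrary illustrative value.

```python
import random

def logical_error_rate(p, trials=100_000, seed=0):
    """Estimate the error rate of a 3-bit repetition 'logical bit'.

    Each of the 3 physical bits flips independently with probability p;
    majority voting recovers the stored value unless 2 or more bits flip.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(3))
        if flips >= 2:  # majority vote fails
            failures += 1
    return failures / trials

p = 0.01  # illustrative physical error rate
est = logical_error_rate(p)
exact = 3 * p**2 * (1 - p) + p**3  # probability of 2 or 3 flips
print(f"physical: {p}, logical ~ {est:.5f} (analytic {exact:.5f})")
```

With a 1% physical error rate, the simulated logical error rate drops to roughly 0.03% — a crude illustration of why grouping unreliable components yields a more reliable logical unit.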
Realistic Roadmap
Luke Yang, an equity analyst with Morningstar Research Services in Chicago, believes IBM’s roadmap is realistic. “The exact scale and error correction performance might still change between now and 2029, but overall, the goal is reasonable,” he told TechNewsWorld.
“Given its reliability and professionalism, IBM’s bold claim should be taken seriously,” said Enrique Solano, co-CEO and co-founder of Kipu Quantum, a quantum algorithm company with offices in Berlin and Karlsruhe, Germany.
“Of course, it may also fail, especially when considering the unpredictability of hardware complexities involved,” he told TechNewsWorld, “but companies like IBM exist for such challenges, and we should all be positively impressed by its current achievements and promised technological roadmap.”
Tim Hollebeek, vice president of industry standards at DigiCert, a global digital security company, added: “IBM is a leader in this area, and not normally a company that hypes their news. This is a fast-moving industry, and success is certainly possible.”
“IBM is attempting to do something that no one has ever done before and will almost certainly run into challenges,” he told TechNewsWorld, “but at this point, it is largely an engineering scaling exercise, not a research project.”
“IBM has demonstrated consistent progress, has committed billions over five years to quantum computing, and the timeline is within the realm of technical feasibility,” noted John Young, COO of Quantum eMotion, a developer of quantum random number generator technology, in Saint-Laurent, Quebec, Canada.
“That said,” he told TechNewsWorld, “fault-tolerant in a practical, industrial sense is a very high bar.”
Solving the Quantum Error Correction Puzzle
To make a quantum computer fault-tolerant, errors need to be corrected so large workloads can be run without faults. In a quantum computer, errors are reduced by clustering physical qubits to form logical qubits, which have lower error rates than the underlying physical qubits.
“Error correction is a challenge,” Young said. “Logical qubits require thousands of physical qubits to function reliably. That’s a massive scaling issue.”
IBM explained in its announcement that creating increasing numbers of logical qubits capable of executing quantum circuits with as few physical qubits as possible is critical to quantum computing at scale. Until now, a clear path to building such a fault-tolerant system without unrealistic engineering overhead had not been published.
Alternative and previous gold-standard, error-correcting codes present fundamental engineering challenges, IBM continued. To scale, they would require an unfeasible number of physical qubits to create enough logical qubits to perform complex operations — necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be implemented beyond small-scale experiments and devices.
In two research papers released with its roadmap, IBM detailed how it will overcome the challenges of building the large-scale, fault-tolerant architecture needed for a quantum computer.
One paper outlines the use of quantum low-density parity check codes to reduce physical qubit overhead. The other describes methods for decoding errors in real time using conventional computing.
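The decoding idea rests on parity checks: measurements that reveal where an error occurred without reading out the encoded data, with the decoding itself done on conventional computers. As a rough classical stand-in (not IBM's actual qLDPC construction, which is quantum and far larger), the classic [7,4] Hamming code shows how a syndrome computed from a parity-check matrix pinpoints a single flipped bit:

```python
import numpy as np

# Parity-check matrix of the classical [7,4] Hamming code.
# Each row is one parity check over the 7 bits.
H = np.array([
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
])

def decode_single_error(received):
    """Return the corrected word, assuming at most one bit flipped."""
    syndrome = H @ received % 2
    if not syndrome.any():
        return received.copy()          # all checks pass: no error seen
    # The syndrome equals the column of H at the flipped position.
    for col in range(H.shape[1]):
        if np.array_equal(H[:, col], syndrome):
            corrected = received.copy()
            corrected[col] ^= 1
            return corrected
    return received.copy()              # uncorrectable under this model

codeword = np.array([1, 0, 1, 1, 0, 1, 0])
assert not (H @ codeword % 2).any()     # a valid codeword
noisy = codeword.copy()
noisy[4] ^= 1                           # flip one bit in transit
print(decode_single_error(noisy))       # recovers the original codeword
```

The real-time constraint IBM describes is the hard part at scale: the classical decoder must keep pace with the quantum processor, finishing each round of syndrome decoding before the next round of measurements arrives.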
According to IBM, a practical fault-tolerant quantum architecture must:
Suppress enough errors for useful algorithms to succeed
Prepare and measure logical qubits during computation
Apply universal instructions to logical qubits
Decode measurements from logical qubits in real time and guide subsequent operations
Scale modularly across hundreds or thousands of logical qubits
Be efficient enough to run meaningful algorithms using realistic energy and infrastructure resources
Aside from the technological challenges that quantum computer makers are facing, there may also be some market challenges. “Locating suitable use cases for quantum computers could be the biggest challenge,” Morningstar’s Yang maintained.
“Only certain computing workloads, such as random circuit sampling, can fully unleash the computing power of quantum computers and show their advantage over the traditional supercomputers we have now,” he said. “However, workloads like RCS are not very commercially useful, and we believe commercial relevance is one of the key factors that determine the total market size for quantum computers.”
Q-Day Approaching Faster Than Expected
For years now, organizations have been told they need to prepare for “Q-Day” — the day a quantum computer will be able to crack all the encryption they use to keep their data secure. This IBM announcement suggests the window for action to protect data may be closing faster than many anticipated.
“This absolutely adds urgency and credibility to the security expert guidance on post-quantum encryption being factored into their planning now,” said Dave Krauthamer, field CTO of QuSecure, maker of quantum-safe security solutions, in San Mateo, Calif.
“IBM’s move to create a large-scale fault-tolerant quantum computer by 2029 is indicative of the timeline collapsing,” he told TechNewsWorld. “A fault-tolerant quantum computer of this magnitude could be well on the path to crack asymmetric ciphers sooner than anyone thinks.”
“Security leaders need to take everything connected to post-quantum encryption as a serious measure and work it into their security plans now — not later,” he said.
Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., pointed out that IBM is just the latest in a surge of quantum companies announcing that major computational breakthroughs are only a few years away.
“It leads to the question of whether the U.S. government’s original PQC preparation date of 2030 is still a safe date,” he told TechNewsWorld.
“It’s starting to feel a lot more risky for any company to wait until 2030 to be prepared against quantum attacks. It also flies in the face of the latest cybersecurity EO that relaxed PQC preparation rules as compared to Biden’s last EO PQC standard order, which told U.S. agencies to transition to PQC ASAP.”
“Most US companies are doing zero to prepare for Q-Day attacks,” he declared. “The latest executive order seems to tell U.S. agencies — and indirectly, all U.S. businesses — that they have more time to prepare. It’s going to cause even more agencies and businesses to be less prepared during a time when it seems multiple quantum computing companies are making significant progress.”
“It definitely feels that something is going to give soon,” he said, “and if I were a betting man, and I am, I would bet that most U.S. companies are going to be unprepared for Q-Day on the day Q-Day becomes a reality.”
John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News.
From Networks to Business Models, AI Is Rewiring Telecom
Artificial intelligence is already rewriting the rules of wireless and telecom — powering predictive maintenance, streamlining network operations, and enabling more innovative services.
As AI scales, the disruption will be faster, deeper, and harder to reverse than any prior shift in the industry.
Compared to the sweeping changes AI is set to unleash, past telecom innovations look incremental.
AI is redefining how networks operate, services are delivered, and data is secured — across every device and digital touchpoint.
AI Is Reshaping Wireless Networks Already
Artificial intelligence is already transforming wireless through smarter private networks, fixed wireless access, and intelligent automation across the stack.
AI detects and resolves network issues before they impact service, improving uptime and customer satisfaction. It’s also opening the door to entirely new revenue streams and business models.
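The kind of predictive monitoring described here can be illustrated with a toy anomaly detector that flags a latency sample deviating sharply from its recent baseline. Production network-assurance systems use far richer models; the z-score threshold and the sample values below are invented for illustration.

```python
from statistics import mean, stdev

def flag_anomalies(latencies_ms, window=10, threshold=3.0):
    """Flag samples that deviate strongly from the recent baseline.

    A simple z-score detector: learn 'normal' from a sliding window,
    then alert on any sample more than `threshold` standard deviations
    away -- before users notice degraded service.
    """
    alerts = []
    for i in range(window, len(latencies_ms)):
        baseline = latencies_ms[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(latencies_ms[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# Steady ~20 ms link latency with one degradation spike at index 15.
samples = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 20.1, 19.7,
           20.0, 20.2, 19.9, 20.1, 20.0, 19.8, 20.2, 55.0]
print(flag_anomalies(samples))  # -> [15]
```

Only the final spike is flagged; ordinary jitter stays below the threshold, which is the property that makes such detectors usable for automated remediation.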
Each wireless generation brought new capabilities. AI, however, marks a more profound shift — networks that think, respond, and evolve in real time.
AI Acceleration Will Outpace Past Tech Shifts
Many may underestimate the speed and magnitude of AI-driven change.
The shift from traditional voice and data systems to AI-driven network intelligence is already underway.
Although predictions abound, the true scope remains unclear.
It’s tempting to assume we understand AI’s trajectory, but history suggests otherwise.
Today, AI is already automating maintenance and optimizing performance without user disruption. The technologies we’ll rely on in the near future may still be on the drawing board.
Few predicted that smartphones would emerge from analog beginnings—a reminder of how quickly foundational technologies can be reimagined.
History shows that disruptive technologies rarely follow predictable paths — and AI is no exception. It’s already upending business models across industries.
Technological shifts bring both new opportunities and complex trade-offs.
AI Disruption Will Move Faster Than Ever
The same cycle of reinvention is happening now — but with AI, it’s moving at unprecedented speed.
Despite all the discussion, many still treat AI as a future concern — yet the shift is already well underway.
As with every major technological leap, there will be gains and losses. The AI transition brings clear trade-offs: efficiency and innovation on one side, job displacement and privacy erosion on the other.
Unlike past tech waves that unfolded over decades, the AI shift will reshape industries in just a few years, and the pace of change will only accelerate.
AI Will Reshape All Sectors and Companies
This shift will unfold faster than most organizations or individuals are prepared to handle.
Today’s industries will likely look very different tomorrow. Entirely new sectors will emerge as legacy models become obsolete — redefining market leadership across industries.
Telecom’s past holds a clear warning: market dominance can vanish quickly when companies ignore disruption.
After the 1984 breakup of the Bell System, the Baby Bells eventually moved into long-distance service, while AT&T remained barred from selling local access, undermining its advantage.
As the market shifted and competitors gained ground, AT&T lost its dominance and became vulnerable enough that SBC, a former regional Bell, acquired it and took on its name.
It’s a case study of how incumbents fall when they fail to adapt — precisely the kind of pressure AI is now exerting across industries.
SBC’s acquisition of AT&T flipped the power dynamic — proof that size doesn’t protect against disruption.
The once-crowded telecom field has consolidated into just a few dominant players — each facing new threats from AI-native challengers.
Legacy telecom models are being steadily displaced by faster, more flexible wireless, broadband, and streaming alternatives.
No Industry Is Immune From AI Disruption
AI will accelerate the next wave of industrial evolution — bringing innovations and consequences we’re only beginning to grasp.
New winners will emerge as past leaders struggle to hang on — a shift that will also reshape the investment landscape. Startups leveraging AI will likely redefine leadership in sectors where incumbents have grown complacent.
Nvidia’s rise is part of a broader trend: the next market leaders will emerge wherever AI creates a clear competitive advantage — whether in chips, code, or entirely new markets.
The AI-driven future is arriving faster than most organizations are ready for. Adapting to this accelerating wave of change is no longer optional — it’s essential. Companies that act decisively today will define the winners of tomorrow.
Drones Set To Deliver Benefits for Labor-Intensive Industries: Forrester
By John P. Mello Jr.
June 3, 2025 5:00 AM PT
Aerial drones are rapidly assuming a key role in the physical automation of business operations, according to a new report by Forrester Research.
Aerial drones power airborne physical automation by addressing operational challenges in labor-intensive industries, delivering efficiency, intelligence, and experience, explained the report written by Principal Analyst Charlie Dai with Frederic Giron, Merritt Maxim, Arjun Kalra, and Bill Nagel.
Some industries, like the public sector, are already reaping benefits, it continued. The report predicted that drones will deliver benefits within the next two years as technologies and regulations mature.
It noted that drones can help organizations grapple with operational challenges that exacerbate risks and inefficiencies, such as overreliance on outdated, manual processes, fragmented data collection, geographic barriers, and insufficient infrastructure.
Overreliance on outdated manual processes worsens inefficiencies in resource allocation and amplifies safety risks in dangerous work environments, increasing operational costs and liability, the report maintained.
“Drones can do things more safely, at least from the standpoint of human risk, than humans,” said Rob Enderle, president and principal analyst at the Enderle Group, an advisory services firm, in Bend, Ore.
“They can enter dangerous, exposed, very high-risk and even toxic environments without putting their operators at risk,” he told TechNewsWorld. “They can be made very small to go into areas where people can’t physically go. And a single operator can operate several AI-driven drones operating autonomously, keeping staffing levels down.”
Sensor Magic
“The magic of the drone is really in the sensor, while the drone itself is just the vehicle that holds the sensor wherever it needs to be,” explained DaCoda Bartels, senior vice president of operations with FlyGuys, a drone services provider, in Lafayette, La.
“In doing so, it removes all human risk exposure because the pilot is somewhere safe on the ground, sending this sensor, which is, in most cases, more high-resolution than even a human eye,” he told TechNewsWorld. “In essence, it’s a better data collection tool than if you used 100 people. Instead, you deploy one drone around in all these different areas, which is safer, faster, and higher resolution.”
Akash Kadam, a mechanical engineer with Caterpillar, maker of construction and mining equipment, based in Decatur, Ill., explained that drones have evolved into highly functional tools that directly respond to key inefficiencies and threats to labor-intensive industries. “Within the manufacturing and supply chains, drones are central to optimizing resource allocation and reducing the exposure of humans to high-risk duties,” he told TechNewsWorld.
“Drones can be used in factory environments to automatically inspect overhead cranes, rooftops, and tight spaces — spaces previously requiring scaffolding or shutdowns, which carry both safety and cost risks,” he said. “A reduction in downtime, along with no requirement for manual intervention in hazardous areas, is provided through this aerial inspection by drones.”
“In terms of resource usage, drones mounted with thermal cameras and tools for acquiring real-time data can spot bottlenecks, equipment failure, or energy leakage on the production floor,” he continued. “This can facilitate predictive maintenance processes and usage of energy, which are an integral part of lean manufacturing principles.”
Kadam added that drones provide accurate field mapping and multispectral imaging in agriculture, enabling the monitoring of crop health, soil quality, and irrigation distribution. “Besides the reduction in manual scouting, it ensures more effective input management, which leads to more yield while saving resources,” he observed.
Better Data Collection
The Forrester report also noted that drones can address problems with fragmented data collection and outdated monitoring systems.
“Drones use cameras and sensors to get clear, up-to-date info,” said Daniel Kagan, quality manager at Rogers-O’Brien Construction, a general contractor in Dallas. “Some drones even make 3D maps or heat maps,” he told TechNewsWorld. “This helps farmers see where crops need more water, stores check roof damage after a storm, and builders track progress and find delays.”
“The drone collects all this data in one flight, and it’s ready to view in minutes and not days,” he added.
Dean Bezlov, global head of business development at MYX Robotics, a visualization technology company headquartered in Sofia, Bulgaria, added that drones are the most cost and time-efficient way to collect large amounts of visual data. “We are talking about two to three images per second with precision and speed unmatched by human-held cameras,” he told TechNewsWorld.
“As such, drones are an excellent tool for ‘digital twins’ — timestamps of the real world with high accuracy which is useful in industries with physical assets such as roads, rail, oil and gas, telecom, renewables and agriculture, where the drone provides a far superior way of looking at the assets as a whole,” he said.
Drone Adoption Faces Regulatory Hurdles
While drones have great potential for many organizations, they will need to overcome some challenges and barriers. For example, Forrester pointed out that insurers deploy drones to evaluate asset risks but face evolving privacy regulations and gaps in data standardization.
Media firms use drones to capture cost-effective, cinematic aerial footage but face strict regulations, it added, while urban use cases like drone taxis and cargo transport remain experimental due to certification delays and airspace management complexities.
“Regulatory frameworks, particularly in the U.S., remain complex, bureaucratic, and fragmented,” said Mark N. Vena, president and principal analyst with SmartTech Research in Las Vegas. “The FAA’s rules around drone operations — especially for flying beyond visual line of sight — are evolving but still limit many high-value use cases.”
“Privacy concerns also persist, especially in urban areas and sectors handling sensitive data,” he told TechNewsWorld.
“For almost 20 years, we’ve been able to fly drones from a shipping container in one country, in a whole other country, halfway across the world,” said FlyGuys’ Bartels. “What’s limiting the technology from being adopted on a large scale is regulatory hurdles over everything.”
Enderle added that innovation could also be a hangup for organizations. “This technology is advancing very quickly, making buying something that isn’t instantly obsolete very difficult,” he said. “In addition, there are a lot of drone choices, raising the risk you’ll pick one that isn’t ideal for your use case.”
“We are still at the beginning of this trend,” he noted. “Robotic autonomous drones are starting to come to market, which will dramatically reduce the need for drone pilots. I expect that within 10 years, we’ll have drones doing many, if not most, of the dangerous jobs currently being done by humans, as robotics, in general, will displace much of the labor force.”
John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.
IT Pros ‘Extremely Worried’ About Shadow AI: Report
By John P. Mello Jr.
June 4, 2025 5:00 AM PT
Shadow AI — the use of AI tools under the radar of IT departments — has information technology directors and executives worried, according to a report released Tuesday.
The report, based on a survey of 200 IT directors and executives at U.S. enterprise organizations of 1,000 employees or more, found nearly half the IT pros were “extremely worried” about shadow AI, and almost all of them were concerned about it from a privacy and security viewpoint.
“As our survey found, shadow AI is resulting in palpable, concerning outcomes, with nearly 80% of IT leaders saying it has resulted in negative incidents such as sensitive data leakage to Gen AI tools, false or inaccurate results, and legal risks of using copyrighted information,” said Krishna Subramanian, co-founder of Campbell, Calif.-based Komprise, the unstructured data management company that produced the report.
“Alarmingly, 13% say that shadow AI has caused financial or reputational harm to their organizations,” she told TechNewsWorld.
Subramanian added that shadow AI poses a much greater problem than shadow IT, which primarily focuses on departmental power users purchasing cloud instances or SaaS tools without obtaining IT approval.
“Now we’ve got an unlimited number of employees using tools like ChatGPT or Claude AI to get work done, but not understanding the potential risk they are putting their organizations at by inadvertently submitting company secrets or customer data into the chat prompt,” she explained.
“The data risk is large and growing in still unforeseen ways because of the pace of AI development and adoption and the fact that there is a lot we don’t know about how AI works,” she continued. “It is becoming more humanistic all the time and capable of making decisions independently.”
Shadow AI Introduces Security Blind Spots
Shadow AI is the next step after shadow IT and is a growing risk, noted James McQuiggan, security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.
“Users use AI tools for content, images, or applications and to process sensitive data or company information without proper security checks,” he told TechNewsWorld. “Most organizations will have privacy, compliance, and data protection policies, and shadow AI introduces blind spots in the organization’s data loss prevention.”
“The biggest risk with shadow AI is that the AI application has not passed through a security analysis as approved AI tools may have been,” explained Melissa Ruzzi, director of AI at AppOmni, a SaaS security management software company, in San Mateo, Calif.
“Some AI applications may be training models using your data, may not adhere to relevant regulations that your company is required to follow, and may not even have the data storage security level you deem necessary to keep your data from being exposed,” she told TechNewsWorld. “Those risks are blind spots of potential security vulnerabilities in shadow AI.”
Krishna Vishnubhotla, vice president of product strategy at Zimperium, a mobile security company based in Dallas, noted that shadow AI extends beyond unapproved applications and involves embedded AI components that can process and disseminate sensitive data in unpredictable ways.
“Unlike traditional shadow IT, which may be limited to unauthorized software or hardware, shadow AI can run on employee mobile devices outside the organization’s perimeter and control,” he told TechNewsWorld. “This creates new security and compliance risks that are harder to track and mitigate.”
Vishnubhotla added that the financial impact of shadow AI varies, but unauthorized AI tools can lead to significant regulatory fines, data breaches, and loss of intellectual property. “Depending on the scale of the agency and the sensitivity of the data exposed, the costs could range from millions to potentially billions in damages due to compliance violations, remediation efforts, and reputational harm,” he said.
“Federal agencies handling vast amounts of sensitive or classified information, financial institutions, and health care organizations are particularly vulnerable,” he said. “These sectors collect and analyze vast amounts of high-value data, making AI tools attractive. But without proper vetting, these tools could be easily exploited.”
Shadow AI Everywhere and Easy To Use
Nicole Carignan, SVP for security and AI strategy at Darktrace, a global cybersecurity AI company, predicts an explosion of tools that utilize AI and generative AI within enterprises and on devices used by employees.
“In addition to managing AI tools that are built in-house, security teams will see a surge in the volume of existing tools that have new AI features and capabilities embedded, as well as a rise in shadow AI,” she told TechNewsWorld. “If the surge remains unchecked, this raises serious questions and concerns about data loss prevention, as well as compliance concerns as new regulations start to take effect.”
“That will drive an increasing need for AI asset discovery — the ability for companies to identify and track the use of AI systems throughout the enterprise,” she said. “It is imperative that CIOs and CISOs dig deep into new AI security solutions, asking comprehensive questions about data access and visibility.”
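The AI asset discovery Carignan describes can be illustrated with a toy sketch: scanning outbound web-proxy logs for requests to known Gen AI services. The domain list, log format, and function name below are illustrative assumptions, not part of any product mentioned in the article; a real deployment would rely on a curated, regularly updated service inventory.

```python
# Hypothetical inventory of Gen AI service domains (illustrative only).
GENAI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def find_shadow_ai(log_lines):
    """Return (user, domain) pairs for requests to known Gen AI services.

    Assumed log format: "<timestamp> <user> <domain> <path>".
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in GENAI_DOMAINS:
            hits.append((parts[1], parts[2]))
    return hits

logs = [
    "2025-06-04T09:12:01 alice claude.ai /chat",
    "2025-06-04T09:12:05 bob intranet.corp /wiki",
    "2025-06-04T09:13:22 carol chat.openai.com /c/abc",
]
print(find_shadow_ai(logs))  # [('alice', 'claude.ai'), ('carol', 'chat.openai.com')]
```

Even a crude scan like this gives security teams a first inventory of who is using which tools, which is the prerequisite for the risk assessment and governance steps discussed below.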
Shadow AI has become so rampant because it is everywhere and easy to access through free tools, maintained Komprise’s Subramanian. “All you need is a web browser,” she said. “Enterprise users can inadvertently share company code snippets or corporate data when using these Gen AI tools, which could create data leakage.”
“These tools are growing and changing exponentially,” she continued. “It’s really hard to keep up. As the IT leader, how do you track this and determine the risk? Managers might be looking the other way because their teams are getting more done. You may need fewer contractors and full-time employees. But I think the risk of the tools is not well understood.”
“The low, or in some cases non-existent, learning curve associated with using Gen AI services has led to rapid adoption, regardless of prior experience with these services,” added Satyam Sinha, CEO and co-founder of Acuvity, a provider of runtime Gen AI security and governance solutions, in Sunnyvale, Calif.
“Whereas shadow IT focused on addressing a specific challenge for particular employees or departments, shadow AI addresses multiple challenges for multiple employees and departments. Hence, the greater appeal,” he said. “The abundance and rapid development of Gen AI services also means employees can find the right solution. Of course, all these traits have direct security implications.”
Banning AI Tools Backfires
To support innovation while minimizing the threat of shadow AI, enterprises must take a three-pronged approach, asserted Kris Bondi, CEO and co-founder of Mimoto, a threat detection and response company in San Francisco. They must educate employees on the dangers of unsupported, unmonitored AI tools, create company protocols for what is not acceptable use of unauthorized AI tools, and, most importantly, provide AI tools that are sanctioned.
“Explaining why one tool is sanctioned and another isn’t greatly increases compliance,” she told TechNewsWorld. “It does not work for a company to have a zero-use mandate. In fact, this results in an increase in stealth use of shadow AI.”
In the very near future, more and more applications will be leveraging AI in different forms, so the reality of shadow AI will be present more than ever, added AppOmni’s Ruzzi. “The best strategy here is employee training and AI usage monitoring,” she said.
“It will become crucial to have in place a powerful SaaS security tool that can go beyond detecting direct AI usage of chatbots to detect AI usage connected to other applications,” she continued, “allowing for early discovery, proper risk assessment, and containment to minimize possible negative consequences.”
“Shadow AI is just the beginning,” KnowBe4’s McQuiggan added. “As more teams use AI, the risks grow.”
He recommended that companies start small, identify what’s being used, and build from there. They should also get legal, HR, and compliance involved.
“Make AI governance part of your broader security program,” he said. “The sooner you start, the better you can manage what comes next.”
Security Is Not Privacy, Part 1: The Mobile Target
In technical fields like information technology, definitions are fundamental. They are the building blocks for constructing useful applications and systems. Yet, despite this, it’s easy to assume a term’s definition and wield it confidently before discovering its true meaning. The two closely related cases that stand out to me are “security” and “privacy.”
I say this with full awareness that, in my many writings on information security, I never adequately distinguished these two concepts. It was only after observing enough conflation of these terms that I resolved to examine my own casual treatment of them.
So, with the aim of solidifying my own understanding, let’s properly differentiate “information security” and “information privacy.”
Security vs. Privacy: Definitions That Matter
In the context of information technology, what exactly are security and privacy?
Security is the property of preventing unauthorized parties from accessing or altering your data.

Privacy is the property of preventing any third party from observing your activities without your express consent.

As you can see, these principles are related, which is one reason why they’re commonly interchanged. The distinction becomes clearer with examples.
Let’s start with an instance where security applies, but privacy does not.
Spotify uses digital rights management (DRM) software to keep its media secure but not private. DRM is a whole topic of its own, but it essentially uses cryptography to enforce copyright. In Spotify’s case, it’s what makes the experience streaming rather than just downloading: the song’s file is present on your device just as if you’d downloaded it, but Spotify’s DRM cryptography prevents you from opening the file without the Spotify application.
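To make that mechanism concrete, here is a minimal Python sketch of the general idea behind encryption at rest: the content cached on disk is scrambled with a key that only the player application holds. A toy XOR keystream stands in for real cryptography here; this is an illustration of the concept, not Spotify’s actual scheme.

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    """Derive a repeatable byte stream from a key (toy construction, not real crypto)."""
    for block in count():
        yield from hashlib.sha256(key + block.to_bytes(8, "big")).digest()

def transform(data: bytes, key: bytes) -> bytes:
    """XOR data with the keystream; applying it twice restores the original."""
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

app_key = b"held-only-inside-the-player-app"  # hypothetical key name
song = b"raw audio bytes..."

on_disk = transform(song, app_key)          # what gets cached on the device
assert on_disk != song                      # unreadable without the key
assert transform(on_disk, app_key) == song  # the app can still play it
```

Without the key, the cached bytes are noise; with it, the application reconstructs the stream on demand. That is exactly the security-without-privacy split: the data is protected from outsiders, yet anyone who installs the app becomes an authorized reader.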
The data on Spotify are secure because only users of the application can stream audio, and streamed content can’t be retained, opened, or transmitted to non-users. However, Spotify’s data is not private because nearly anyone with an email address can be a user. Thus, in practice, the company cannot control who exactly can access its data.
A more complex example of security without privacy is social media.
When you sign up for a social media platform, you accept an end-user license agreement authorizing the platform to share your data with its partners and affiliates. Your data stored with “authorized parties” on servers controlled by the platform and its affiliates would be considered secure, provided all these entities successfully defend your data against theft by unauthorized parties.
In other words, if everyone who is allowed to have your data encrypts it in transit and at rest, insulates and segments their networks, etc., then your data is secure no matter how many affiliates receive it. In practice, the more parties that have your data, the more likely it is that any one of them is breached, but in theory, they could all defend your data.
On the other hand, any data you fork over to the social network is not private because you can’t control who uses your data and how. As soon as your data lands on the platform’s servers, you can’t restrict what they do with it, including sharing your data with other entities, which you also can’t control.
Both examples illustrate security without privacy. That’s because privacy entails security, but not the reverse. All squares are rectangles, but not all rectangles are squares. If you have privacy, meaning you can completely enforce how any party uses your data, it is secure by definition because only authorized parties can access your data.
Mobile Devices: Secure but Not Private
Casually mixing security and privacy can lead people to misunderstand the information security properties that apply to their data in any given scenario. By reevaluating for ourselves whether a given technology affords us security and privacy, we can have a more accurate understanding of how accessible our data really is.
One significant misconception I’ve noticed concerns mobile devices. I get the impression that the digital privacy content sphere regards mobile devices as not secure because they aren’t private. But while mobile is designed not to be private, it is specifically designed to be secure.
Why is that?
Because the value of data is in keeping it in your hands and out of your competitor’s. If you collect data but anyone else can grab your copy, you are not only at no advantage but also at a disadvantage since you’re the only party that spent time and money to collect it from the source.
With modest scrutiny, we’ll find that every element of a mobile OS that might be marketed as a privacy feature is, in fact, strictly a security feature.
Cybersecurity professionals have hailed application permissions as a major stride in privacy. But whom are they designed to help? These menus govern applications that request access to certain hardware, from microphones and cameras to flash memory storage and wireless radios. This access restriction feature serves the OS developer by letting users lock the developer’s competitors out of that same data. The mobile OS developer controls the OS with unauditable compiled code. For all you know, permission controls on all the OS’s native apps could be ignored.
However, even if we assume that the OS developer doesn’t thwart your restrictions on their own apps, the first-party apps still enjoy pride of place. There are more of them; they are preinstalled on your device, facilitate core mobile device features, require more permissions, and often lose core functions when those permissions are denied.
Mobile OSes also sandbox every application, forcing each to run in an isolated software environment, oblivious to other applications and the underlying operating system. This, too, benefits the OS vendor. Like the app permission settings, this functionality makes it harder for third parties to grab the same data the OS effortlessly ingests. The OS relies on its own background processes to obtain the most valuable data and walls off every other app from those processes.
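The gatekeeping pattern behind permissions and sandboxing can be modeled in a few lines. This hypothetical Python sketch (not any real mobile API; all names are invented) shows the structural point: apps never touch hardware directly, they ask an OS-controlled broker, and the broker consults a grant table that the OS alone maintains.

```python
class PermissionDenied(Exception):
    pass

class Sandbox:
    """Toy model of an OS permission broker: apps reach hardware only through it."""
    def __init__(self, grants):
        # grants: {app_name: set of resources the user has approved}
        self.grants = grants

    def access(self, app, resource):
        if resource not in self.grants.get(app, set()):
            raise PermissionDenied(f"{app} may not use {resource}")
        return f"handle:{resource}"  # opaque handle the broker chooses to hand out

os_broker = Sandbox({"maps": {"location"}, "voice_memo": {"microphone"}})

os_broker.access("voice_memo", "microphone")  # granted: returns a handle
try:
    os_broker.access("maps", "microphone")    # never granted: raises
except PermissionDenied:
    pass
```

Note who sits in the middle: every grant and every denial is enforced by code the OS vendor ships and the user cannot inspect, which is the article’s point about whom the control ultimately serves.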
Mobile Security Isn’t Designed With You in Mind
The most powerful mobile security control is the denial of root privileges to all applications and users. While it goes a long way toward keeping the user’s data safe, it is just as effective at subjecting everything and everyone using the device to the dictates of the OS. The security advantage is undeniable: if your user account can’t use root, then any malware that compromises it can’t either.
By the same token, because you don’t have complete control over the OS, you are unable to reconfigure your device for privacy from the OS vendor.
I’m not disparaging any of these security controls. All of them reinforce the protection of your data. I’m saying that they are not done primarily for the user’s benefit; that is secondary.
Those of you familiar with my work might see the scroll bar near the bottom of this page and wonder why I haven’t mentioned Linux yet. The answer is that desktop operating systems, my preferred kind of Linux OS, benefit from their own examination. In a follow-up to this piece, I will discuss the paradox of desktop security and privacy.
Please stay tuned.
DexCare AI Platform Tackles Health Care Access, Cost Crisis
Care management platform DexCare is applying artificial intelligence in an innovative way to fix health care access issues. Its AI-driven platform helps health systems overcome rising costs, limited capacity, and fragmented digital infrastructure.
As Americans face worsening health outcomes and soaring costs, DexCare Co-founder Derek Streat sees opportunity in the crisis and is leading a push to apply AI and machine learning to health care’s toughest operational challenges — from overcrowded emergency rooms to disconnected digital systems.
No stranger to using AI to solve health care issues, Streat is guiding DexCare as it leverages AI and ML to confront the industry’s most persistent pain points: spiraling costs, resource constraints, and the impossible task of doing more with less. Its platform helps liberate data silos to orchestrate care better and deliver a “shoppable” experience.
The combination unlocks patient access to care and optimizes health care resources. According to the company, DexCare enables health systems to see up to 40% more patients with existing clinical resources.
Streat readily admits that some advanced companies use AI to enhance clinical and medical research. However, advanced AI tools such as conversational generative AI are less common in the health care access space. DexCare addresses that service gap.
“Access is broken, and our fundamental belief is that there haven’t been enough solutions to balance patient, provider, and health system needs and objectives,” he told TechNewsWorld.
Improving Patient Access With Predictive AI
Achieving that balance depends on the underlying information drawn from health care providers’ neural networks, ML models, classification systems, and advancements in generative AI. These elements build on one another.
Derek Streat, Co-founder of DexCare
With the goal of a better customer experience, DexCare’s platform helps care providers optimize the algorithm so everyone benefits. The focus is on ensuring patients get what matches their intent and motivations while respecting the providers’ capacity and needs, explained Streat.
He describes the platform’s technology as a foundational pyramid based on data that AI optimizes and manages. Those components ensure high-fidelity outcome predictions for recommended care options.
“It could be a doctor in a clinic or a nurse in a virtual care system,” he suggested. “I’m not talking about clinical outcomes. I’m talking about what you’re looking for.”
Ultimately, that managed balance will keep providers from burning out, making this a sustainable business line for the health system.
From Providence Prototype to Scalable Solution
Streat defined DexCare as an access optimization company. He shared that the platform originated from a ground-floor build within the Providence Health System.
After four years of development and validation, he launched the technology for broader use across the health care industry.
“It’s well tested and very effective in what it does. That allowed us to have something scalable across organizations as well. Our expansion makes health care more discoverable to consumers and patients and more sustainable for medical providers and the health systems we serve,” he said.
Digital Marquee for Consumers, Service Management for Providers
DexCare’s AI works on multiple levels. It provides health care system or medical facility services as a contact center. That part attracts and curates audiences, consumers, and patients. Its digital assets could be websites, landing pages, or screening kiosks.
Another part of the platform intelligently navigates patients to the safest and best care option. This process engages the accumulated data and automatically allocates the health system’s resources.
“It manages schedules and available staff and facilities and automatically allocates them when and where they can be most productively employed,” explained Streat.
The platform excels at load balancing. It uses AI to rationalize all those components. The decision engine uses AI to ensure that the selected resources and needed services match so the medical treatment can be done most efficiently and effectively to accommodate the patient and the organization.
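As an illustration of the kind of load balancing described, here is a toy Python sketch of capacity-aware routing: each patient is sent to the cheapest care option that meets their required care level and still has open slots. All names and numbers are invented for illustration; DexCare’s actual decision engine is not public.

```python
# Toy capacity-aware router: pick the cheapest care option that meets the
# patient's required care level and still has open slots. Purely illustrative.

options = [  # care level: 0 = self-service, 1 = virtual nurse, 2 = clinic doctor
    {"name": "self_service",  "level": 0, "slots": 100, "cost": 1},
    {"name": "virtual_nurse", "level": 1, "slots": 3,   "cost": 5},
    {"name": "clinic_doctor", "level": 2, "slots": 2,   "cost": 20},
]

def route(patient_level: int) -> str:
    """Greedy match: cheapest adequate option with remaining capacity."""
    eligible = [o for o in options
                if o["level"] >= patient_level and o["slots"] > 0]
    if not eligible:
        return "waitlist"
    best = min(eligible, key=lambda o: o["cost"])
    best["slots"] -= 1
    return best["name"]

# Low-acuity patients absorb cheap capacity, preserving scarce doctor slots.
assignments = [route(level) for level in (0, 1, 2, 1, 1, 1)]
```

With these numbers, the first three mid-acuity patients go to the virtual nurse; once that capacity is exhausted, the fourth overflows to the clinic doctor. That overflow behavior is the essence of matching resources to demand instead of sending everyone to the most expensive setting.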
How DexCare Integrates With CRM Platforms
According to Streat, DexCare is not customer relationship management software. Instead, the platform integrates its AI tools and data services with CRM platforms such as Salesforce and Oracle.
“We make it as flexible as we can. It is pretty scalable to the point where now we can touch about 20% of the U.S. population through our health system partners,” he offered.
Patients often do not realize they are interacting with a DexCare-powered experience when using the services of Kaiser, Providence, and SSM Health, some of the health systems that use the platform. The platform is flexible and adapts to the needs of various health agencies.
For instance, fulfillment technologies book appointments and supply synchronous virtual solutions.
“Whatever the modality or setting is, we can either connect with whatever you’re using as a health system, or you can use your own underlying pieces as well,” said Streat.
He noted that the intelligent data acquisition built into the DexCare platform accesses the electronic medical record, which includes patients’ demographics, medical history, diagnoses, medications, allergies, immunization records, lab results, and treatment plans.
“The application programming interface gives us real-time availability, allows us to predict a certain provider’s capacity, and maintains EMR as a source of truth,” said Streat.
AI’s Long-Term Role in Health Care Access
Health care management by conversational generative AI provides insights into where organizations struggle, need to adjust their operations, or reassign staff to manage patient flow. That all takes place on the platform’s back end.
According to Streat, the front-end value proposition is pretty simple. It helps get 20% to 30% more patients into the health system. Organizations generate nine times the initial visit value in downstream revenue for additional services, Streat said.
He assured that the other part of the value proposition is a lower marginal cost of delivering each visit. That results from matching resources with patients in a way that allows balancing the load across the organization’s network.
“That depends on the specific use case, but we find up to a 40% additional capacity within the health system without hiring additional resources,” he said.
How? That is where the underlying AI data comes into play. It helps practitioners make more informed decisions about which patients should be matched with which providers.
“Not everybody needs to see an expensive doctor in a clinic,” Streat contended. “Sometimes, a nurse in a virtual visit or educational information will be just fine.”
Despite all the financial metrics, patients simply want to get treated and move on, which is really what the game is here, he surmised.
Why Generative AI Lags in Health Care
Streat lamented that, despite the rapidly developing sophistication of generative AI, including conversational interfaces, analytical capability, and predictive mastery, these technologies are being applied throughout other industries and businesses but are not yet widely adopted in health care systems.
He indicated that part of that lag is that health care access needs are different and not as suited for conversational AI solutions hastily layered onto legacy systems. Ultimately, changing health care requires delivering things at scale.
“Within a health system, its infrastructure, and the plumbing required to respect the systems of records, it’s just a different world,” he said.
Streat sees AI making it possible for us to move away from searching through a long list of doctors online to booking through a robot operator with a pleasant accent.
“We will focus on the back-end intelligence and continue to apply it to these lower-friction ways for people to interact with the health system. That’s incredibly exciting to me,” he concluded.
AMD at Computex 2025: Making the Case for an AI Powerhouse
With sweeping product announcements across GPUs, CPUs, and AI PCs, AMD is signaling that its transformation from a high-performance computing stalwart to a full-spectrum AI leader is well underway. The post AMD at Computex 2025: Making the Case for an AI Powerhouse appeared first on TechNewsWorld.
Cell Phone Satisfaction Tumbles to 10-Year Low in Latest ACSI Survey
By John P. Mello Jr.
May 21, 2025 5:00 AM PT
What a difference a year makes. Twelve months ago, cell phone satisfaction was riding high in the American Customer Satisfaction Index, which surveys U.S. consumers. This year, it has tumbled to a 10-year low.
The ACSI, a national economic indicator for over 25 years, reported Tuesday that after reaching an all-time high in 2024, cell phone satisfaction fell to its lowest point in a decade, scoring 78 on a scale of 100.
“Brands keep racing to add new capabilities, yet customers still judge smartphones by the fundamentals,” Forrest Morgeson, an associate professor of marketing at Michigan State University and Director of Research Emeritus at the ACSI, said in a statement.
“Only when companies strengthen the essentials — battery life, call reliability, and ease of use — does innovation truly deliver lasting satisfaction,” he continued.
“I totally agree,” added Tim Bajarin, president of Creative Strategies, a technology advisory firm in San Jose, Calif.
“Battery life is the number one issue we see in our smartphone surveys,” he told TechNewsWorld. “And call reliability is always a concern because dropped calls or disconnects during social media sessions are frustrating.”
People still get excited about new features, but they also want longer battery life and phones that are easier to use than before, countered Bryan Cohen, CEO of Opn Communication, a telecommunications agency based in Sheridan, Wyo.
“Take my father. He’s 72 years old, and he wanted an iPhone 16,” he told TechNewsWorld. “I finally went out and got it for him. He got really excited about AI, but then he gets frustrated with it because it’s not easy to use, and he gets mad at the phone.”
Phone Makers Take a Hit
Dissatisfaction with cell phones affected manufacturers’ ratings, too, according to the ACSI study, which was based on 27,494 completed surveys. Both Apple’s and Samsung’s ratings slipped a point to 81, although Samsung had a slight edge over Apple in the 5G phone category. Both, however, had significant leads in satisfaction compared to their nearest rivals, Google and Motorola, which slid three points to 75.
The ACSI researchers also found a widening gap in satisfaction between owners of 5G and non-5G phones. Satisfaction with 5G phones fell two points but still posted a respectable score of 80. Meanwhile, satisfaction with phones using legacy technology plummeted seven points to 68.
“It’s very important to understand that the mobile networks in the U.S. use different spectrum bands,” explained John Strand of Denmark-based Strand Consulting, a consulting firm with a focus on global telecom.
“If you have an old phone, it may not run so well on all spectrum bands,” he told TechNewsWorld. “It certainly won’t work as well as a new phone with a newer chipset.”
The dissatisfaction can also be due to a technology misunderstanding, added Opn Comm’s Cohen. “People will have a phone for four or five years and not understand their phone might not have been built for 5G,” he explained.
“People expect their LTE phones to automatically go to the next generation,” he continued. “That’s not necessarily the case. Their phone might not be 5G compatible, just like some phones still are not eSIM compatible.”
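Strand’s and Cohen’s points boil down to set intersection: a handset can only use the slices of a carrier’s spectrum its radio hardware was built for, so an older LTE phone may still connect yet miss the bands carrying most 5G capacity. A minimal sketch, using hypothetical band lists rather than any real carrier’s spectrum holdings:

```python
# Illustrative only: hypothetical band identifiers, not real carrier data.
# A phone's usable spectrum is the intersection of what its radio supports
# and what the carrier actually operates.

CARRIER_BANDS = {"n41", "n71", "n77", "B2", "B12", "B66"}  # hypothetical

def usable_bands(phone_bands, carrier_bands=CARRIER_BANDS):
    """Return the bands this phone can actually use on this network."""
    return phone_bands & carrier_bands

old_lte_phone = {"B2", "B4", "B12"}                      # LTE-only radio
new_5g_phone = {"B2", "B12", "B66", "n41", "n71", "n77"}  # 5G-capable radio
```

The older phone overlaps the carrier on only two LTE bands, while the newer handset also reaches the 5G (NR) bands, which is one concrete reason satisfaction diverges between 5G and legacy devices.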
ISPs See Modest Satisfaction Improvements
On the plus side, the study found that satisfaction with ISPs, including fiber and non-fiber services, ticked up a point to 72. Satisfaction with fiber declined by one point, to 75, the study noted, while non-fiber jumped three points, to 70.
The improved satisfaction rating can be attributed to new investments by the carriers, said Creative Strategies’ Bajarin. “They are gaining new technologies that boost their signal, including some redundancy technologies to make their lines more stable,” he explained.
The study noted that AT&T Fiber is leading the fiber segment in satisfaction, scoring a 78 on the index despite a three-point drop. Hot on the heels of AT&T are Google Fiber and Verizon FiOS, at 76, and Xfinity Fiber, at 75.
A big gainer in the fiber segment was Optimum, which jumped eight points to 71. The ACSI researchers explained that Optimum’s satisfaction burst was driven primarily by its efforts to add value by strengthening the quality of its customer service.
The remaining group of smaller ISPs didn’t fare as well. They dropped nine points to 70. The study noted that “all elements of the fiber customer experience have worsened over the past year, with notable decreases in measures relating to the quality of internet service.”
In the non-fiber segment, T-Mobile gained three points to tie leader AT&T at 78. According to the study, T-Mobile has been successful in improving the consistency of its non-fiber service while adding value through improved customer service and plan options. Not far behind the leaders is Verizon, which saw its satisfaction score jump four points to 77.
Kinetic by Windstream was a big gainer in the non-fiber segment. It surged 11 points to 62. “By making significant improvements in practical service metrics, Windstream drives customer perceptions of the value of its Kinetic service higher,” the study explained.
Wireless Service Satisfaction Slips
Declining satisfaction afflicted the wireless phone service industry, according to the ACSI. Overall, the industry dropped a point to 75. Its segments also saw satisfaction declines: value mobile virtual network operators (MVNOs) slid three points to 78; mobile network operators (MNOs) fell one point to 75; and full-service MVNOs slipped three points to 74.
Individual MNO players in the market experienced similar declines, with T-Mobile dropping one point to 76, AT&T falling five points to 74, and UScellular losing three points to 72. Verizon was the only gainer in the top four, with a one-point increase to 75.
The ACSI researchers explained that in addition to measuring satisfaction with operators, the study measures satisfaction with call quality and network capability. Over the last year, AT&T suffered the largest decrease in both, dropping six points to 77 for call quality and eight points to 76 for network capability.
A new feature of this year’s telecommunication and cell phone report is the addition of smartwatches. The study found that Samsung, with a score of 83, edged Apple Watch, which scored 80 in satisfaction. Fitbit finished third with a score of 72.
John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.
Democratic AI Revolution: Power to the People and Code to the Masses
In my town of Bend, Oregon, where the spirit of independence and community thrives, the concept of “Democratic AI” resonates in a uniquely powerful way. In a world increasingly shaped by algorithms and artificial intelligence, the notion of democratizing its creation, access, and governance offers a powerful counterpoint to the centralized control often associated with Big Tech.
But what exactly is Democratic AI, and why might it be the best, perhaps even the only truly sustainable path forward for this transformative technology? Buckle up, because we’re about to dive into the digital town hall of the future.
Then, we’ll close with my Product of the Week: Slate’s new pickup truck, backed by Jeff Bezos, which could transform the EV market.
What Democratic AI Really Means
At its core, Democratic AI is a philosophy and set of practices aimed at distributing the power of AI more broadly. It encompasses several key principles:
Openness and Transparency: The underlying code, data, and models are often open-source or readily accessible for scrutiny and modification. Think of it as the difference between a proprietary black box and a transparent, well-documented library.
Decentralization of Development: Instead of being solely the domain of large corporations with vast resources, Democratic AI encourages contributions from a diverse range of individuals, researchers, smaller organizations, and even governments. It’s the digital equivalent of a community barn-raising.
Participatory Governance: The ethical guidelines, development priorities, and deployment strategies are shaped through broader stakeholder involvement rather than top-down mandates. Imagine citizens having a say in how AI is used in their communities.
Accessibility and Affordability: The tools and resources needed to develop and utilize AI are made as widely available and affordable as possible, breaking down barriers to entry. It’s about leveling the playing field so that innovation isn’t limited by deep pockets.
Data Sovereignty and Privacy: Individuals and communities retain greater control over their data, and privacy is prioritized when developing and deploying AI systems. It’s about ensuring AI serves people, not the other way around.
Democratic AI May Be the Best Path
So, why is this open and collaborative approach potentially superior to more traditional, often proprietary, AI models? Here are several advantages it brings to the table:
Faster and More Diverse Innovation: When you open the floodgates to contributions from a global community, the pace of innovation explodes. Diverse perspectives and skill sets lead to more creative solutions and the exploration of a wider range of ideas, outpacing what any single organization could accomplish. It’s like having a thousand brilliant minds tackling a problem instead of just a handful.
Increased Trust and Accountability: Transparency in code and data allows for greater scrutiny, making it easier to identify and address biases, errors, and potential security vulnerabilities. When the workings of an AI are open for all to see, there’s a greater sense of trust and accountability. It’s harder to hide digital shenanigans in broad daylight.
Reduced Vendor Lock-In and Monopoly Risk: By promoting open standards and interoperability, Democratic AI reduces reliance on proprietary platforms, fostering a more competitive landscape and mitigating the risks associated with the dominance of a few powerful AI providers. It’s about avoiding digital monopolies where a few companies control core AI resources.
Alignment with Public Good: With broader participation in governance and ethical considerations, Democratic AI is more likely to be aligned with the public good and societal values rather than solely driven by corporate profits or narrow interests. It’s about building AI that serves humanity, not just shareholders.
Empowerment and Skill Development: Democratizing AI empowers individuals and smaller organizations to become creators and innovators, fostering a broader understanding of the technology and driving the development of local expertise. It’s about turning passive consumers into active participants in the AI revolution.
Governments and Companies Advancing Democratic AI
While the concept is still evolving, several governments and companies are dipping their toes, or even diving headfirst, into the waters of Democratic AI:
The European Union: With its emphasis on digital sovereignty and open-source initiatives, the EU actively promotes a more democratic and human-centric approach to AI development and regulation.
Various Open-Source AI Initiatives: Projects like Hugging Face, with its open platform for models and datasets, and initiatives around open data for AI training, embody the spirit of Democratic AI.
Decentralized AI Platforms: Emerging projects are exploring blockchain and other decentralized technologies to create more open and community-governed AI infrastructure.
Government-Backed Open AI Research: Some governments are supporting open research efforts to promote collaboration and transparency in AI development. For example, Canada funds its CIFAR AI Chairs program, and the U.K. advances similar goals through the Alan Turing Institute.
Benefits: A Brighter Algorithmic Future
Aggressively pursuing Democratic AI could deliver transformative results:
More Ethical and Fair AI Systems: Open scrutiny and diverse participation can help mitigate biases embedded in data and algorithms, leading to fairer and more equitable AI outcomes.
AI Tailored to Diverse Needs: A decentralized and collaborative approach can foster the development of AI solutions that address the specific needs and contexts of diverse communities and cultures.
Greater Public Trust in AI: Transparency and participatory governance can build greater public trust in AI systems, fostering wider adoption and acceptance.
Accelerated Solutions to Global Challenges: By harnessing the collective intelligence of a global community, Democratic AI can accelerate the development of solutions to pressing global challenges, from climate change to health care.
Where Democratic AI Stands Today
The concept of Democratic AI is still in its relatively early stages. While the principles of open source have a long history in software development, applying them comprehensively to the complex world of AI — including data, models, and governance — is a more recent endeavor.
We are likely in the “seed” or early “sapling” phase of this movement. While there are promising initiatives and growing awareness, widespread adoption and the establishment of robust Democratic AI ecosystems will take time, research, and concerted effort from individuals, organizations, and governments.
Wrapping Up: A People-Led AI Future
Democratic AI offers a compelling vision for the future of artificial intelligence, one where power is distributed, innovation is accelerated through collaboration, and the technology serves humanity’s broader interests. While the path to realizing this vision is still unfolding, the principles of openness, transparency, and participatory governance hold immense promise.
As we navigate the transformative power of AI, embracing a democratic approach might not just be the best way forward; it might be the only way to ensure that this powerful technology truly benefits all of us, here in Bend and across the interconnected world. The seeds of a people’s AI are being planted, and it’s up to us to cultivate a future where everyone shares its fruits.
Slate Electric Pickup – Bezos-Backed Bargain With a Twist
The buzz around the Slate electric pickup truck has been palpable, and after its recent unveiling, it’s clear why.
This no-nonsense EV, with ties to Amazon founder Jeff Bezos through his investment in the parent company Re:Build Manufacturing, is making waves with its incredibly aggressive starting price. In a market where electric trucks often flirt with six-figure sums, Slate is aiming for the heart of the value-conscious buyer.
What makes the Slate EV truly intriguing is its innovative modular construction. Reportedly, the pickup can be converted into a compact SUV with a relatively inexpensive kit. This transformative capability addresses the reality of how many pickup owners actually use their vehicles.
For the vast majority, the truck bed often sits empty or carries lighter loads, while the need for passenger space and enclosed cargo for family and everyday life is more frequent. Slate’s ability to morph into an SUV offers unparalleled versatility, essentially providing two vehicles in one at an exceptionally low total cost of ownership.
Slate’s price point makes it especially appealing to buyers focused on practicality and value. It is positioned to be one of the most affordable EVs on the market, not just the most affordable electric truck. That opens electric mobility to buyers who were previously priced out of the EV market.
Considering that most pickup truck owners primarily use their vehicles for commuting, errands, and occasional light hauling, Slate’s EV core functionality likely meets their needs without the excess capacity and exorbitant cost of larger, more powerful trucks.
Adding to its affordability, Slate’s truck is reportedly still eligible for the U.S. federal tax credit for electric vehicles. For those who qualify, the credit brings the effective starting price down even further, making it an absolute steal and a compelling alternative to many gasoline-powered used vehicles.
While specific performance figures are still emerging, early reports suggest a respectable range suitable for daily driving and a powertrain adequate for typical truck duties, even accounting for the added weight of its modular components. Its focus on affordability likely means it won’t boast the blistering acceleration of high-end EVs, but it promises a practical and efficient driving experience.
The Slate electric pickup, with its Bezos connection, groundbreaking modularity, and incredibly low price, could very well be the value king of the electric truck revolution. The clear value of this Slate pickup — and the fact that it’s likely giving Elon Musk nightmares — makes it my Product of the Week.
Credit: The Slate pickup truck images are courtesy of Slate.
#democratic #revolution #power #people #codeDemocratic AI Revolution: Power to the People and Code to the MassesIn my town of Bend, Oregon, where the spirit of independence and community thrives, the concept of “Democratic AI” resonates in a uniquely powerful way. In a world increasingly shaped by algorithms and artificial intelligence, the notion of democratizing its creation, access, and governance offers a powerful counterpoint to the centralized control often associated with Big Tech. But what exactly is Democratic AI, and why might it be the best, perhaps even the only truly sustainable path forward for this transformative technology? Buckle up, because we’re about to dive into the digital town hall of the future. Then, we’ll close with my Product of the Week: Slate’s new pickup truck, backed by Jeff Bezos, which could transform the EV market. What Democratic AI Really Means At its core, Democratic AI is a philosophy and set of practices aimed at distributing the power of AI more broadly. It encompasses several key principles: Openness and Transparency: The underlying code, data, and models are often open-source or readily accessible for scrutiny and modification. Think of it as the difference between a proprietary black box and a transparent, well-documented library. Decentralization of Development: Instead of being solely the domain of large corporations with vast resources, Democratic AI encourages contributions from a diverse range of individuals, researchers, smaller organizations, and even governments. It’s the digital equivalent of a community barn-raising. Participatory Governance: The ethical guidelines, development priorities, and deployment strategies are shaped through broader stakeholder involvement rather than top-down mandates. Imagine citizens having a say in how AI is used in their communities. 
Accessibility and Affordability: The tools and resources needed to develop and utilize AI are made as widely available and affordable as possible, breaking down barriers to entry. It’s about leveling the playing field so that innovation isn’t limited by deep pockets. Data Sovereignty and Privacy: Individuals and communities retain greater control over their data, and privacy is prioritized when developing and deploying AI systems. It’s about ensuring AI serves people, not the other way around. Democratic AI May Be the Best Path So, why is this open and collaborative approach potentially superior to more traditional, often proprietary, AI models? Here are several advantages it brings to the table: Faster and More Diverse Innovation: When you open the floodgates to contributions from a global community, the pace of innovation explodes. Diverse perspectives and skill sets lead to more creative solutions and the exploration of a wider range of ideas, outpacing what any single organization could accomplish. It’s like having a thousand brilliant minds tackling a problem instead of just a handful. Increased Trust and Accountability: Transparency in code and data allows for greater scrutiny, making it easier to identify and address biases, errors, and potential security vulnerabilities. When the workings of an AI are open for all to see, there’s a greater sense of trust and accountability. It’s harder to hide digital shenanigans in broad daylight. Reduced Vendor Lock-In and Monopoly Risk: By promoting open standards and interoperability, Democratic AI reduces reliance on proprietary platforms, fostering a more competitive landscape and mitigating the risks associated with the dominance of a few powerful AI providers. It’s about avoiding digital monopolies where a few companies control core AI resources. 
Alignment with Public Good: With broader participation in governance and ethical considerations, Democratic AI is more likely to be aligned with the public good and societal values rather than solely driven by corporate profits or narrow interests. It’s about building AI that serves humanity, not just shareholders. Empowerment and Skill Development: Democratizing AI empowers individuals and smaller organizations to become creators and innovators, fostering a broader understanding of the technology and driving the development of local expertise. It’s about turning passive consumers into active participants in the AI revolution. Governments and Companies Advancing Democratic AI While the concept is still evolving, several governments and companies are dipping their toes, or even diving headfirst, into the waters of Democratic AI: The European Union: With its emphasis on digital sovereignty and open-source initiatives, the EU actively promotes a more democratic and human-centric approach to AI development and regulation. Various Open-Source AI Initiatives: Projects like Hugging Face, with its open platform for models and datasets, and initiatives around open data for AI training, embody the spirit of Democratic AI. Decentralized AI Platforms: Emerging projects are exploring blockchain and other decentralized technologies to create more open and community-governed AI infrastructure. Government-Backed Open AI Research: Some governments are supporting open research efforts to promote collaboration and transparency in AI development. For example, Canada funds its CIFAR AI Chairs program, and the U.K. advances similar goals through the Alan Turing Institute. Benefits: A Brighter Algorithmic Future Aggressively pursuing Democratic AI could deliver transformative results: More Ethical and Fair AI Systems: Open scrutiny and diverse participation can help mitigate biases embedded in data and algorithms, leading to fairer and more equitable AI outcomes. 
AI Tailored to Diverse Needs: A decentralized and collaborative approach can foster the development of AI solutions that address the specific needs and contexts of diverse communities and cultures. Greater Public Trust in AI: Transparency and participatory governance can build greater public trust in AI systems, fostering wider adoption and acceptance. Accelerated Solutions to Global Challenges: By harnessing the collective intelligence of a global community, Democratic AI can accelerate the development of solutions to pressing global challenges, from climate change to health care. Where Democratic AI Stands Today The concept of Democratic AI is still in its relatively early stages. While the principles of open source have a long history in software development, applying them comprehensively to the complex world of AI — including data, models, and governance — is a more recent endeavor. We are likely in the “seed” or early “sapling” phase of this movement. While there are promising initiatives and growing awareness, widespread adoption and the establishment of robust Democratic AI ecosystems will take time, research, and concerted effort from individuals, organizations, and governments. Wrapping Up: A People-Led AI Future Democratic AI offers a compelling vision for the future of artificial intelligence, one where power is distributed, innovation is accelerated through collaboration, and the technology serves humanity’s broader interests. While the path to realizing this vision is still unfolding, the principles of openness, transparency, and participatory governance hold immense promise. As we navigate the transformative power of AI, embracing a democratic approach might not just be the best way forward; it might be the only way to ensure that this powerful technology truly benefits all of us, here in Bend and across the interconnected world. The seeds of a people’s AI are being planted, and it’s up to us to cultivate a future where everyone shares its fruits. 
Slate Electric Pickup – Bezos-Backed Bargain With a Twist

The buzz around the Slate electric pickup truck has been palpable, and after its recent unveiling, it’s clear why. This no-nonsense EV, with ties to Amazon founder Jeff Bezos through his investment in the parent company Re:Build Manufacturing, is making waves with its incredibly aggressive starting price. In a market where electric trucks often flirt with six-figure sums, Slate is aiming for the heart of the value-conscious buyer.

What makes the Slate EV truly intriguing is its innovative modular construction. Reportedly, the pickup can be converted into a compact SUV with a relatively inexpensive kit. This transformative capability addresses the reality of how many pickup owners actually use their vehicles. For the vast majority, the truck bed often sits empty or carries lighter loads, while the need for passenger space and enclosed cargo for family and everyday life is more frequent. Slate’s ability to morph into an SUV offers unparalleled versatility, essentially providing two vehicles in one at an exceptionally low total cost of ownership.

Slate’s price point makes it especially appealing to buyers focused on practicality and value. It’s positioned to be one of the most affordable EVs on the market, let alone an electric truck. This price point opens electric mobility to buyers who were previously priced out of the EV market. Considering that most pickup truck owners primarily use their vehicles for commuting, errands, and occasional light hauling, the Slate EV’s core functionality likely meets their needs without the excess capacity and exorbitant cost of larger, more powerful trucks.

Adding to its affordability, Slate’s truck is reportedly still eligible for the U.S. federal tax credit for electric vehicles. For those who qualify, this effectively brings the starting price down further, making it an absolute steal and a compelling alternative to many gasoline-powered used vehicles.

While specific performance figures are still emerging, early reports suggest a respectable range suitable for daily driving and a powertrain adequate for typical truck duties, even with the added weight of its modular components. Its focus on affordability likely means it won’t boast the blistering acceleration of high-end EVs, but it promises a practical and efficient driving experience.

The Slate electric pickup, with its Bezos connection, groundbreaking modularity, and incredibly low price, could very well be the value king of the electric truck revolution. The clear value of this Slate pickup — and the fact that it’s likely giving Elon Musk nightmares — makes it my Product of the Week.

Credit: The Slate pickup truck images are courtesy of Slate.
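The credit arithmetic is simple subtraction. A minimal sketch follows; the article's exact prices did not survive extraction, so the MSRP below is a hypothetical placeholder, while $7,500 is the current maximum U.S. federal EV tax credit:

```python
# Hypothetical illustration of the effective-price math. ASSUMED_MSRP is
# an invented placeholder, not the article's figure; $7,500 is the
# current maximum U.S. federal EV tax credit.
ASSUMED_MSRP = 27_500
MAX_FEDERAL_CREDIT = 7_500

def effective_price(msrp: int, credit: int) -> int:
    """Price paid by a buyer who qualifies for the full credit."""
    return msrp - credit

print(effective_price(ASSUMED_MSRP, MAX_FEDERAL_CREDIT))
```

For any buyer who qualifies for the full credit, the sticker price drops dollar-for-dollar by the credit amount, which is what makes a low-MSRP truck so striking against six-figure competitors.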
Apple Adds Brain-to-Computer Protocol to Its Accessibility Repertoire
By John P. Mello Jr.
May 14, 2025 5:00 AM PT
Among a raft of upcoming accessibility tools revealed Tuesday, Apple announced a new protocol for brain-to-computer interfaces within its Switch Control feature. The protocol allows iOS, iPadOS, and visionOS devices to support an emerging technology that enables users to control their digital hardware without physical movement.
One of the first companies to take advantage of the new protocol will be New York-based Synchron. “This marks a major milestone in accessibility and neurotechnology, where users implanted with Synchron’s BCI can control iPhone, iPad, and Apple Vision Pro directly with their thoughts without the need for physical movement or voice commands,” the company said in a statement.
It added that Synchron’s BCI system will seamlessly integrate with Apple’s built-in accessibility features, including Switch Control, giving users an intuitive way to use their devices and laying the foundation for a new generation of cognitive input technologies.
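Apple has not published the protocol's internals, but conceptually a BCI becomes one more "switch" source feeding the same accessibility actions that a physical button or head gesture would trigger. The sketch below is purely hypothetical; every name in it (NeuralIntent, SWITCH_ACTIONS, dispatch) is invented for illustration and is not an Apple API:

```python
# Hypothetical sketch only -- Apple has not published this API.
# A BCI driver decodes neural activity into discrete intents; the
# accessibility layer then maps each intent onto an existing switch
# action, exactly as it would for a physical switch or gesture.
from enum import Enum

class NeuralIntent(Enum):
    SELECT = "select"
    NEXT = "next"
    HOME = "home"

SWITCH_ACTIONS = {
    NeuralIntent.SELECT: "activate focused item",
    NeuralIntent.NEXT: "move focus to next item",
    NeuralIntent.HOME: "return to home screen",
}

def dispatch(intent: NeuralIntent) -> str:
    """Translate a decoded intent into a Switch Control-style action."""
    return SWITCH_ACTIONS[intent]
```

Framing BCI input this way explains why Synchron can integrate "seamlessly": the operating system already abstracts switch sources, so a thought-decoded intent is just another event producer.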
“This marks a defining moment for human-device interaction,” Synchron CEO and Co-Founder Tom Oxley said in a statement. “BCI is more than an accessibility tool, it’s a next-generation interface layer.”
“Apple is helping to pioneer a new interface paradigm, where brain signals are formally recognized alongside touch, voice, and typing,” he continued. “With BCI recognized as a native input for Apple devices, there are new possibilities for people living with paralysis and beyond.”
BCI Validation
Tetiana Aleksandrova, CEO of Subsense, a biotechnology company in Covina, Calif., specializing in non-surgical bidirectional brain-computer interfaces, maintained Apple’s announcement marks a powerful signal showing the evolution of BCI. “I see it as Apple throwing open the gates — a single-stroke move that invites clinically-validated BCIs like Synchron’s Stentrode to plug straight into a billion-device ecosystem,” she told TechNewsWorld.
“For patients, it means ‘mind-to-message’ control without middleware,” she said. “For the BCI industry, it’s a public stamp that neural input is ready for prime time — and yes, that’s a thrilling milestone for all of us building the next generation of non-surgical systems. It’s a shift, moving BCI from a nascent technology to more mainstream applications.”
Aleksandrova maintained that BCI fits nicely into Apple’s overall accessibility strategy.
“Apple’s playbook is to solve an extreme edge case, polish the UX until it’s invisible, then let the rest of the world adopt it,” she explained. “VoiceOver paved the way for Siri. Switch Control turned into Face Gestures. BCI support is the natural next rung on that ladder. Accessibility isn’t a side quest for Apple — it’s the R and D lab that future-proofs their core UI.”
“Apple devices put unlimited information at users’ fingertips,” she added, “but for people with disabilities from TBI or ALS, full access can be out of reach. BCI technology helps bridge that gap, giving them a way to control and interact with their devices using only their brain activity.”
Analysts See BCI as Long-Term Technology
Apple’s embrace of BCI is significant, but its impact still lies in the future, noted Will Kerwin, technology equity analyst with Morningstar Research Services in Chicago. “While a particularly cool announcement, we think this type of feature is a long way away from full commercialization and not material to investors in Apple at this point,” he told TechNewsWorld.
Kerwin pointed out that Synchron’s Stentrode BCI currently only has a sample size of 10 people.
“Long-term, yes, this technology could have huge implications for how humans interact with technology, and we see it as an adjacency to AI, where generative AI could help improve the interface and ability for humans to communicate via the implant,” he said. “But again, we see this as an extremely long-term journey in its nascent days.”
According to the Wall Street Journal, FDA approval of Synchron’s Stentrode technology is years away. The procedure involves implanting a stent-mounted electrode array into a blood vessel in the brain, so there’s no need for open brain surgery.
“Some companies in the BCI space are focused on cortical control of prosthetics, others on cognitive enhancement or memory restoration,” Synchron spokesperson Kimberly Ha told TechNewsWorld. “What sets us apart is our focus on scalability and safety. By implanting via the blood vessels, we avoid open brain surgery, making our approach more feasible for potentially broader medical use.”
Solving the BCI Scalability Problem
Ha acknowledged that there are significant challenges to the broad adoption of BCI. “Scalability is one of the biggest,” she said.
“Historically, many BCI systems have required open brain surgery, which presents serious risks and limits to who can access the technology,” she explained. “It’s simply not scalable for widespread clinical or consumer use.”
“Synchron takes a fundamentally different approach,” she continued. “Our Stentrode device is implanted via the blood vessels, similar to a heart stent, avoiding the need to open the skull or directly penetrate brain tissue. This makes the procedure far less invasive, more accessible to patients, and better suited to real-world clinical deployment.”
There are also challenges to developing the BCI apps themselves. “The biggest challenge in developing BCI applications is the trade-off between signal quality and accessibility,” Aleksandrova explained. “While a directly implanted BCI offers strong brain signals, surgery is risky. With non-invasive systems, the resolution is poor.”
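The trade-off Aleksandrova describes can be made concrete with a toy simulation: the same threshold decoder that is near-perfect at implant-grade signal strength falls toward chance when noise dominates, as with scalp-level recordings. The signal-to-noise values below are invented for illustration, not measurements from any real device:

```python
# Toy model of the signal-quality trade-off: decode a binary command
# from a noisy measurement at two hypothetical signal-to-noise ratios.
import random

def decode_accuracy(snr: float, trials: int = 10_000) -> float:
    """Fraction of binary commands recovered by a simple sign decoder."""
    rng = random.Random(0)  # fixed seed for reproducibility
    correct = 0
    for _ in range(trials):
        bit = rng.choice([-1.0, 1.0])               # intended command
        observed = bit * snr + rng.gauss(0.0, 1.0)  # noisy measurement
        correct += (observed > 0) == (bit > 0)
    return correct / trials

print(decode_accuracy(3.0))   # strong, implant-like signal
print(decode_accuracy(0.3))   # weak, non-invasive-like signal
```

At the higher ratio the decoder is nearly flawless, while at the lower ratio it is only modestly better than a coin flip, which is why each camp in the BCI industry is attacking the problem from a different direction.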
Her company, Subsense, is trying to offer the best of both worlds through the use of nanoparticles, which can provide bidirectional communication by crossing the blood-brain barrier to interact with neurons and transmit signals.
Thought-Driven Interfaces for New Use Cases
Ha noted that in addition to medical applications, BCI could be used for hands-free device control across all types of digital platforms, neuroadaptive gaming, or immersive XR experiences.
“BCI opens doors to applications in mental wellness, communication tools, and cognitive enhancement,” Aleksandrova added.
“You’ll fire off texts, browse AR screens, or write code just by thinking,” she said. “You’ll slip seamlessly into a drone or handle a surgical robot as though it were your own hand, and nudge your smart home with a silent impulse that dims the lights when focus peaks.”
“Entertainment will read the room inside your head — dialing a game’s difficulty or a film’s plot to match your mood — while always-on neural metrics warn of fatigue, migraines, or anxiety hours before you notice and even surface names or ideas when your memory stalls,” she predicted. “Your unique brainwave ‘fingerprint’ will replace passwords, and researchers are already sketching ways to preserve those patterns so our minds can outlast failing bodies.”
“I’m genuinely proud of Synchron and Apple for opening this door,” she said.
John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.
AI Is Rewriting the Rules of Brand Management
By John P. Mello Jr.
May 13, 2025 5:00 AM PT
Although building trust through a carefully crafted brand message is still important, artificial intelligence may be undermining its traditional influence.
“AI isn’t just helping businesses create content or automate tasks; it’s empowering individuals to become instant digital detectives,” Mike Allton, chief storyteller at Agorapulse, a social media management platform for businesses, wrote Monday on LinkedIn.
What that means, he explained, is a company’s entire digital history — reviews, articles, social media sentiment, even employee feedback — is now more transparent and instantly “queryable” than ever before. “The carefully crafted brand message? It’s still important, but AI can now cross-reference it with raw, aggregated public data in seconds,” he noted.
Edwin Miller, CEO of Marchex, a conversation intelligence platform maker headquartered in Seattle, explained that the rise of large language models and real-time data analytics has effectively turned a company’s full digital footprint into a searchable, easy-to-interpret, and evaluative source of truth.
“We’re entering a world where a company’s entire identity, how it treats customers, how it responds to criticism, what employees really think, and how well it delivers on its promises, can be surfaced instantly by AI,” he told TechNewsWorld. “And not just by researchers or journalists, but by consumers, investors, and competitors.”
“This means companies no longer control the brand narrative the way they used to,” he said. “The narrative is now co-authored by customers, employees, and digital observers, with AI acting as a kind of omnipresent interpreter. That changes the playing field for brand management entirely.”
AI Shrinks Trust-Building to Milliseconds
Mark N. Vena, president and principal analyst for SmartTech Research in Las Vegas, argued that brand management is a “huge deal” in the AI age. “Brand management is no longer just about campaigns — it’s about constantly monitoring and reacting to a living, breathing digital footprint,” he told TechNewsWorld.
“Every customer interaction, review, or leaked internal memo can instantly shape public perception,” he said. “That means brand managers must be part storyteller, part crisis manager, and fully agile. The brand isn’t what you say it is — it’s what the internet says it is.”
Allton noted that AI’s capability to “vet” or “audit” is a powerful reminder that, as AI is integrated into businesses, they must also consider how the external AI ecosystem perceives them. “It’s no longer enough to say you’re trustworthy; the data must reflect it because that data is now incredibly accessible and interpretable by AI,” he wrote.
“Trust used to be built over years and could be lost in moments,” added Lizi Sprague, co-founder of Songue PR, a public relations agency in San Francisco. “Now, with AI, trust can be verified in milliseconds. Every interaction, review, and employee comment becomes part of your permanent trust score.”
She told TechNewsWorld: “AI isn’t replacing reputation managers or comms people; it’s making them more crucial than ever. In an AI-driven world, reputation management evolves from damage control to proactive narrative architecture.”
Proactive Transparency
Brand managers will also need to be more proactive. They need to pay attention to how their brand is represented in the most popular AI tools.
“Brands should be conducting searches that test the way their reputation is represented or conveyed in those tools, and they should be paying attention to the sources that are referenced by AI tools,” said Damian Rollison, director of market insights at SOCi, a marketing solutions company in San Diego.
“If a company focuses a lot on local marketing, they should be paying attention to reviews of a business in Google, Yelp, or TripAdvisor — those kinds of sources — all of which are heavily cited by AI,” he told TechNewsWorld.
“If they’re not paying attention to those reviews and taking action to respond when consumers offer feedback — apologizing if they had a bad experience, offering some kind of remedy, thanking customers when they give you positive feedback — then they have even more reason than ever to pay attention to those reviews and respond to them now.”
Dev Nag, CEO and founder of QueryPal, a customer support chatbot based in San Francisco, explained that an AI-searchable landscape will create persistent accountability. “Every ethical lapse, broken promise, and controversial statement lives on in digital archives, ready to be surfaced by AI at any moment,” he told TechNewsWorld.
“Companies can leverage this AI-scrutinized environment by embracing proactive transparency,” he said. “Organizations should use AI tools to continuously monitor customer sentiment across vast data streams, gaining early warning of reputation risks and identifying improvement areas before issues escalate into crises.”
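Continuous sentiment monitoring of the kind Nag describes can be sketched in a few lines. A production system would use a trained sentiment model and streaming infrastructure; the tiny word lists here are assumed stand-ins for illustration only:

```python
# Minimal early-warning sketch: score recent reviews with a naive
# sentiment lexicon and flag when the average turns negative.
# The word lists are illustrative stand-ins for a real sentiment model.
POSITIVE = {"great", "love", "helpful", "fast", "recommend"}
NEGATIVE = {"broken", "scam", "rude", "refund", "worst"}

def score(review: str) -> int:
    """Positive-minus-negative word count for one review."""
    words = set(review.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def reputation_alert(reviews: list[str], threshold: float = 0.0) -> bool:
    """True when average sentiment across recent reviews dips below threshold."""
    return sum(score(r) for r in reviews) / len(reviews) < threshold
```

The point of the sketch is the shape of the system, not the scoring: sentiment is computed continuously over incoming feedback, and a threshold turns a slow drift into a discrete alarm a brand team can act on before it escalates.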
New Era of AI-Driven Accountability
Nag recommends conducting regular AI reputation audits, doubling down on authenticity, pursuing strong media coverage in respected outlets, empowering employees as reputation ambassadors, implementing AI monitoring with rapid response protocols, and preparing for AI-driven crises, including misinformation attacks.
Transparency without controls, though, can harm a brand. “Doing reputation management well requires a tight focus on the behavior of those who can affect the appearance of the related firm,” said Rob Enderle, president and principal analyst of the Enderle Group, an advisory services firm in Bend, Ore.
“If more transparency is created without these controls and training in place, coupled with strong execution, monitoring, and a strong crisis team, the outcome is likely to be catastrophic,” he told TechNewsWorld.
“AI is now part of the reputation equation,” added Matthew A. Gilbert, a marketing lecturer at Coastal Carolina University.
“It monitors everything, from customer reviews to employee comments,” he told TechNewsWorld. “Brands should treat it as an early warning system and act before issues escalate.”
AI in Branding Demands Action, Not Panic
Allton argued that the rise of AI as a reputation manager isn’t a cause for alarm but a cause for action. However, it does make some demands on businesses. They include:
Non-Negotiable Radical Authenticity
If there are inconsistencies between what your brand promises and what the public data reflects, AI-powered searches will likely highlight them. Your operations must genuinely align with your messaging.

“Authenticity is no longer a decision made by brands regarding which cards to reveal; instead, it has become an inevitable force driven by the public, as everything will eventually come to light,” said Reilly Newman, founder and brand strategist at Motif Brands, a brand transformation company in Paso Robles, Calif.

“Authenticity is not merely a new initiative for brands,” he told TechNewsWorld. “It is a necessity and an expected element of any company.”
The “AI Response” Is Your New First Impression
For many, the first true understanding of your business might come from an AI-generated summary, Allton noted. What story is the collective data telling about you?

Kseniya Melnikova, a marketing strategist with Melk PR, a marketing agency in Sioux Falls, S.D., recalled a client who believed their low engagement was due to a lack of clear marketing materials.
“Using AI to analyze their community feedback, we discovered the real issue was that customers misunderstood who they were,” she told TechNewsWorld. “They were perceived as a retailer when, in fact, they were an insurance fulfillment service. With this insight, we produced fewer — but clearer — materials that corrected the misunderstanding and improved customer outcomes.”
Human Values Still Drive the Core Code
While AIs process the data, the data itself reflects human experiences and actions, Allton explained. Building a trustworthy business rooted in solid ethical practices provides the best input for any AI assessment.

Brand Basics
Businesses that stick to fundamentals, though, shouldn’t have to worry about the new unofficial reputation manager. “Companies need to deliver great products and services and back them up with strong support,” asserted Greg Sterling, co-founder of Near Media, a market research firm in San Francisco.
“Marketing is a separate thing, but their core business and the way they treat their customers need to be very solid and reliable,” he told TechNewsWorld. “Marketing and brand campaigns can then be built on top of that fundamental authenticity and ethical conduct, which will be reflected in AI results.”
“I think people get very confused about what makes a successful business, and they’re focused on tips and tricks and marketing manipulation,” he said. “Great marketing is built on great products and services. Great brands are built by delivering great products and services, being consistent, and treating customers well. That’s the core proposition that everything else flows out of.”
المصدر: https://www.technewsworld.com/story/ai-is-rewriting-the-rules-of-brand-management-179737.html?rss=1AI Is Rewriting the Rules of Brand ManagementAI Is Rewriting the Rules of Brand Management By John P. Mello Jr. May 13, 2025 5:00 AM PT ADVERTISEMENT Build HubSpot Apps, Faster New developer products preview the future of app building on HubSpot, including deeper extensibility, flexible UI, modern prototyping tools, and more. Learn More. Although building trust through a carefully crafted brand message is still important, artificial intelligence may be undermining its traditional influence. “AI isn’t just helping businesses create content or automate tasks; it’s empowering individuals to become instant digital detectives,” Mike Allton, chief storyteller at Agorapulse, a social media management platform for businesses, wrote Monday on LinkedIn. What that means, he explained, is a company’s entire digital history — reviews, articles, social media sentiment, even employee feedback — is now more transparent and instantly “queryable” than ever before. “The carefully crafted brand message? It’s still important, but AI can now cross-reference it with raw, aggregated public data in seconds,” he noted. Edwin Miller, CEO of Marchex, a conversation intelligence platform maker headquartered in Seattle, explained that the rise of large language models and real-time data analytics has effectively turned a company’s full digital footprint into a searchable, easy-to-interpret, and evaluative source of truth. “We’re entering a world where a company’s entire identity, how it treats customers, how it responds to criticism, what employees really think, and how well it delivers on its promises, can be surfaced instantly by AI,” he told TechNewsWorld. “And not just by researchers or journalists, but by consumers, investors, and competitors.” “This means companies no longer control the brand narrative the way they used to,” he said. 
“The narrative is now co-authored by customers, employees, and digital observers, with AI acting as a kind of omnipresent interpreter. That changes the playing field for brand management entirely.” AI Shrinks Trust-Building to Milliseconds Mark N. Vena, president and principal analyst for SmartTech Research in Las Vegas, argued that brand management is a “huge deal” in the AI age. “Brand management is no longer just about campaigns — it’s about constantly monitoring and reacting to a living, breathing digital footprint,” he told TechNewsWorld. “Every customer interaction, review, or leaked internal memo can instantly shape public perception,” he said. “That means brand managers must be part storyteller, part crisis manager, and fully agile. The brand isn’t what you say it is — it’s what the internet says it is.” Allton noted that AI’s capability to “vet” or “audit” is a powerful reminder that, as AI is integrated into businesses, they must also consider how the external AI ecosystem perceives them. “It’s no longer enough to say you’re trustworthy; the data must reflect it because that data is now incredibly accessible and interpretable by AI,” he wrote. “Trust used to be built over years and could be lost in moments,” added Lizi Sprague, co-founder of Songue PR, a public relations agency in San Francisco. “Now, with AI, trust can be verified in milliseconds. Every interaction, review, and employee comment becomes part of your permanent trust score.” She told TechNewsWorld: “AI isn’t replacing reputation managers or comms people; it’s making them more crucial than ever. In an AI-driven world, reputation management evolves from damage control to proactive narrative architecture.” Proactive Transparency Brand managers will also need to be more proactive. They need to pay attention to how their brand is represented in the most popular AI tools. 
“Brands should be conducting searches that test the way their reputation is represented or conveyed in those tools, and they should be paying attention to the sources that are referenced by AI tools,” said Damian Rollison, director of market insights at SOCi, a marketing solutions company in San Diego.

“If a company focuses a lot on local marketing, they should be paying attention to reviews of a business in Google, Yelp, or TripAdvisor — those kinds of sources — all of which are heavily cited by AI,” he told TechNewsWorld. “If they’re not paying attention to those reviews and taking action to respond when consumers offer feedback — apologizing if they had a bad experience, offering some kind of remedy, thanking customers when they give you positive feedback — then they have even more reason than ever to pay attention to those reviews and respond to them now.”

Dev Nag, CEO and founder of QueryPal, a customer support chatbot maker based in San Francisco, explained that an AI-searchable landscape will create persistent accountability.

“Every ethical lapse, broken promise, and controversial statement lives on in digital archives, ready to be surfaced by AI at any moment,” he told TechNewsWorld.

“Companies can leverage this AI-scrutinized environment by embracing proactive transparency,” he said. “Organizations should use AI tools to continuously monitor customer sentiment across vast data streams, gaining early warning of reputation risks and identifying improvement areas before issues escalate into crises.”

New Era of AI-Driven Accountability

Nag recommends conducting regular AI reputation audits, doubling down on authenticity, pursuing strong media coverage in respected outlets, empowering employees as reputation ambassadors, implementing AI monitoring with rapid response protocols, and preparing for AI-driven crises, including misinformation attacks.

Transparency without controls, though, can harm a brand.
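The continuous sentiment monitoring Nag describes can be illustrated with a minimal sketch. This is not any vendor's actual pipeline: it uses a naive keyword heuristic in place of a real sentiment model (a production system would call an LLM or a trained classifier), and all names and the sample reviews are hypothetical. It shows the early-warning idea: score each incoming review and raise an alert when a rolling average turns sharply negative.

```python
from collections import deque
from statistics import mean

# Hypothetical keyword list standing in for a real sentiment model.
NEGATIVE = {"broken", "refund", "scam", "rude", "terrible", "worst"}

def sentiment_score(review: str) -> int:
    """Return -1 if the review contains a negative keyword, else +1."""
    words = set(review.lower().split())
    return -1 if words & NEGATIVE else 1

def alert_on_negative_spike(reviews, window=5, threshold=-0.2):
    """Yield True for each review once the rolling mean dips below threshold."""
    recent = deque(maxlen=window)  # sliding window of recent scores
    for review in reviews:
        recent.append(sentiment_score(review))
        yield mean(recent) < threshold

# Hypothetical sample stream of incoming reviews.
reviews = [
    "Great service, fast shipping",
    "Item arrived broken and support was rude",
    "Terrible experience, want a refund",
    "Worst purchase ever",
]
alerts = list(alert_on_negative_spike(reviews))
# The alert fires partway through the stream, before every review is negative,
# which is the "early warning of reputation risks" behavior Nag recommends.
```

The design choice worth noting is the sliding window: alerting on a rolling average rather than any single review keeps one angry customer from triggering a crisis response while still catching a genuine trend early.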
“Doing reputation management well requires a tight focus on the behavior of those who can affect the appearance of the related firm,” said Rob Enderle, president and principal analyst of the Enderle Group, an advisory services firm in Bend, Ore.

“If more transparency is created without these controls and training in place, coupled with strong execution, monitoring, and a strong crisis team, the outcome is likely to be catastrophic,” he told TechNewsWorld.

“AI is now part of the reputation equation,” added Matthew A. Gilbert, a marketing lecturer at Coastal Carolina University. “It monitors everything, from customer reviews to employee comments,” he told TechNewsWorld. “Brands should treat it as an early warning system and act before issues escalate.”

AI in Branding Demands Action, Not Panic

Allton argued that the rise of AI as a reputation manager isn’t a cause for alarm but a cause for action. It does, however, make some demands on businesses. They include:

Non-Negotiable Radical Authenticity

If there are inconsistencies between what your brand promises and what the public data reflects, AI-powered searches will likely highlight them. Your operations must genuinely align with your messaging.

“Authenticity is no longer a decision made by brands regarding which cards to reveal; instead, it has become an inevitable force driven by the public, as everything will eventually come to light,” said Reilly Newman, founder and brand strategist at Motif Brands, a brand transformation company in Paso Robles, Calif.

“Authenticity is not merely a new initiative for brands,” he told TechNewsWorld. “It is a necessity and an expected element of any company.”

The “AI Response” Is Your New First Impression

For many, the first true understanding of your business might come from an AI-generated summary, Allton noted.
What story is the collective data telling about you?

Kseniya Melnikova, a marketing strategist with Melk PR, a marketing agency in Sioux Falls, S.D., recalled a client who believed their low engagement was due to a lack of clear marketing materials.

“Using AI to analyze their community feedback, we discovered the real issue was that customers misunderstood who they were,” she told TechNewsWorld. “They were perceived as a retailer when, in fact, they were an insurance fulfillment service. With this insight, we produced fewer — but clearer — materials that corrected the misunderstanding and improved customer outcomes.”

Human Values Still Drive the Core Code

While AIs process the data, the data itself reflects human experiences and actions, Allton explained. Building a trustworthy business rooted in solid ethical practices provides the best input for any AI assessment.

Brand Basics

Businesses that stick to fundamentals, though, shouldn’t have to worry about the new unofficial reputation manager.

“Companies need to deliver great products and services and back them up with strong support,” asserted Greg Sterling, co-founder of Near Media, a market research firm in San Francisco.

“Marketing is a separate thing, but their core business and the way they treat their customers need to be very solid and reliable,” he told TechNewsWorld. “Marketing and brand campaigns can then be built on top of that fundamental authenticity and ethical conduct, which will be reflected in AI results.”

“I think people get very confused about what makes a successful business, and they’re focused on tips and tricks and marketing manipulation,” he said. “Great marketing is built on great products and services. Great brands are built by delivering great products and services, being consistent, and treating customers well. That’s the core proposition that everything else flows out of.”

John P. Mello Jr. has been an ECT News Network reporter since 2003.
His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data, and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net, and Government Security News.