-
WWW.DIGITALTRENDS.COM
Withings blood pressure monitor comes with your own cardiologist inside

At CES 2025, Withings has made connected blood pressure monitors a lot easier to use, more reassuring, and more inclusive than ever with the introduction of the BPM Vision. How? In addition to new hardware features, it has the option to add an unusual service that's like having your own private cardiologist on hand, ready to check over your results and warn you about possible heart arrhythmias.

Unlike many blood pressure monitors, the BPM Vision is a friendly and modern-looking device. On the front is a high-resolution color screen that shows your blood pressure results, as well as tutorials on how to use it properly, plus reminders about when to take a reading. Using Withings' AI, it also shows immediate, easy-to-understand feedback and insights. There's an on/off button and three large main buttons next to the screen. The function of each is clearly indicated on the screen, making the device simple to use.

Withings' existing BPM Connect blood pressure monitor has a fairly standard adjustable cuff, but to make the BPM Vision suitable for more people, the company will make it with two different cuff sizes. Purchase the BPM Vision from Withings' website, and you'll have the option of a cuff measuring between nine and 17 inches or between 16 and 20 inches. The BPM Vision connects to Wi-Fi and syncs its data with an app on your phone. It can track data for eight different people, the battery should last for up to six months, and it comes in a handy travel case.

Where is the private cardiologist? If you subscribe to the Withings+ service, the BPM Vision will come with a feature called Cardio Check-Up, where within 24 hours of making a request, you'll be connected to a certified health care professional who will analyze your data and watch for signs of more than 10 different heart arrhythmias. Other benefits of Cardio Check-Up include a quarterly check-up, no-appointment-required reviews, and no need to visit a clinic, as it's all performed through the app. Cardio Check-Up also works with other Withings health products, including the ScanWatch 2 smartwatch.

The BPM Vision is being certified by the Food and Drug Administration (FDA) and is expected to go on sale in the U.S. in April 2025 for $130. The Cardio Check-Up service will be operated by Heartbeat Health in the U.S. and is part of the $100 annual Withings+ subscription.
-
WWW.DIGITALTRENDS.COM
Asus and Gigabyte give us a glimpse of the RX 9000 series

AMD revealed its next-gen RX 9000 series graphics cards yesterday. Well, kind of. The cards were mostly a no-show, with nothing but a promise that we'd hear more soon. However, AMD's partners still showed off some of the upcoming RX 9070 XT and RX 9070 graphics cards during CES 2025, which is why we now know what they're going to look like, but we still know very little about how they'll perform when matched up against some of the best graphics cards.

Despite the lack of specifics during the presentation, Asus announced four RDNA 4 graphics cards with undisclosed release dates. Unfortunately, the only specification we got out of this is that both the RX 9070 and the RX 9070 XT feature 16GB of VRAM, which is a healthy amount that can rival Nvidia's $1,000 RTX 5080.

The lineup covers TUF and Prime models, and the TUF series offers a dual-BIOS switch. Asus is also replacing thermal paste with phase-changing thermal pads for improved heat dissipation, so these GPUs should stay nice and cool under pressure. However, we still don't know how much power they're going to consume, so it's unclear how hot these GPUs will run in the first place. All cards will be factory-overclocked, but the base and maximum clock speeds haven't been announced.

Based on the photos, we can also gather that all four GPUs will come with a triple-fan configuration. The Prime model comes with three 8-pin power connectors, and I'm assuming the TUF variant will use the same configuration. There are also three DisplayPort 2.1 outputs and one HDMI 2.1 output.

Gigabyte also gave the public a glimpse at its upcoming RDNA 4 graphics cards, as shared by TechPowerUp. The company showed off the Radeon RX 9070 XT Elite and the RX 9070 Gaming OC. There's also the Aorus RX 9070 XT Elite, which features a couple of new thermal management solutions, such as a redesigned Hawk fan. Gigabyte also offers dual BIOS modes.

During the actual keynote, AMD barely mentioned RDNA 4 and FSR 4, but both are on the horizon, most likely launching within this quarter. However, prior to the keynote, the company shared a few slides with the press, which is why we now know that RDNA 4 will offer third-gen ray tracing accelerators and that the lineup consists of the RX 9070 and the RX 9060. We also know that AMD is sticking to the mainstream market, as the flagship will only rival the last-gen RX 7900 XT.

It's nice to see the upcoming GPUs in the flesh, and this makes me hopeful that it won't be long before AMD tells us more about the RX 9000 series.
-
WWW.DIGITALTRENDS.COM
Beatbot reveals futuristic AquaSense 2 Series pool cleaners at CES 2025

The original AquaSense Series was wildly popular when it hit the market in early 2024, and at CES 2025, Beatbot officially revealed its successor, the AquaSense 2 Series. Consisting of three robotic pool cleaners and starting at $1,500, the Series 2 models are designed to automate all aspects of pool cleaning. The high-end AquaSense 2 Ultra even incorporates AI technology into the mix, promising a superior clean.

Digital Trends received compensation for considering coverage of these products. The brand had no input on the editorial content and did not influence the coverage.

The AquaSense 2 is the most affordable of the trio at $1,499, yet the three-in-one pool cleaner is still pretty well-rounded. It can clean floors, walls, and the waterline, and can run for up to four hours before needing a recharge. Toss in obstacle detection, four unique cleaning modes, and an array of 16 sensors, and it's well-suited for most pools.

Step up to the AquaSense 2 Pro at $2,499, and you'll get a handful of additional features. These include the ability to clean the water surface and carry out water clarification tasks. It even benefits from six additional sensors and manual remote navigation, so you can help it navigate any tricky sections of your pool.

The most impressive robot in the lineup is the AquaSense 2 Ultra, though it carries the premium price of $3,450. That makes it one of the most expensive robotic pool cleaners on the market. However, it backs up that eye-watering price tag with plenty of features you won't find anywhere else. Its coolest feature is HybridSense AI Pool Mapping, which uses an array of 27 sensors to clean the surface, waterline, floor, and walls, as well as perform water clarification tasks. Unlike the other two robots, it also features two side brushes for an improved clean.

All three are designed to work with above-ground and in-ground pools and cover 300 square meters. They can also handle pools of any shape, ensuring they're a good fit for nearly all shoppers. The AquaSense 2 Ultra is the most exciting of the trio, as its AI capabilities should make it a serious upgrade over the existing AquaSense Pro. Compared to the old AquaSense Pro, it gets you two additional motors and seven additional sensors, along with AI support, and it's poised to make a splash when it hits the market on February 10.

If you can't wait until February, check out the best robotic pool cleaners for a list of alternatives that are available right now.
-
WWW.WSJ.COM
Meta Ends Fact-Checking on Facebook, Instagram in Free-Speech Pitch

CEO Mark Zuckerberg, who has been building ties with the incoming Trump administration, said the move was an attempt to restore free expression on its platforms.
-
WWW.WSJ.COM
U.K. Competition Watchdog Prepares to Investigate Tech Giants Under New Rulebook

The regulator said it would open investigations into at least three types of tech platforms at the start of this year to work out which tech giants need to obey a new law governing the digital economy.
-
ARSTECHNICA.COM
Widely used DNA sequencer still doesn't enforce Secure Boot
A firmware-dwelling bootkit in the iSeq 100 could be a key win for threat actors.
Dan Goodin, Jan 7, 2025

[Image: A woman placing her finger on the touch screen of the iSeq 100 gene sequencer from Illumina. Credit: Illumina]

In 2012, an industry-wide coalition of hardware and software makers adopted Secure Boot to protect Windows devices against the threat of malware that could infect the BIOS and, later, its successor, the UEFI, the firmware that loads the operating system each time a computer boots up.

Firmware-dwelling malware raises the specter of infections that take hold before the operating system even loads, each time a device boots up. From there, the malware can remain immune to detection and removal. Secure Boot uses public-key cryptography to block the loading of any code that isn't signed with a pre-approved digital signature.

2018 calling for its BIOS

Since 2016, Microsoft has required all Windows devices to include a strong trusted platform module that enforces Secure Boot. To this day, organizations widely regard Secure Boot as an important, if not essential, foundation of trust for securing devices in some of the most critical environments.

Microsoft has a much harder time requiring Secure Boot to be enforced on specialized devices, such as scientific instruments used inside research labs. As a result, gear used in some of the world's most sensitive environments still doesn't enforce it. On Tuesday, researchers from firmware security firm Eclypsium called out one of them: the Illumina iSeq 100, a DNA sequencer that's a staple at 23andMe and thousands of other gene-sequencing laboratories around the world.

The iSeq 100 can boot from a Compatibility Support Mode so it works with older legacy systems, such as 32-bit OSes. When this is the case, the iSeq loads from BIOS B480AM12, a version that dates to 2018, and Windows 10 2016 LTSB. Both harbor years' worth of critical vulnerabilities that can be exploited to carry out the types of firmware attacks Secure Boot was designed to prevent.

Additionally, Eclypsium said, firmware Read/Write protections aren't enabled, meaning an attacker is free to modify the firmware on the device.

Eclypsium wrote:

"It should be noted that our analysis was limited specifically to the iSeq 100 sequencer device. However, the issue is likely much more broad than this single model of device. Medical device manufacturers tend to focus on their unique area of expertise (e.g. gene sequencing) and rely on outside suppliers and services to build the underlying computing infrastructure of the device. In this case, the problems were tied to an OEM motherboard made by IEI Integration Corp. IEI develops a wide range of industrial computer products and maintains a dedicated line of business as an ODM for medical devices. As a result, it would be highly likely that these or similar issues could be found either in other medical or industrial devices that use IEI motherboards.
"This is a perfect example of how mistakes early in the supply chain can have far-reaching impacts across many types of devices and vendors."

In an email, Eclypsium CTO Alex Bazhaniuk wrote: "To be fair, with an OS that does not get the most recent security updates, there are plenty of risks and threats, not to mention how each IT organization manages their own assets on their network."

He added: "Although we don't have additional examples in the land of DNA sequencers, it is highly likely that Secure Boot is disabled on devices besides this one from Illumina. Many medical devices are built on off-the-shelf servers and older configurations which may not have Secure Boot enabled or are running outdated firmware, as in many cases it is very hard or impossible to update."

Illumina representatives thanked Eclypsium for the research and said that the iSeq 100 follows best security practices. "We are following our standard processes and will notify impacted customers if any mitigations are required," they wrote. "Our initial evaluation indicates these issues are not high-risk."

When Secure Boot was first dreamed up, the threat of a BIOS-based rootkit was a theoretical risk based on plausible proofs of concept such as the ICLord BIOS kit from 2007. In 2011, such threats became a reality with the discovery of Mebromi, the first known BIOS rootkit to be used in the wild. Real-world instances of other malware targeting the UEFI since then include the LoJax and MosaicRegressor firmware implants.

The ability to create similar infections on one of the most widely used gene sequencers could be a golden opportunity for threat actors. Ransomware groups could use one to take out all devices in a given network. Researchers have also shown how malware can cause sequencers to report false relations between arbitrary users on GEDmatch.
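For readers less familiar with the mechanism at issue: Secure Boot is essentially a gate that refuses to hand control to any boot component that is not on a pre-approved list. The Python sketch below is a deliberately simplified, hypothetical illustration of that gate, using a SHA-256 allow-list in place of the signature database real UEFI firmware keeps; the file name and digest are invented for the example and are not taken from Illumina's device.

```python
import hashlib
from pathlib import Path

# Hypothetical allow-list standing in for the firmware's "db" of trusted entries.
# Real Secure Boot verifies digital signatures against enrolled certificates;
# a flat digest allow-list keeps the core idea visible without the PKI machinery.
TRUSTED_SHA256 = {
    "d2c4e5f6a7b8091a2b3c4d5e6f708192a3b4c5d6e7f8091a2b3c4d5e6f708192",  # example value
}

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a boot component on disk."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_before_load(path: Path) -> bool:
    """Refuse to hand control to any component whose digest is not pre-approved."""
    if not path.exists():
        print(f"{path.name}: missing, nothing to load")
        return False
    allowed = sha256_of(path) in TRUSTED_SHA256
    print(f"{path.name}: {'allowed' if allowed else 'BLOCKED: untrusted code'}")
    return allowed

if __name__ == "__main__":
    # With Secure Boot enforced, a tampered bootloader fails a check like this
    # and never runs. On a legacy/CSM boot path such as the iSeq 100's, no
    # equivalent check happens at all.
    verify_before_load(Path("bootx64.efi"))
```

In real firmware, the check is a signature verification against the PK/KEK/db key hierarchy and runs inside the firmware itself, before the OS loader ever executes; the article's point is that on the iSeq 100's legacy boot path this gate is absent, and with read/write protections disabled, the firmware holding the gate could itself be rewritten.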
-
WWW.INFORMATIONWEEK.COM
Y2K and Infrastructure Resilience 25 Years Later

What have we learned about maintaining IT infrastructure and cybersecurity one generation after putting in the work to correct the Y2K bug?
-
WWW.INFORMATIONWEEK.COM
Why Most Agentic Architectures Will Fail
Lisa Morgan, Freelance Writer | January 7, 2025 | 9 Min Read
[Image: Valerii Egorov via Alamy Stock]

Agentic artificial intelligence is expected to have a major impact because it can execute complex tasks autonomously. For now, the hype is outstripping successful implementations, and there are a lot of reasons for that.

"In 2024, AI agents have become a marketing buzzword for many vendors. However, for user organizations, agents have been an area of early curiosity and experimentation, with actual implementations being few and far between," says Leslie Joseph, principal analyst at Forrester. "We expect this to change in 2025 as the technology and the ecosystem mature. However, our prediction offers a cautionary note."

Joseph says organizations attempting to build AI agents are failing for three main reasons: a poorly scoped vision for agentic workflows, a poor technical solution, and a lack of focus on change management.

"A poorly scoped vision for agentic workflows results in either a too broad or too narrow bounding box for agent functionality," says Joseph. "Too narrow a scope may render the problem as solvable by a deterministic workflow, while too broad a problem might introduce too much variability. Agent builders should ask themselves how best to define the business problem they are trying to solve, and where an AI agent fits into this scope."

Second, it's early days. Agents are still very early-stage applications, and the ecosystem, including agentic tooling, is less evolved than one might expect.

"While many vendors message around the ease-of-use and drag-and-drop nature of their agent builder platforms, the fact is that there is still a lot of engineering needed under the hood to deliver a robust enterprise solution, which requires strong technical skills," says Joseph.

Finally, a lack of focus on change management isn't helping. Organizations need to understand how the agentic workflow fits into or enhances existing processes and be proactive about managing change.

"The invention of LLMs was like the discovery of the brick," says Joseph. "With agents, we are now figuring out how to put these bricks together to construct homes and cities and skyscrapers. Every enterprise will need to identify what their desired level of autonomy is, and how to build towards that using AI agents."

He expects the short-term benefits to be process improvement and productivity, but over the longer term, enterprises should be ready for agents to create disruptions across the tech stack. For now, companies should embrace AI agents and agentic workflows, given the technology's disruptive potential.

"Start investing in experiments and allocating budgets towards proofs-of-concept. Ensure that your teams learn along the way rather than outsourcing everything to an ISV or tech vendor, because these learnings will be crucial down the road," says Joseph.

Multi-Agent Workflows Are Challenging

When establishing a multi-agent workflow, there are three primary challenges businesses face, according to Murali Swaminathan, CTO at software company Freshworks. First, it is incredibly difficult to make workflows predictable in a world that is unstructured and conversational. Second, even complex reasoning in workflows can be prescriptive and hard to achieve reliably.
Third, continuous evaluation of these workflows is necessary to measure, and ultimately realize, efficacy.

"[E]nterprises must establish clear approaches on what workflows or problems they want the agentic systems to solve," says Swaminathan. "Additionally, it's critical that they develop a clear plan on how they will gauge success. This approach will ensure that expectations are measured, and that a strategy of progress over perfection is employed."

Over the short term, enterprises will most likely achieve task-based goals related to the employee and agent. Over the long term, business benefits should follow, along with insights about what the business should and should not do.

"[C]reate a clear game plan on how to implement, utilize, and measure the success of agentic architectures," says Swaminathan. "Failing to plan is planning to fail."

Insufficient Infrastructure and Data Governance

When it comes to agentic architectures, infrastructure and data governance matter greatly.

"Without the right infrastructure and data governance in place, agentic architectures struggle to handle the complexity, scale, and interoperability needed for successful implementation," says Doug Gilbert, CIO and chief digital officer at experience-driven digital transformation partner Sutherland Global. "Companies should focus on building a strong digital core that can handle the high demands of AI, from data processing to seamless integration with hybrid or multi-cloud environments. This not only allows organizations to scale AI capabilities efficiently but also ensures the flexibility to adapt as systems evolve."

Equally important is a well-defined data strategy. Whether leveraging a hybrid, private, or multi-cloud approach, secure and accessible data is essential for building robust AI solutions, ensuring compliance and security across the board.

Interconnectivity Matters

Interacting with other systems designed for humans is much harder for agentic AI than it seems.

"Making RPA [Robotic Process Automation] nearly 100% reliable took 12-plus years. And that's carefully hard-coded to interact with human-operated systems across the web and Windows. So, we see these people suggesting that they can get an LLM to do the same, and it turns out [to be] quite unreliable," says Kevin Surace, chairman and CTO at autonomous testing platform Appvance. "People will be disappointed when the agent thinks it did everything right, but you later find that payment never went out."

Despite the fact that humans don't get everything right, people expect agentic AI outcomes to be 100% accurate. As an accuracy benchmark, Surace suggests setting the goal as high as RPA or well-trained humans.

"Anyone can demo a simple action a few times," says Surace. "But doing complex tasks with variability a thousand times without failure -- then you have a product people want."

Orchestration Can Be Tricky

Orchestration involves end-to-end harmonization of outputs from multiple agents, delivering a unified and comprehensive resolution to the user's query.

"A key feature of the agentic AI architecture is its capability to organize agents logically by functional domains such as IT, HR, engineering, and more. This structured approach empowers enterprises to deploy specialized agents tailored to the unique requirements of each department," says Abhi Maheshwari, CEO of agentic AI provider Aisera.
By categorizing agents based on their functional areas, organizations can optimize workflows, improve task precision, and ensure that each agent operates within its area of expertise for maximum effectiveness. Otherwise, it may be tempting to over-rely on generic models when domain-specific expertise is necessary for handling complex tasks.

"Enterprises should adopt a structured approach to agentic architectures by starting with logical domain separation to address specific departmental needs," says Maheshwari. "Then there needs to be integration with existing systems. If not, there is not much value with agentic AI. After all, this technology is about automated processes and tasks."

Tom Taulli, author of Building Generative AI Agents: Using LangChain, LangGraph, and AutoGen, says that agents struggle to handle novel situations or inputs outside their training data.

"Failures can also arise from misaligned goals or insufficient oversight," says Taulli. "Overly autonomous systems without proper guardrails might make decisions that conflict with user intentions, ethical standards, or operational objectives."

Ultimately, enterprises need highly qualified data scientists because agentic AI is complicated, and the field is constantly evolving.

In the short term, Taulli expects agentic AI to replace RPA, since both automate tedious and repetitive processes. However, RPA is highly constrained, which means that if a process changes materially, the bot can break.

"[Agentic AI] should be able to adapt and evolve, without a lot of human intervention and programming or scripting. This makes the automations more maintainable and scalable. In the long term, I think AI agents could start replacing a large part of what some employees do. This is more about the world when AGI starts to emerge. That is, a super-intelligent system will be able to act like a human and have its own agency."

Governance Is Key

Data quality and quantity are crucial for training agentic AI models, though biases in the data can lead to biased and unfair outcomes, while ethical considerations and regulatory challenges surrounding AI development and deployment can hinder progress and lead to unintended consequences.

"It's essential to establish robust governance frameworks to ensure ethical AI development and deployment," says Matthew Hawkins, chief technology officer at healthcare AI solution provider CaryHealth. "Additionally, collaboration with domain experts is crucial to align AI solutions with real-world needs."

Monitoring Is Non-Optional

The only way to tell whether an AI model is working properly is to monitor it continuously. Otherwise, companies run the risk of using models that have drifted, for example.

"When we look at AI agents as a system, the lack of human supervision can create devastating failure cascades throughout the entire network of agents," says Daniel Clydesdale-Cotter, CIO at technology services provider EchoStor. "Domain-specific LLMs won't be good enough to replicate business process workflows without individual agents being thoroughly tested and optimized to remove hallucination behavior. The black-box nature of LLMs adds another layer of complexity, as audit and compliance of operations within each agent can be very difficult to provide."

Organizations must focus on training and culture change to promote responsible use of generative AI for baseline processes.

"It's crucial to experiment, use, and test with existing workflows before integrating agents, always maintaining human oversight," says Clydesdale-Cotter.
Organizations must also monitor their AI environment closely, staying aware of outputs and behavior within the system. Success depends on aligning data, goals, and objectives with the usage patterns of the environment.

"Success starts with human oversight and a defined MLOps plan," he says. Organizations should partner with companies that are building agents specifically for their domain requirements. However, they must also pay thorough attention to workflows to determine application and software integration viability.

"Enterprises should approach AI with a best-fit mindset, understanding that not all processes must be AI-augmented or automated," says Clydesdale-Cotter. "Being use-case specific helps avoid scope creep and maintains focus on the features you're trying to extract. We'll see continued process improvement through human oversight of macro interactions between AI agents and unsupervised optimizations of micro-processes within AI agents themselves."

Many Organizations Just Aren't Ready

AI has become a strategic priority in many organizations, but business leaders aren't sure where to apply AI to solve day-to-day business problems and implement use cases at enterprise scale.

"Under the hood, the challenge is that, even though business objectives, activities, and metrics are deeply interwoven, the software systems used by disparate teams are not, and this creates problems," says Babak Hodjat, CTO of AI at multinational information technology services and consulting company Cognizant Technology Solutions. "This is a big reason why we've seen most AI use cases to date limited to prediction-based outcomes or single LLM chat-based solutions."

Organizing overall technology and AI strategies around the core tenet of multi-agent systems and decision-making will best enable enterprises to succeed, he says.

"LLMs are very good at specialized tasks, but embracing multi-agent architectures is what will truly reshape industries, as agents gain the ability to communicate with each other," says Hodjat. "The future will be about companies having agents in their devices and applications that can address needs and interact with other agents. These agents will work across entire businesses to assist humans in every role, from HR and finance to marketing and sales."
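The orchestration and monitoring points above are easier to picture with a small concrete sketch. The Python example below is a minimal, hypothetical illustration of domain-based routing of the kind Maheshwari describes, with a simple audit log of the sort Clydesdale-Cotter argues for; the domain names, keyword rules, and handler functions are invented for the example and stand in for real LLM-backed agents.

```python
from dataclasses import dataclass, field
from typing import Callable

# Stand-in "agents": in a real deployment each handler would call an
# LLM-backed agent scoped to one functional domain (IT, HR, finance, ...).
def it_agent(query: str) -> str:
    return f"[IT agent] opening a ticket for: {query}"

def hr_agent(query: str) -> str:
    return f"[HR agent] checking policy for: {query}"

@dataclass
class Orchestrator:
    """Routes each query to a domain agent and records outcomes for review."""
    routes: dict[str, Callable[[str], str]]
    keywords: dict[str, str]                     # keyword -> domain
    audit_log: list[dict] = field(default_factory=list)

    def route(self, query: str) -> str:
        q = query.lower()
        domain = next((d for kw, d in self.keywords.items() if kw in q), None)
        if domain is None:
            # Escalate rather than guess: keep a human in the loop for anything unrouted.
            self.audit_log.append({"query": query, "domain": None, "status": "escalated"})
            return "No matching domain agent; escalating to a human reviewer."
        answer = self.routes[domain](query)
        self.audit_log.append({"query": query, "domain": domain, "status": "handled"})
        return answer

    def handled_rate(self) -> float:
        """Fraction of queries handled without escalation: one crude efficacy metric."""
        if not self.audit_log:
            return 0.0
        return sum(e["status"] == "handled" for e in self.audit_log) / len(self.audit_log)

if __name__ == "__main__":
    orch = Orchestrator(
        routes={"it": it_agent, "hr": hr_agent},
        keywords={"laptop": "it", "vpn": "it", "leave": "hr", "payroll": "hr"},
    )
    for q in ["My VPN keeps dropping", "How much annual leave do I have?", "Reimburse my travel"]:
        print(orch.route(q))
    print(f"Handled without escalation: {orch.handled_rate():.0%}")
```

In production, the keyword match would be replaced by an intent classifier or an LLM router, and the audit log would feed the kind of continuous evaluation the article calls for, but the shape of the loop stays the same: scope the domain, route, record, review.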
-
WWW.NEWSCIENTIST.COM
AI helps radiologists spot breast cancer in real-world tests
[Image: Radiologists can benefit from AI assistance. Amelie Benoist/BSIP/Universal Images Group via Getty]

Artificial intelligence models really can help spot cancer and reduce doctors' workload, according to the largest study of its kind. Radiologists who chose to use AI were able to identify an extra 1 in 1000 cases of breast cancer.

Alexander Katalinic at the University of Lübeck, Germany, and his colleagues worked with almost 200 certified radiologists to test an AI trained to identify signs of breast cancer from mammograms. The radiologists examined 461,818 women across 12 breast cancer screening sites in Germany between July 2021 and February 2023, and for each person could choose whether or not to use AI. This resulted in 260,739 women being checked by AI plus a radiologist, with the remaining 201,079 checked by a radiologist alone.

Those who elected to use AI detected breast cancer at a rate of 6.7 instances in every 1000 scans, 17.6 per cent higher than the 5.7 per 1000 scans among those who chose not to use AI. Similarly, when women underwent biopsies following a suspected diagnosis of cancer, 64.5 per cent of the biopsies in the AI group found cancerous cells, compared with 59.2 per cent in the group where AI wasn't used.

"The scale at which AI improved detection of breast cancer was extremely positive and exceeded our expectations," said Katalinic in a statement. "We can now demonstrate that AI significantly improves the cancer detection rate in screening for breast cancer."

"The goal was to show non-inferiority," says Stefan Bunk at Vara, an AI company also involved in the study. "If we can show AI is not inferior to radiologists, that's an interesting scenario to save some workload. We were surprised we were able to show superiority."

Over-reliance on AI in medicine has worried some because of the risk it could miss some signs of a condition, or could lead to a two-track system of treatment where those who can pay are afforded the luxury of human interaction. There was some evidence that radiologists spent less time examining scans that AI had already suggested were normal, meaning cancer wasn't likely to be present, reviewing them for an average of 16 seconds, compared with 30 seconds for those that the AI couldn't classify. But these latest findings have been welcomed by those specialising in the safe deployment of AI in medicine.

"The study offers further evidence for the benefits of AI in breast screening and should be yet another wake-up call for policymakers to accelerate AI adoption," says Ben Glocker at Imperial College London. "Its results confirm what we have been seeing again and again: with the right integration strategy, the use of AI is both safe and effective."

He welcomes the way the study allowed radiologists to make their own decisions about when to use AI, and would like to see more tests of AI performed in a similar way. "We cannot easily assess this in the lab or via simulations and instead need to learn from real-world experience," says Glocker. "The technology is ready; we now need the policies to follow."

Journal reference: Nature Medicine, DOI: 10.1038/s41591-024-03408-6
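As a quick sanity check on the figures reported above, the relative improvement can be reproduced from the two per-1000 detection rates; the short Python snippet below does that arithmetic (the small gap to the quoted 17.6 per cent comes from rounding the rates to one decimal place, with the study presumably working from unrounded counts).

```python
# Detection rates reported in the study, per 1000 screening scans.
rate_with_ai = 6.7
rate_without_ai = 5.7

# Absolute difference: the "extra 1 in 1000 cases" the article mentions.
extra_per_1000 = rate_with_ai - rate_without_ai
print(f"Extra cancers found per 1000 scans: {extra_per_1000:.1f}")

# Relative improvement: ~17.5% from the rounded rates, reported as 17.6% in the study.
relative_gain = (rate_with_ai - rate_without_ai) / rate_without_ai
print(f"Relative improvement: {relative_gain:.1%}")
```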