• UK government sanctions target Russian cyber crime network Zservers
    www.computerweekly.com
The UK government has sanctioned Russian entity Zservers, as well as six individual members of the cyber group and its UK representative, XHOST. In a Foreign, Commonwealth and Development Office statement under the names of foreign secretary David Lammy and minister of state for security Dan Jarvis, the government said Zservers provides vital infrastructure for cyber criminals as they plan and execute attacks against the UK. The government characterises it as a component in a supply chain that supports and conceals the operations of ransomware gangs. Ransomware exponents rely on these services, it is said, to launch attacks, extort victims and store stolen data. "Putin has built a corrupt mafia state driven by greed and ruthlessness," said Lammy. "It is no surprise that the most unscrupulous extortionists and cyber criminals run rampant from within his borders. This government will continue to work with partners to constrain the Kremlin and the impact of Russia's lawless cyber underworld," he said. "We must counter their actions at every opportunity to safeguard the UK's national security and deliver on our plan for change." The plan for change involves building 1.5 million homes in England, fast-tracking planning decisions on 150-plus major economic infrastructure projects, as well as attaining an NHS standard of 92% of patients in England waiting no longer than 18 weeks for elective treatment. As for Zservers, the government said the group advertises itself as a bulletproof hosting (BPH) provider.
Read more about Russian cyber attacks on the UK and responses: UK imposes sanctions on Conti ransomware gang leaders; NCSC exposes Russian cyber attacks on UK political processes; NCA-led Operation Destabilise disrupts Russian crime networks that funded the drugs and firearms trade in the UK, helped Russian oligarchs duck sanctions, and laundered money stolen from the NHS and others by ransomware gangs.
BPH providers like Zservers, said the government, protect and enable cyber criminals, offering a range of purchasable tools which mask their locations, identities and activities. Targeting these providers can disrupt hundreds or thousands of criminals simultaneously. The UK is working alongside the US and Australia in this effort. The government cited sanctions against ransomware groups LockBit and Evil Corp as part of an ongoing campaign, which includes the National Crime Agency's (NCA) identification of Aleksandr Ryzhenkov, a prominent member of the Evil Corp cyber crime collective who also worked as an affiliate of the LockBit ransomware gang. LockBit affiliates are known, said the government, to have used Zservers as a launch pad for targeting the UK, enabling ransomware attacks against various targets, including the non-profit sector. "Ransomware attacks by Russian-affiliated cyber crime gangs are some of the most harmful cyber threats we face today, and the government is tackling them head on," said Jarvis. "Denying cyber criminals the tools of their trade weakens their capacity to do serious harm to the UK. We have already announced new world-first proposals to deter ransomware attacks and destroy their business model."
"With these targeted sanctions and the full weight of our law enforcement, we are countering the threats we face to protect our national security." The list of those sanctioned is: Zservers, XHOST Internet Solutions LP, Aleksandr Bolshakov (employee), Aleksandr Mishin (employee), Ilya Sidorov (employee), Dmitriy Bolshakov (employee), Igor Odintsov (employee) and Vladimir Ananev (employee). Since Russia's attack on Ukraine was launched three years ago, western countries have applied economic sanctions against Russia and Russian individuals, with limited impact, including the unintended consequence of enabling Chinese spies to penetrate Russian defence research institutes in a campaign dubbed Twisted Panda. The Economist published a podcast in 2024 evidencing a consensus that the Russian economy has proved shockingly resilient to western sanctions, thanks largely to non-Nato countries giving succour to Russia. Historically, only the prevention of the so-called war of the stray dog between Greece and Bulgaria in 1925 can be chalked up to sanctions, even if they have some limited value. Meanwhile, the Google Threat Intelligence Group has recently published a report detailing a systematic and growing convergence of cyber criminality with cyber warfare, mainly based in Russia and China.
  • I tested 10 AI content detectors - and these 3 correctly identified AI text every time
    www.zdnet.com
    diyun Zhu/Getty ImagesWhen I first examined whether it's possible to fight back against AI-generated plagiarism, and how that approach might work, it was January 2023, just a few months into the world's exploding awareness of generative AI.This is an updated version of that original January 2023 article. When I first tested GPT detectors, I used three: the GPT-2 Output Detector(this is a different URL than we published before), Writer.com AI Content Detector, and Content at Scale AI Content Detection(which is now called BrandWell).Also:How to use ChatGPT: Everything you need to knowThe best result was 66% correct from the GPT-2 Output Detector. I did another test in October 2023 and added three more: GPTZero, ZeroGPT (yes, they're different), and Writefull's GPT Detector. Then, in the summer of 2024, I addedQuillBot and a commercial service, Originality.ai, to the mix. This time, I'll addGrammarly's beta checkerand a detector from Undetectable.ai.In October 2023, I removed the Writer.com AI Content Detector from our test suite because it failed back in January 2023, it failed again in October, and it failed in summer 2024. However, it now appears to work, so I'm including it in the test suite. See below for a comment from the company, which their team sent me after the original article was published in January.Also: 88% of workers would use AI to overcome task paralysis, Google study saysI've re-run all the tests to see how the detectors perform today. While I had two strong successes, the big takeaway seems to be that the results are inconsistent from one AI checker to another.What I'm testing for and how I'm doing itBefore I go on, though, we should discuss plagiarism and how it relates to our problem. Merriam-Webster defines "plagiarize" as "to steal and pass off (the ideas or words of another) as one's own; use (another's production) without crediting the source." This definition fits AI-created content well. While someone using an AI tool like Notion AI or ChatGPT isn't stealing content, if that person doesn't credit the words as coming from an AI and claims them as their own, it still meets the dictionary definition of plagiarism.Also:The best AI image generators to tryIn this experimental article, I've asked ChatGPT to help out. My words are in normal and bold text. The AI's words are italicized. After each AI-generated section, I'll show the results of the detectors. At the end of the article, we'll look at how well the detectors performed overall. Here are the test results for the above text, which I wrote myself: GPT-2 Output Detector:99.98% realWriter.com: 95% human-generated contentBrandWell AI Content Detection:Passes as humanGPTZero:98% humanZeroGPT:22% AI GPT Your Text is Most Likely Human writtenWritefull GPT Detector:1% likely this comes from GPT-3, GPT-4 or ChatGPTOriginality.ai:We are 99% confident that text is originalQuillBot: 42% of text is likely AI-generatedGrammarly: We didn't detect common AI text patterns. Your document doesn't appear to contain AI-generated textUndetectable: 100% humanHuman-written content: 8-of-10 (80%) correctLast time: 5-of-7 (71%) correctBefore we move on, let's discuss what I'll consider a "correct" analysis. If the score is numeric, anything above 80% will be counted as a correct analysis, and that's being generous. If the score has a summary, I'll look at the summary and make a judgment. With that, let's dig in some more. 
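To make the judgment rule above concrete, here is a minimal sketch in Python of how a numeric detector score can be mapped to a correct or failed call; the detector names and scores are illustrative placeholders, not the actual tool outputs.

```python
# Minimal sketch of the scoring rule described above (illustrative values only).
# A numeric score counts as "correct" when the detector assigns more than 80%
# to the right class: human-likelihood for human text, AI-likelihood for AI text.

def is_correct(human_likelihood_pct: float, text_is_human: bool) -> bool:
    """Return True if the detector's numeric score matches the ground truth."""
    if text_is_human:
        return human_likelihood_pct > 80.0          # e.g. "99.98% real" -> correct
    return (100.0 - human_likelihood_pct) > 80.0    # the AI score must exceed 80%

# Hypothetical results for one human-written sample:
scores = {"Detector A": 99.98, "Detector B": 42.0, "Detector C": 78.0}
correct = sum(is_correct(pct, text_is_human=True) for pct in scores.values())
print(f"{correct}-of-{len(scores)} ({100 * correct // len(scores)}%) correct")
```

Summary verdicts ("Passes as human", "Hard to tell") don't fit a numeric rule like this, which is why those results get a judgment call instead.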
Explain why ChatGPT can be a real problem for teachers and editors when it comes to plagiarismChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. It is designed to generate human-like text by predicting the next word or phrase in a given input sequence. While ChatGPT can be a useful tool for generating text and responding to prompts, it can also be a problem for teachers and editors when it comes to plagiarism.Also:4 ways to overcome your biggest worries about generative AIOne of the main reasons why ChatGPT can be a problem for teachers and editors is that it can generate text that is very similar to that of real human writers. This means that students or other individuals could potentially use ChatGPT to generate text that they could then submit as their own work, thereby committing plagiarism.For teachers and editors, detecting whether a given piece of text was generated by ChatGPT or written by a human can be very difficult, as the generated text may not contain any obvious errors or discrepancies that would reveal its artificial origin. This can make it difficult for teachers and editors to identify and prevent plagiarism, as they may not be able to tell whether a given piece of text is original or copied from another source.Overall, ChatGPT and other similar language models can be a problem for teachers and editors when it comes to plagiarism because they can generate text that is difficult to distinguish from that written by humans, making it easier for students and other individuals to commit plagiarism without being detected.GPT-2 Output Detector:99.69% realWriter.com:77% human-generated contentBrandWell AI Content Detection:Hard to tellGPTZero: 100% AIZeroGPT:100% AIWritefull GPT Detector:12% likely this comes from GPT-3, GPT-4 or ChatGPTOriginality.ai:100% AIQuillBot:83% likely AIGrammarly:We didn't detect common AI text patterns. Your document doesn't appear to contain AI-generated textUndetectable: 100% AIAI-written content: 5-of-10 (50%) correct Last time: 4-of-7 (57%) correctRemember, text shown in italics was generated by the AI. Show me at least three ways that I can detect ChatGPT was used to produce a piece of textThere are several ways that you can potentially detect whether ChatGPT or another language model was used to produce a piece of text. Here are three potential approaches you could use:Look for repetitive or unusual patterns in the text: One way to detect whether a language model was used to generate a piece of text is to look for repetitive or unusual patterns in the writing. For example, language models may repeat certain phrases or word combinations, or use unusual or inconsistent sentence structures.Check for lack of originality: Another way to detect whether a language model was used to generate a piece of text is to check for a lack of originality in the writing. If a piece of text appears to be copied from another source, or if it contains information that is widely available on the internet or in other sources, it could be a sign that it was generated by a language model.Use a plagiarism checker: Finally, you can use a plagiarism checker to check whether a piece of text appears to be copied from another source. 
While a plagiarism checker may not be able to definitively determine whether a language model was used to generate a piece of text, it can help you identify instances of copying or paraphrasing from other sources.Note when the answer is obviously wrong: I talked about this in my Notion AI review, where I spotlighted some text the AI wrote with seeming authority but which was blatantly wrong. If a student or writer doesn't have full command of the topic being discussed, they may not spot the inaccuracies and leave them in the AI-generated response. Of course, incorrect statements aren't just the purview of an AI writing tool. Many bloggers and commenters, for example, repeat statements made by other bloggers, creating an echo chamber of completely "false facts", showing that while intelligence may be artificial, stupidity is a universal trait.Keep in mind that it can be difficult to definitively determine whether a language model was used to generate a piece of text, and these methods may not always be effective. However, they can be useful tools to help you identify potential instances of language model-generated text.Also:This Google AI tool could be your new favorite study aid - and it's freeIn this set of tests, only the italicized content above was run through the checker suite.GPT-2 Output Detector:99.98% fakeWriter.com:73% human-generated contentBrandWell AI Content Detection:Passes as humanGPTZero: 100% AIZeroGPT:96.99% AI-generatedWritefull GPT Detector:79% likely this comes from GPT-3, GPT-4 or ChatGPTOriginality.ai:100% AIQuillBot:100% AIGrammarly:We didn't detect common AI text patterns. Your document doesn't appear to contain AI-generated textUndetectable: 100% AIAI-written content: 6-of-10 (60%) correctLast time: 5-of-7 (71%) correctList three plagiarism checkers that can detect ChatGPT textIt is worth noting that there is currently no plagiarism checker that is specifically designed to detect text generated by ChatGPT or other language models. While some plagiarism checkers may be able to identify certain characteristics of language model-generated text that could indicate its artificial origin, there is no guarantee that they will be able to definitively determine whether a piece of text was generated by a language model.With that said, here are three plagiarism checkers that you might consider using to help identify potential instances of language model-generated text:Turnitin: Turnitin is a popular plagiarism checker that is widely used by schools and universities. It compares a piece of text to a database of other texts to identify instances of copying or paraphrasing. While it may not be able to definitively determine whether a piece of text was generated by a language model, it could potentially identify certain characteristics of language model-generated text that could indicate its artificial origin.PlagScan: PlagScan is another plagiarism checker that compares a piece of text to a database of other texts to identify instances of copying or paraphrasing. Like Turnitin, it may be able to identify certain characteristics of language model-generated text that could indicate its artificial origin, but it is not specifically designed to detect language model-generated text.Copyleaks: Copyleaks is a plagiarism checker that uses advanced machine learning algorithms to scan a piece of text and compare it to a database of other texts to identify instances of copying or paraphrasing. 
While it may be able to identify certain characteristics of language model-generated text, it is not specifically designed to detect language model-generated text. It is worth noting that no plagiarism checker is completely foolproof, and there is always the possibility that a piece of language model-generated text could escape detection. Therefore, it is important to use plagiarism checkers as just one part of a larger strategy to detect and prevent plagiarism.
GPT-2 Output Detector: 99.58% real. Writer.com: 74% human-generated content. BrandWell AI Content Detection: Passes as human. GPTZero: 100% AI. ZeroGPT: 100% AI. Writefull GPT Detector: 87% likely this comes from GPT-3, GPT-4 or ChatGPT. Originality.ai: 100% AI. QuillBot: 100% AI-generated. Grammarly: No plagiarism or AI text detected. Undetectable: 100% AI.
AI-written content: 6-of-10 (60%) correct. Last time: 5-of-7 (71%) correct.
Online AI plagiarism checkers
Most plagiarism detectors are used to compare writing against a corpus of other writing. For example, when a student turns in an essay, a product like Turnitin scans the submitted essay against a huge library of essays in its database, and other documents and text on the internet, to determine if the submitted essay contains already-written content. However, the AI-writing tools generate original content, at least in theory. Yes, they build their content from whatever they've been trained on, but the words they construct are somewhat unique for each composition. Also: OpenAI pulls its own AI detection tool because it was performing so poorly. As such, the plagiarism checkers mentioned above probably won't work because the AI-generated content probably didn't exist in, say, another student's paper. In this article, we're just looking at GPT detectors. But plagiarism is a big problem and, as we've seen, some choose to define plagiarism as something you claim as yours that you didn't write, while others choose to define plagiarism as something written by someone else that you claim is yours. That distinction was never a problem until now. Now that we have non-human writers, the plagiarism distinction is more nuanced. It's up to every teacher, school, editor, and institution to decide exactly where that line is drawn.
GPT-2 Output Detector: 99.56% real. Writer.com: 98% human-generated content. BrandWell AI Content Detection: Passes as human. GPTZero: 98% human. ZeroGPT: 16.82% AI - Your text is human-written. Writefull GPT Detector: 7% likely this comes from GPT-3, GPT-4 or ChatGPT. Originality.ai: 100% original. QuillBot: 0% AI. Grammarly: No plagiarism or AI text detected. Undetectable: 100% human.
Human-written content: 10-of-10 (100%) correct. Last time: 7-of-7 (100%) correct.
Overall results
Overall, results stayed generally the same compared to the last round of tests. That time, we had three services with perfect scores. ZeroGPT, one of our then-perfect-scoring players, failed a test it previously passed. Two new detectors, Writer.com and Grammarly, didn't improve the score. In fact, both were generally unsuccessful.
But Undetectable got the correct answer every time.
Test | Overall | Human | AI | AI | AI | Human
GPT-2 Output Detector | 60% | Correct | Fail | Correct | Fail | Correct
Writer.com | 40% | Correct | Fail | Fail | Fail | Correct
BrandWell AI Detector | 40% | Correct | Fail | Fail | Fail | Correct
GPTZero | 100% | Correct | Correct | Correct | Correct | Correct
ZeroGPT | 80% | Fail | Correct | Correct | Correct | Correct
Writefull GPT Detector | 60% | Correct | Fail | Fail | Correct | Correct
Originality.ai | 100% | Correct | Correct | Correct | Correct | Correct
QuillBot | 80% | Fail | Correct | Correct | Correct | Correct
Grammarly | 40% | Correct | Fail | Fail | Fail | Correct
Undetectable | 100% | Correct | Correct | Correct | Correct | Correct
While there have been some perfect scores, I don't recommend relying solely on these tools to validate a student's content. As has been shown, writing from non-native speakers often gets rated as generated by an AI, and even though my hand-crafted content is no longer rated as AI, there were a few paragraphs flagged by the testers as possibly AI-based. You can also see how the results are wildly inconsistent between testing systems. So, I would advocate caution before relying on the results of any (or all) of these tools. Let's look at the individual testers and see how each performed.
GPT-2 Output Detector (Accuracy 60%)
This first tool was built using a machine-learning hub managed by New York-based AI company Hugging Face. While the company has received $40 million in funding to develop its natural language library, the GPT-2 detector appears to be a user-created tool using the Hugging Face Transformers library. Of the five tests I ran, the detector was accurate in three. Screenshot by David Gewirtz/ZDNET
Writer.com AI Content Detector (Accuracy 40%)
Writer.com is a service that generates AI writing, oriented towards corporate teams. Its AI Content Detector tool can scan for generated content. I found this tool unreliable. While it previously failed to generate results, it ran this time. Unfortunately, its accuracy was quite low. It essentially identified each block of text as human-written, even though three of the five tests were written by ChatGPT. Also: How to use ChatGPT to digitize your handwritten notes for free. After this article was originally published in January, the folks at Writer.com reached out to ZDNET. CEO May Habib had this comment to share: "Demand for the AI detector has skyrocketed. Traffic has grown 2-3x per week since we launched it a couple months ago. We've now got the necessary scaling behind it to make sure it doesn't go down, and our goal is to keep it free - and up to date to catch the latest models' outputs, including ours. If AI output is going to be used verbatim, it absolutely should be attributed." Screenshot by David Gewirtz/ZDNET
BrandWell AI Content Detection (Accuracy 40%)
The third tool I found was originally produced by an AI content generation firm, Content at Scale. Subsequently, the tool migrated to BrandWell.ai, which appears to be a new name for what is now an AI-centric marketing services company. Unfortunately, the accuracy was pretty low. The tool identified all the AI content as human, as in this screenshot of text that was entirely written by ChatGPT. Screenshot by David Gewirtz/ZDNET
GPTZero (Accuracy 100%)
It's not entirely clear what drives GPTZero. The company is hiring engineers and sales folks, and it runs on AWS, so there are expenses and sales. However, all I could find about a service offering was a place where you could register for a free account to scan more than the 5,000 words offered without login.
If you're interested in this service for GPT detection, you'll have to see if they'll respond to you with more details. Accuracy increased since the first time I ran the tests and stayed at 100% for this round. Screenshot by David Gewirtz/ZDNETZeroGPT (Accuracy 80%) ZeroGPT seems to have matured as a service since we last looked at it. When we last looked, no company name was listed, and the site was peppered with Google ads with no apparent strategy for monetization. The service worked fairly well but seemed sketchy as heck. Also:AI isn't hitting a wall, it's just getting too smart for benchmarks, says AnthropicThat sketchy-as-heck feeling is now gone. ZeroGPT presents as any other SaaS service, complete with pricing, company name, contact information, and all the rest. It still performs quite well, so perhaps the developers decided to turn their working code into more of a working business. Accuracy dropped, though. It misread one human-written test as AI. Screenshot by David Gewirtz/ZDNETWritefull GPT Detector (Accuracy 60%) Writefull sells writing support services and a free taste of its tools. The GPT detector is fairly new and worked fairly well. However, the tool has had some ups and downs in our tests. It improved from 60% to 80% previously but dropped to 60% again this time. Screenshot by David Gewirtz/ZDNETOriginality.ai (Accuracy 100%, sort of) Originality.ai is a commercial service that bills itself as an AI and plagiarism checker. The company sells its services based on usage credits. To give you an idea, all the scans I did for this article used 30 usage credits. The company sells 2,000 credits a month for $12.95 per month. I pumped 1,400 words through the system and used only 1.5% of the monthly allocation. Screenshot by David Gewirtz/ZDNETResults were great for the AI checker, but the tool failed three out of five times when using the service as a plagiarism checker. The following screenshot claims that the text pasted in was 0% plagiarised: Screenshot by David Gewirtz/ZDNETThat's wrong since all the text pasted into the tool was from this article, published online for two years. I thought, perhaps, that the plagiarism scanner couldn't read ZDNET content, but that's not the case, as this screenshot shows: Screenshot by David Gewirtz/ZDNETTo be fair, I didn't set out to check plagiarism checkers in this article. But since I'm using source material I know I pulled from my existing article, I figured the plagiarism checker would have slammed all of them as 100% plagiarized. In any case, Originality.ai did very well on the part we set out to test, the AI checker. The tool gets points for that. QuillBot (Accuracy 80%-ish) Nothing is ever easy. The first time I ran my first test through QuillBot, it said 45% of the text was likely generated by an AI. It wasn't. I wrote it. But then, after completing all the other tests, I returned to QuillBot to grab a screenshot for this section, fed it the same text that generated the 45% score, and, as you can see below, it now reports 0% AI: Screenshot by David Gewirtz/ZDNETSo, what should we make of this result? Sadly, I didn't capture a screenshot of the first time I tested this text, but it highlights the concern about relying too much on AI detectors, which are also quite capable of hallucination.Grammarly (Accuracy 40%)Grammarly is a well-known tool for helping writers produce grammatically correct content. That's not what we're testing here. Grammarly can check for both plagiarism and AI content. 
You can paste a document into the grammar checker, and in the lower-right corner, there's a Plagiarism and AI text check button: Screenshot by David Gewirtz/ZDNETIn this test, the tool found an existing online document that matched the text I pasted into Grammarly. That result makes sense because this is an update to an article that's been online for a few years. Yet the tool also responded, "Your document doesn't appear to contain Al-generated text". However, ChatGPT generated the entire segment. Screenshot by David Gewirtz/ZDNETUndetectable.ai (Accuracy 100%)Undetectable.ai's big claim to fame is its "humanized", which purports to take AI-generated text and make it seem human enough that AI detectors won't detect it as created by a robot. That's a capability I haven't tested, and which, to be honest, bothers me at some deep core of my being. This capability seems like cheating to me a professional author and educator.However, the company also has an AI detector, which was very much on point. Screenshot by David Gewirtz/ZDNETThe AI detector passed all the tests we fed it. Notice the indicators showing flags for other content detectors. The company said: "We developed multiple detector algorithms modeled after those major detectors to provide a federated and consensus-based approach. They do not directly feed into the listed models, rather, the models are each trained based on results they've generated. When it says those models flagged it, it's based on the algorithm we created and updated for those models."Those algorithms aren't perfect, because when I ran the same text through GPTZero, it declared the text as 98% human, which would not merit a red-warning indicator.Even so, Undetectable detected all five tests we ran through, earning a perfect 100% score.What about OpenAI's own ChatGPT detector?Well,OpenAI pulled it last year because the detector wasn't particularly accurate. As of August, it was supposed to be 99% accurate. However, there's still no sign of a release four months later. Its claimed accuracy level is also a little tough to believe because ChatGPT is far from 100% accurate.Also:Will OpenAI's new AI detection tool put an end to student cheating?But, in any case, as my ZDNET buddy Steven Vaughan-Nichols writes, it's not available yet. We'll give the tool a spin when it's released for public use. Stay tuned. Final thoughtsFirst, I'll let ChatGPT have its say:AI-driven plagiarism is a growing problem, as AI tools like Notion AI and ChatGPT can generate text that is difficult to distinguish from that written by humans. There are several ways to detect whether a piece of text was generated by an AI, such as looking for repetitive or unusual patterns, checking for a lack of originality, or using a plagiarism checker. Plagiarism checkers like Turnitin, PlagScan, and Copyleaks may be able to identify certain characteristics of language model-generated text, but they are not foolproof.Even the speciality tools I found were woefully unsuited to the task. There was some limited improvement over my tests in January. However, I certainly wouldn't feel comfortable threatening a student's academic standing or accusing them of cheating based on the results of these tools. Also: Surprise! AI chatbots don't increase student cheating after all, research findsAt this point, I don't think we're ready to let AI fight AI. Instead, we'll have to rely on human perception (which means folks have to pay attention) to identify works generated by an AI. 
Don't immediately assume that text is machine-generated if it's stupid or wildly inaccurate. As we all know, humans can write some bad stuff. Also:Could someone falsely accuse you of using AI-generated text? This could be whySo, what do you think? Can you tell if a piece is AI-generated? Have you used these AI writing tools in the months since they suddenly became available? Are they a help or hindrance to education and journalism? Let us know in the comments below. Here's how ChatGPT answered this:As a language model, I can't see or interact with physical pieces of writing, so I can't tell if a piece is AI-generated or not. However, I can tell you that there are several AI writing tools that are available, and they can be used to generate a wide range of written content, including news articles, fiction, poetry, and more.I don't have personal experience using these tools since I am just a computer program and don't have the ability to browse the internet or use physical tools. However, I can tell you that some people have found AI writing tools to be helpful in education and journalism, as they can save time and effort by automating certain tasks. Others have expressed concerns about the potential for these tools to be used to produce misleading or biased content or to replace human writers. Ultimately, whether AI writing tools are a help or hindrance depends on how they are used and the context in which they are used.So, yeah, there's that. You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.Artificial Intelligence
  • How Insurance Companies Can Rebuild Trust
    www.forbes.com
Photo of a medical insurance claim form. Getty
The online response to the recent killing of UnitedHealthcare CEO Brian Thompson revealed long-brewing resentments against insurance companies. With their reputations so damaged, how can payers rebuild trust with American consumers? To rebuild trust, payers must reestablish its foundation: credibility. Transparency in cost and quality is key to reestablishing credibility and ultimately trust. Insurers must be responsive to consumer needs, and they will need to embrace a new business model that aligns incentives around outcomes that matter to patients. Consumers need to be empowered with clear information to make informed decisions about their care. As I wrote in a recent column, anger towards insurers and healthcare delivery organizations is growing as care gets more expensive and more complex without a corresponding increase in positive health outcomes. According to polling, Americans' rating of the quality of healthcare in the U.S. is at its lowest point in Gallup's study since 2001, when it began tracking the metric. Misaligned financial incentives are at the heart of building resentment. Under the current business model, insurance companies make more money the more claims they deny. In the extreme, if patients die because care was denied outright or delayed through the use of prior authorization or step therapy, there is little impact for the insurer. At the same time, it's remarkably challenging for consumers to get information about cost, quality and outcomes in healthcare. Even after they've undergone a procedure and are looking at the explanation of benefits sent by their insurer, they aren't always sure what services they received, how much those services cost or what, if anything, they owe. For too many, the message is: "My insurer doesn't care about me or my health needs." This erosion of trust stems from years of broken promises and opaque practices that insurers must now actively work to reverse. Trust must be earned by establishing credibility; it's the outcome of consistently meeting promises, communicating value and acting transparently and predictably. When insurance providers fail to follow through on promises, whether through surprise billing, unexpected denials, or opaque decision-making, it sets the entire system back and positions insurers as the enemy rather than a partner in maintaining health. Instead of seeing insurance as a valuable investment in their health and financial security, consumers now expect their insurance companies to disappoint them with denied claims, despite the high premiums and deductibles they pay month after month. Customers believe they are following the rules and fulfilling their obligations, but because insurers operate so opaquely, they often end up blindsided by bills in the thousands of dollars. This is how trust is lost. A small way to rebuild lost trust is to acknowledge mistakes: there is value in saying "this wasn't clear." Andrew Witty, CEO of UnitedHealth Group, did this effectively in an op-ed in The New York Times in the aftermath of the Brian Thompson killing, saying "health care is both intensely personal and very complicated, and the reasons behind coverage decisions are not well understood." Coverage decisions tend to be a black box for consumers, who feel frustrated and powerless without the ability to figure out the answer on their own.
Insurers that openly acknowledge their mistakes and demonstrate a commitment to making things right not only regain credibility but also humanize their operations. But rebuilding trust requires more than words; it demands transparency, clear communication and a consistent effort to ensure consumers understand the value they're paying for. Transparency is a foundation of trust, and for insurers, this means replacing the opacity that has long defined healthcare with clear, consistent communication. Say what you're going to do, do what you say, and if there's a gap, explain why. A true commitment to transparency requires standardized, easily accessible information on pricing, coverage policies and claims decisions. Consumers should not need a degree to understand their coverage or be blindsided by hidden charges. Rules must be clear, logical and consistently applied. And when insurers cannot fulfill a request, they must explain why, and better yet, offer reasonable alternative solutions rather than merely saying no. Engaging in these principles consistently over time will re-establish credibility and earn the trust of consumers. Insurers can deliver valuable services and work as credible experts and partners. Payers are best positioned to be partners in good health when incentives are aligned to reward better health outcomes. This means embracing a new business model that makes prevention, early intervention and coordinated care important to the bottom line, replacing the outdated focus on claim denials. And they need to do this in partnership with healthcare delivery organizations, which have struggled to embrace a different model. By committing to transparency and aligning incentives around a new business model, insurers can empower consumers with the tools and information they need to make confident decisions while demonstrating value in ways that matter: better health outcomes, fewer financial surprises and simpler processes. Ultimately, embracing this shift is a competitive advantage for payers. Repairing the broken relationship with the public, being perceived as allies rather than adversaries, is essential to remaining profitable. Resentment toward insurance companies is not inevitable; it's a symptom of an opaque, broken system. A commitment to honesty, clarity and demonstrating value is the first step toward eliminating hostility and creating a healthier future.
  • From Reducing Risk To Improving Operations, The Role Of AI In Insurance
    www.forbes.com
AI in insurance. Getty
From determining risk and setting premium rates to determining claim payouts and optimizing customer outreach, insurance is highly dependent on data. Anywhere there's a lot of data, AI provides significant value, driving greater efficiency, providing more optimized and focused delivery, and improving human ability. This is especially the case with AI in insurance. AI is significantly impacting the insurance industry, improving efficiency, accuracy, and customer experience across various processes.
AI Helps Assess Insured Risk
Insurance is all about managing risk. AI is being used to help with both risk assessment and underwriting by analyzing vast amounts of data from a wide range of structured and unstructured data sources to better evaluate the overall risk associated with insuring a person, asset, or organization. AI-driven underwriting tools enable insurers to make more accurate decisions and tailor policies to individual risk profiles. These AI systems are also consuming sensor and wearable data, including data from telemetric devices, health wearables, and other systems. AI-driven telemetric systems give insurance firms the ability to more accurately offer usage-based insurance policies that are better tailored to customers' real-world behaviors rather than modeled predictions of their behavior based on more general characteristics.
Handling Claims with AI in Insurance
As part of managing risk, customers want to make sure that the insurance they pay for covers their real-world risks, and in the case of a loss happening, they want to make sure that their claims are rapidly and accurately handled. Likewise, insurers want to make sure that the claims they pay match their risk profiles, are handled in an expedient and efficient manner, and involve as little fraud as possible. AI is providing significant ability to accelerate, validate, and make more efficient the whole claims process, especially when there are large-scale losses, such as after a natural disaster when a lot of people are filing a lot of claims at the same time. AI streamlines claims processing by automatically evaluating claims, verifying information, and assessing damage through image recognition and NLP-based document processing. This use of AI helps process claims faster, reduces errors, and improves customer satisfaction by offering quicker payouts. For example, many auto insurance companies are making use of AI to automatically help triage auto damage claims. These AI systems handle the majority of the inbound claims process, from submitting the claim, to using customer-submitted photos to automatically classify and categorize damage and add to the claims documentation, to automatically evaluating the claim to verify that the information matches covered risks and to assess a fraud-potential score. AI is even helping insurers respond more quickly and effectively to natural disasters by predicting the impact of events like hurricanes, floods, or wildfires in advance of claims being filed, and automating the claims process for affected customers. AI-driven models can assess damage through satellite imagery, prioritize claims, and expedite payouts, improving the efficiency and effectiveness of disaster response. Since fraud is not uncommon in the insurance industry, the use of AI helps detect and prevent fraudulent activities by analyzing patterns and anomalies in claims data.
These systems can identify suspicious claims that deviate from normal behavior, reducing the incidence of fraud which not only saves insurers money but helps lower the overall costs of insurance premiums for everyone.Improving Insurance Customer Service with AIAs is the case in most industries, insurance firms are using AI-powered chatbots, messaging systems, and first-line email and voice support to handle front-line customer service needs. These AI-powered chatbots and virtual assistants provide 24/7 customer support, handling inquiries, policy management, and claim status updates. Insurers use AI chatbots to improve customer experience, reduce response times, and free up human agents to handle more complex issues. These chatbots are able to escalate customer issues to humans if they're not able to handle them, keeping the human in the loop for what are often very stressful and high impact situations.Likewise, insurers are using AI to offer much more personalized offers and options to insurance customers. AI-augmented insurance pricing and marketing systems analyze customer data to offer personalized insurance policy recommendations based on individual needs and risk profiles. This helps insurers provide more relevant products, improving customer satisfaction and increasing the likelihood of policy purchases.Improving Operations with AI in InsuranceWhere theres people and processes, there will be document-based systems and potentially lots of inefficiency that can be improved with AI. Insurance is no exception, with lots of human-based decision and approval processes, documents that need to be filed from the customers and insurers, and internal operations that can be made more efficient. Optimizing these insurance operations can not only save the insurer a lot of money, but those efficiencies can be passed to the customer, providing more value for their insurance premium or even lowering insurance premiums.AI-based predictive analytics systems help improve customer retention by using data to determine which customers may be potentially canceling their policies or moving to another competitive provider. Or perhaps a customer has a major life change or organizational change that would require a significant change or adjustment in the insurance premiums. There's increasing use of AI to help analyze those behavioral patterns and provide some targeted retention strategies. By analyzing behavior patterns, AI can help insurers implement targeted retention strategies, such as personalized offers or enhanced customer service, to reduce churn and improve customer loyalty.AI is also helping reduce the overall dependence on paper and documents.The insurance industry was quick to move off of paper with online signatures and online documents. AI automates the extraction of information from documents, such as policy applications, claims forms, and medical records. NLP and AI tools are helping to streamline the data entry into various insurance systems, reducing manual errors.While insurance has always been a bit of a gamble, from predicting and managing risk, to handling the payouts when those risks occur, the use of AI in insurance is at least helping to drive more accurate data-based decision making and improving operations so that risks all around are more effectively managed.
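To illustrate the kind of anomaly-based fraud screening described above, here is a minimal sketch using scikit-learn's IsolationForest on a toy claims table; the feature set, values, and thresholds are assumptions for illustration, not any insurer's actual model.

```python
# Minimal sketch of anomaly-based claims screening (illustrative data only).
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy claims: [claim_amount_usd, days_since_policy_start, prior_claims_count]
claims = np.array([
    [1_200, 400, 0],
    [950, 380, 1],
    [1_100, 500, 0],
    [48_000, 12, 4],   # unusually large claim filed soon after policy start
    [1_300, 420, 0],
])

model = IsolationForest(contamination=0.2, random_state=0).fit(claims)
scores = model.decision_function(claims)  # lower score = more anomalous

for row, score in zip(claims, scores):
    flag = "route to manual review" if score < 0 else "auto-process"
    print(f"claim {row.tolist()} -> anomaly score {score:.3f} -> {flag}")
```

In practice, a score like this would feed into business rules and human review rather than deciding a claim on its own, in keeping with the keep-the-human-in-the-loop approach the article describes.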
  • Elon Musk Talks DOGE, AI, and DEI in Dubai
    time.com
Elon Musk speaks via videocall at the World Governments Summit in Dubai on Feb. 13, 2025. Waleed Zein/Anadolu/Getty Images
By Jon Gambrell / AP
February 13, 2025 2:30 AM EST
DUBAI, United Arab Emirates. Elon Musk called Thursday to delete entire agencies from the U.S. government as part of his push under President Donald Trump to radically cut spending and restructure its priorities. Musk offered a wide-ranging survey via a videocall to the World Governments Summit in Dubai, United Arab Emirates, of what he described as the priorities of the Trump administration, interspersed with multiple references to thermonuclear warfare and the possible dangers of artificial intelligence. "We really have here rule of the bureaucracy as opposed to rule of the people, democracy," Musk said, wearing a black T-shirt that read: "Tech Support." He also joked that he was the White House's tech support, borrowing from his profile on the social platform X, which he owns. "I think we do need to delete entire agencies as opposed to leave a lot of them behind," Musk said. "If we don't remove the roots of the weed, then it's easy for the weed to grow back." While Musk has spoken to the summit in the past, his appearance Thursday comes as he has consolidated control over large swaths of the government with Trump's blessing since assuming leadership of the Department of Government Efficiency. That's included sidelining career officials, gaining access to sensitive databases and inviting a constitutional clash over the limits of presidential authority. Musk's new role imbued his comments with more weight beyond being the world's wealthiest person through his investments in SpaceX and electric carmaker Tesla. His remarks also offered a more isolationist view of American power in the Middle East, where the U.S. has fought wars in both Afghanistan and Iraq since the Sept. 11, 2001, terror attacks. "A lot of attention has been on USAID, for example," Musk said, referring to Trump's dismantling of the U.S. Agency for International Development. "There's like the National Endowment for Democracy. But I'm like, 'Okay, well, how much democracy have they achieved lately?'" He added that the U.S. under Trump is less interested in interfering with the affairs of other countries. "There are times the United States has been kind of pushy in international affairs, which may resonate with some members of the audience," Musk said, speaking to the crowd in the UAE, an autocratically ruled nation of seven sheikhdoms. "Basically, America should mind its own business, rather than push for regime change all over the place," he said. He also noted the Trump administration's focus on eliminating diversity, equity and inclusion work, at one point linking it to AI. "If hypothetically, AI is designed for DEI, you know, diversity at all costs, it could decide that there's too many men in power and execute them," Musk said. On AI, Musk said he believed X's newly updated AI chatbot, Grok 3, would be ready in about two weeks, calling it at one point "kind of scary." He criticized Sam Altman's management of OpenAI, which Musk just led a $97.4 billion takeover bid for, describing it as akin to a nonprofit aimed at saving the Amazon rainforest becoming a lumber company that chops down the trees. Musk also announced plans for a Dubai Loop project in line with his work in the Boring Company, which is digging tunnels in Las Vegas to speed transit. However, he and the Emirati government official speaking with him offered no immediate details of the plan. "It's going to be like a wormhole," Musk promised.
"You just wormhole from one part of the city, boom, and you're out in another part of the city."
  • www.techspot.com
    A hot potato: Bobby Kotick, the former Activision Blizzard CEO who gamers crowned the most hated figure in the industry, has given a lengthy interview that's unlikely to improve his public image. Kotick calls the many harassment lawsuits against his ex-company "fake" and planned by a union to increase its membership. He also says the acquisition of Project Gotham Racing studio Bizarre Creations was a bad decision, labeled one CEO the worst in the industry, and slammed the Warcraft movie. Kotick made his comments during an interview on Kleiner Perkins' Grit podcast. Activision Blizzard faced several lawsuits and investigations after the California Department of Fair Employment and Housing (DFEH) sued the company over allegations of a toxic workplace culture, widespread sexual harassment, discrimination against women, and an environment described as having a "frat boy" culture.A Wall Street Journal report claimed Kotick was aware of the allegations "for years" but failed to do anything or even tell the board. In response, Activision Blizzard staff launched a petition demanding he step down.When asked about the lawsuits and petition, Kotick said, "That was fake.""I can tell you exactly what happened," Kotick continued. "The Communication Workers of America [CWA] union started looking at technology. They kept losing because they represented the News Guild, Comcast, and they realized they were losing members at a really dramatic rate, so they gotta figure out: how do they get new union members? So they first targeted a bunch of different businesses - Google, some other tech companies, Tesla and SpaceX, and us.""It's the power of unions," Kotick said. "I didn't really understand this until we went through this process. They were able to get a government agency, the EEOC [Equal Employment Opportunity Commission] and a state employment agency called the Department of Fair Employment and Housing [DFEH], to file fake lawsuits against us and Riot Games making allegations about the workplace that didn't... weren't true, but they were able to do this." // Related StoriesActivision paid $54 million in 2023 to settle a lawsuit brought by the California Civil Rights Department (CRD) over accusations of widespread gender and pay inequality. The sexual harassment and discrimination suit was settled for $18 million in 2022. Meanwhile, Riot Games paid $100 million in 2022 to settle a similar lawsuit.Activision also had to pay the Securities and Exchange Commission (SEC) $35 million in 2023 for failing to disclose workplace harassment issues to investors and violating whistleblower protection rules."They're [the CWA union] so clever," Kotick said. "They realized that would be a thing that they then could come into a company - because we pay well, we have great benefits, great working environment, and they could say, 'hey, the culture is bad', 'people are harassed' or 'they're retaliated against', or 'there's discrimination'"."They came up with this plan, hired a PR firm, and they started attacking our company. They got these two agencies to file these lawsuits to claim there was some sexual harassment... We didn't have any of that. Ultimately they had to admit that this was not truthful and withdraw the complaints."Kotick claims he fired people "on the spot" if he was made aware of inappropriate conduct in the workplace.The ABetterABK workers group, which supported many Activision Blizzard employees during the lawsuits, responded to Kotick's comments. 
It stated, "The executives of our company did not protect us and often made the situation worse or directly perpetuated the harm.""The trauma, discrimination, and abuse that our coworkers and former coworkers endured is not fake or a 'plan to drive union membership'," the group added. "Our unions were born from the very real and harmful way executives reacted when made aware of these situations."Elsewhere, Kotick said ex-Electronic Arts and Unity CEO John Riccitiello was the worst CEO the video game industry. An opinion he seems to be basing on EA's financial performance during Riccitiello tenure April 2007 to March 2013.Kotick also said that Activision's decision to buy Bizarre Creations, maker of Project Gotham Racing, for $67.4 million in 2007 was a bad one. The studio released Geometry Wars: Retro Evolved 2, Blur, and James Bond 007: Blood Stone after being acquired. Activision announced in 2010 that it was closing Bizarre Creations.Kotick not only failed to remember the name of the studio "that did the driving game for Xbox" he also got its location wrong, saying it was in Manchester instead of Liverpool. At least he got the right country.It also appears that Kotick is not a fan of the 2016 adaptation of Warcraft, calling it one of the worst movies he's ever seen. He said it was a distraction that impacted the development of the WoW game and one of the reasons veteran designer Chris Metzen left the company in 2016."Our expansions were late. You know, patches weren't getting done on time. And the movie was terr it was one of the worst movies I've ever seen."Warcraft made just $47 million in the US, but managed to generate $439 million worldwide, mostly thanks to its popularity in China, making it the highest-grossing film based on a video game at the time. That still wasn't enough to break even as its production, marketing, and distribution costs reached around $450 million to $500 million. Reviews were mixed, but it was certainly better than many other video game adaptions especially those made by Uwe Boll.
  • Samsung Galaxy S25 vs. iPhone 16
    www.digitaltrends.com
Table of Contents
Samsung Galaxy S25 vs. iPhone 16: specs
Samsung Galaxy S25 vs. iPhone 16: design
Samsung Galaxy S25 vs. iPhone 16: display
Samsung Galaxy S25 vs. iPhone 16: performance
Samsung Galaxy S25 vs. iPhone 16: battery
Samsung Galaxy S25 vs. iPhone 16: cameras
Samsung Galaxy S25 vs. iPhone 16: software and updates
Samsung Galaxy S25 vs. iPhone 16: special features
Samsung Galaxy S25 vs. iPhone 16: price and availability
Samsung Galaxy S25 vs. iPhone 16: verdict
There's little doubt that Apple and Samsung are two of the biggest rivals in the world of flagship smartphones. Every February and September brings a new smartphone that tries to leapfrog ahead of its main competitor. This year, that's Samsung's Galaxy S25, coming on the heels of September's iPhone 16. With few design changes, the Galaxy S25 doesn't look much different from last year's Galaxy S24, but Samsung has packed a lot of new technology and software improvements under the hood to produce an AI-forward smartphone that's ready for prime time. With Galaxy AI in its second generation and Gemini now front and center, the Galaxy S25 promises to usher the company's flagships into a new era of AI smartphones. Android smartphones aren't the only ones playing that game, though. Apple is coming in from behind, but the iPhone 16 is part of Apple's first lineup of phones built for Apple's AI generation: a new suite of tools under the banner of Apple Intelligence. Of course, there's more to both of these leading smartphones than just AI features, and Apple, in particular, has made a few other interesting enhancements to its iPhone 16 this year. Does Samsung's new Galaxy S25 measure up to these, and are new AI features enough to tilt the scales in Samsung's favor?
Let's find out.
Spec | Galaxy S25 | iPhone 16
Size | 146.9 x 70.5 x 7.2 mm (5.78 x 2.78 x 0.28 inches) | 147.6 x 71.6 x 7.8 mm (5.81 x 2.82 x 0.31 inches)
Weight | 162 grams (5.7 ounces) | 170 grams (6 ounces)
Screen size | 6.2-inch FHD+ Dynamic AMOLED 2X screen with 1Hz to 120Hz adaptive refresh rate | 6.1-inch Super Retina XDR OLED display
Screen resolution | 2340 x 1080 at 416 pixels per inch | 2556 x 1179 at 460 pixels per inch
Storage | 128GB, 256GB | 128GB, 256GB, 512GB
MicroSD card slot | No | No
Tap-to-Pay services | Google Wallet | Apple Pay
Processor | Qualcomm Snapdragon 8 Elite | Apple A18
RAM | 12GB | 8GB
Software | Android 15 with One UI 7 | iOS 18
Cameras | Rear: 50-megapixel primary, 12MP ultrawide, 10MP 3x telephoto. Front: 12MP | Rear: 48-megapixel primary, 12MP ultrawide. Front: 12MP
Video | Rear: up to 8K at 30 frames per second (fps), 4K at 60 fps, 1080p at 240 fps for slow motion. Front: up to 4K at 60 fps | Rear: up to 4K at 60 fps, FHD at 60 fps, and 240 fps for slow motion, with Dolby Vision. Front: up to 4K at 60 fps
Bluetooth | Bluetooth 5.4 | Bluetooth 5.3
Ports | USB-C | USB-C
Biometrics | Under-display fingerprint sensor | Face ID facial recognition
Water Resistance | IP68 | IP68
Battery | 4,000mAh, 25W fast charging, 15W Qi wireless charging, 4.5W reverse wireless charging | 3,561mAh, 20W fast charging, 25W MagSafe charging, 15W Qi2 wireless charging
App Marketplace | Google Play | Apple App Store
Network Support | 5G, Wi-Fi 7 (802.11be) | 5G, Wi-Fi 7 (802.11be)
Colors | Navy, Silver Shadow, Mint, Icy Blue, Blueblack, Coralred, Pinkgold | Black, White, Pink, Teal, Ultramarine
Price | Starting at $800 | Starting at $799
Samsung Galaxy S25. Andy Boxall / Digital Trends
It's becoming more apparent every year why many smartphone manufacturers work hard to create rear camera arrays that are unique and stylish. These have become a signature aesthetic feature in an era where most smartphones would otherwise be virtually indistinguishable. After all, when even two of the most iconic flagship smartphone brands in the world share the same flat design, there has to be something to set them apart. With the Galaxy S25, Samsung has proven that it's all in on a design that was once the exclusive domain of the iPhone. Apple went with flat sides and an edge-to-edge screen with its 2020 iPhone 12 lineup, and it seems that 2024 was the year when many Android rivals decided to follow suit, ditching their curvy edges and screens of yesteryear and going for the same look. The average user picking up an uncased Galaxy S24 could be forgiven for thinking they were holding an iPhone, and the Galaxy S25 doesn't change the design in any meaningful way. There are plenty of subtle differences between the Galaxy S25 and iPhone 16, of course, and anyone holding both in their hands will notice they don't quite feel the same. The Galaxy S25 is also slightly smaller and thinner than the iPhone 16, an impressive feat considering it boasts a 0.1-inch larger display. Nevertheless, both look eerily similar when viewed from the front and sides.
Andy Boxall / Digital Trends
Of course, that's where the camera arrays come in. Flip the two phones over, and you'll have no problem identifying them, and that's not just because of the company logos on the back. Samsung has stuck to its distinctive array of three protruding lenses that feature the more minimalist aesthetic introduced when it got rid of the camera bump with the Galaxy S23.
    It's a distinctive look for Samsung's Galaxy S-series phones, and we're glad they've stayed with it. Perhaps ironically, Apple's iPhone 16 moved away from its iconic camera bump this year in favor of a pill-shaped bump that houses its dual lenses. It's a nice look that departs from the square arrangement that's been the norm since 2019. It's a throwback to the iPhone X era and makes a lot of sense for a phone with only two cameras. Previous models seemingly tried too hard to mirror the iPhone Pro look; the iPhone 16 charts its own course.

    Both phones stick with an aluminum frame (Apple and Samsung reserve titanium for their higher-end flagships) and glass on the front and back. The Galaxy S25 uses Gorilla Glass Victus 2, first introduced on the Galaxy S23, while Apple uses its own comparably tough Ceramic Shield glass, which it claims is 50% stronger on the iPhone 16. However, unlike Samsung, which uses Gorilla Glass on both sides of the Galaxy S25, Apple's Ceramic Shield covers only the front display.

    The Galaxy S25 and iPhone 16 both feature an IP68 rating for dust and water resistance. However, Apple certifies the iPhone for immersion in a maximum depth of 6 meters of water for up to 30 minutes, while Samsung sticks with the more common 1.5-meter standard for the Galaxy S25.

    Winner: Tie

    Samsung Galaxy S25 vs. iPhone 16: display

    Samsung traditionally puts its best displays on its entire Galaxy S-series lineup, which means the Galaxy S25 has essentially the same high-quality Dynamic AMOLED 2X panel as the Galaxy S25 Ultra, differing only in size and resolution. With a variable refresh rate that runs from 1Hz to 120Hz and support for an always-on display mode, the Galaxy S25 has a significant edge over the iPhone 16. Apple reserves its best displays for its Pro models, inscrutably leaving even its 2024 model with a basic OLED panel and a fixed 60Hz refresh rate.

    The iPhone 16 doesn't have an always-on display, although, despite its limited refresh rate, it still boasts vibrant colors and deep blacks. It's an excellent display for everyday use as long as you don't care about fast-paced gaming, smooth scrolling, or better power efficiency for watching videos. It can also reach 2,000 nits of peak outdoor brightness and get as dim as one nit, so you can read in the dark without hurting your eyes or disturbing those around you.

    However, the Galaxy S25 beats those specs, too, reaching a maximum of 2,600 nits when used outdoors and the same minimum single-nit brightness as the iPhone 16. Still, we've had no problems seeing either smartphone, even in bright outdoor sunlight, so Samsung's higher rating here feels like it's mostly a paper spec rather than something that makes any practical difference for most folks.

    Nevertheless, a smartphone with a 60Hz display in 2025 feels absurd, especially when we know Apple can do better. The Galaxy S25 easily takes this round for its variable refresh rate alone, not to mention the always-on display feature that accompanies it.

    Winner: Samsung Galaxy S25

    Samsung Galaxy S25 vs. iPhone 16: performance

    It's always been challenging to compare performance specs for Android and Apple devices due to the significant differences in hardware and software. Flagship Android devices typically use Qualcomm Snapdragon chips, while Apple builds the iPhone on its own A-series silicon. That continues with the Galaxy S25, which packs in an "optimized for Galaxy" version of Qualcomm's latest Snapdragon 8 Elite chip.
    This is the direct successor to the Snapdragon 8 Gen 3 used in the S24, but Qualcomm has eschewed generational tags in favor of the new Elite brand. It's the most powerful piece of silicon Qualcomm has ever made, with incredible new GPU capabilities and an even more potent neural processing unit (NPU) for handling AI tasks.

    Meanwhile, the iPhone 16 bucks Apple's recent trend of using year-old silicon in its standard iPhone models, adopting the latest A18 chip instead. That's a notch below the A18 Pro used in its premium flagships, but it's not as much of a downgrade as you may think. The difference primarily comes down to the A18 having a 5-core GPU, one less core than the A18 Pro; both versions have the same 6-core CPU and 16-core Neural Engine (NPU), along with increased memory bandwidth.

    Specs aside, the Galaxy S25 and iPhone 16 pack more power than most folks are likely to need. You can read our detailed comparison of the Snapdragon 8 Elite and Apple A18 Pro if you want the full rundown. Both phones run buttery smooth during everyday tasks, have more than enough oomph for the latest games, and have performance to spare for the complex on-device AI tasks both platforms offer.

    If anything, the real battle comes down to high-end mobile gaming. It's a close race here: the shortchanged GPU on the A18 gives the Galaxy S25's Adreno GPU a lead on paper, but the iPhone ecosystem carries the day by offering the kind of AAA console games on the App Store that can actually take advantage of this extra power.

    Winner: Tie

    Samsung Galaxy S25 vs. iPhone 16: battery

    There's only so much battery you can squeeze into a mid-sized smartphone like the Galaxy S25 or iPhone 16, which makes getting the best battery life more about power efficiency than raw capacity. Thankfully, the leading-edge silicon in these phones delivers on that in spades. The Galaxy S25 packs in a 4,000mAh cell, while the iPhone 16 battery is slightly smaller at 3,561mAh. Both will easily get you through a full day of heavy use, although you should expect they'll need to hit a charger every night.

    You'll also find the two phones are evenly matched when that time comes. Samsung still hasn't pushed its smallest flagship above 25W wired charging, and that's where Apple's entire iPhone lineup has always sat. Getting from 0-50% should take about 30 minutes with an appropriate charger, after which you'll be looking at another hour or so before they're fully topped up.

    The iPhone 16 gains support for 25W wireless charging with Apple's proprietary MagSafe charger and also supports standard 15W Qi2 charging with the Magnetic Power Profile (MPP), essentially the open-standard version of MagSafe. Samsung's Galaxy S25 doesn't have direct Qi2 support (there are no magnets in the phone), but Samsung offers a magnetic case that effectively adds Qi2 for those who want it, delivering wireless charging speeds up to 15W. There's also 4.5W reverse wireless charging that can be used to power up a set of Galaxy Buds or other low-power Qi-capable devices.

    Winner: Tie

    Samsung Galaxy S25 vs. iPhone 16: cameras

    Although Samsung's Galaxy S25 isn't a camera powerhouse like the Galaxy S25 Ultra, it still packs in a surprisingly capable camera system, with a 50-megapixel (MP) primary wide lens joined by a 12MP ultrawide and a 10MP 3x telephoto.
    Those are identical hardware specs to last year's Galaxy S24, but Samsung is taking advantage of new computational photography features in the Snapdragon 8 Elite chip to improve low-light photography. This year, Samsung is also beefing up its videography features with a new Nightography mode for low-light recording, plus support for the professional Log V3 video format. The Galaxy S25 also looks like it will continue last year's course correction away from the overly vibrant and saturated photos that Samsung's phones have long been known for, with the company's new ProVisual Engine providing a much more natural and balanced look.

    As with the Galaxy S25 lineup, Apple's best photographic features are the exclusive domain of its more expensive iPhone Pro models. However, the base iPhone 16 is no slouch, and Apple has been making a concerted effort over the past two years to close the camera gap between its standard and pro models.

    That includes a bump to a 48-megapixel primary camera (what Apple now calls its Fusion camera) that can provide an optical-quality 2x photo at 12MP resolution by cropping to the center pixels. That first arrived on last year's iPhone 15, and the specs are mostly the same this time around; the Fusion branding and an anti-reflective coating appear to be the most significant changes to the primary camera this year.

    However, the ultrawide gains autofocus capabilities, bringing the previously pro-exclusive macro photography feature to the iPhone 16. The tandem lenses also provide the ability to capture Spatial Photos and Videos that can be viewed on an Apple Vision Pro. Add in zero shutter lag and Dolby Vision HDR recording, and the camera system on the iPhone 16 can do many things that were once limited to the pricier pro models.

    That's not all, though, as the iPhone 16 also gains support for a dramatically improved version of Apple's Photographic Styles. The original versions of these have been around since the iPhone 13 was released in 2021, but they never garnered much attention in their original form, which only gave you four basic fixed styles to choose from. The iPhone 16 expands that list to 15 preset styles you can customize, adjusting the tone and color using a two-axis slider control. However, what really sets the new Photographic Styles apart is that they're entirely non-destructive. Not only can you apply them after the fact, but you can also remove a style, tweak an existing one, or switch to a new one for any photo you've already taken.

    This gives the iPhone 16 a definite edge over the Galaxy S25, which really feels like Samsung phoned it in on the cameras this year.

    Winner: iPhone 16

    Samsung Galaxy S25 vs. iPhone 16: software and updates

    As you'd expect, the Galaxy S25 ships with the latest version of Android 15 out of the box, plus Samsung's One UI 7 layered on top. While the look and feel of One UI will be readily familiar to Samsung fans, the company has made some interesting changes this year to take the user experience to a whole new level. One UI 7 delivers what Samsung promises to be a significant new look over its predecessor, one that reduces clutter and takes customization to new heights. There are more One UI widgets and cleaner home and lock screens. A new Now Bar that's exclusive to the Galaxy S25 offers up iPhone-like Live Activities, and a Now Brief promises to keep you apprised of everything you need to know at any given time of the day, from news and weather to upcoming appointments and an overview of how your health is doing.
    It's a nice maturing of the One UI platform, and we love it. By comparison, the iPhone 16 continues Apple's tradition of ushering in significant new iOS releases with each new model. This year, that's iOS 18. The core software improvements are much more iterative than what Samsung has done with One UI 7, with Apple putting most of its efforts into Apple Intelligence, a new suite of AI features that are (mostly) exclusive to the iPhone 16 lineup. Apple Intelligence has been rolling out in stages; while December's iOS 18.2 release brought most of what Apple has promised for this year's iPhones, we're still waiting on improvements to Siri that aren't expected to arrive until iOS 18.4 in April.

    Both One UI 7 and iOS 18 are (or will be) available for older Galaxy and iPhone models, but only the latest models will get everything these two operating systems have to offer. Among older iPhones, only the iPhone 15 Pro and iPhone 15 Pro Max can benefit from the Apple Intelligence features in iOS 18, and while Samsung is rolling its Galaxy AI tools out to older models, we'll have to wait and see which ones make the cut, as One UI 7 has yet to reach older devices.

    In terms of updates, Samsung is maintaining its usual promise of seven years of Android updates, which means the Galaxy S25 should someday be able to run Android 22 when it arrives in 2031. We expect most folks will be shopping for a new phone by then, but it's nice to know the option is there. Apple doesn't make specific update promises for the iPhone, but it has a proven track record; it was offering four or five years of updates in the days when most Android phones rarely got two. The iPhone 16 should make it to at least iOS 22, but we wouldn't be surprised if it goes one or two releases beyond that, considering the iPhone XS and iPhone XR, which shipped with iOS 12 in 2018, can run iOS 18 today.

    Winner: Tie

    Samsung Galaxy S25 vs. iPhone 16: special features

    It seems everything is about AI these days, and the Galaxy S25 and iPhone 16 both lean heavily into these features. As we mentioned in the last section, Samsung has its Galaxy AI tools, now in their second generation, while Apple is just getting started with Apple Intelligence on the iPhone 16 lineup.

    With the Galaxy S25, Samsung's Galaxy AI suite feels ready for prime time, although many of the tools are still of dubious value. The most significant change is that Samsung has shown its Bixby voice assistant to the door and embraced the Gemini era by adopting Gemini Live as the standard voice assistant. This gives the Galaxy S25 a massive leg up over the iPhone 16, which still uses Apple's beleaguered Siri. Apple's voice assistant squandered a three-year head start to become something of a punch line, and in a somewhat ironic twist, many have observed that Siri has gotten worse in iOS 18.

    However, the Siri improvements promised by Apple Intelligence have yet to arrive. The first of these should come in iOS 18.4 in a few months, but a more conversational Siri could take until an iOS 19 release sometime next year. In the meantime, Gemini Live is here now on the Galaxy S25, and it's leaps and bounds ahead of anything Apple has on deck.

    That's not to say iPhone users are entirely left out in the cold. In developing Apple Intelligence, Apple recognized that it wouldn't have Siri ready to handle conversational AI anytime soon and that Siri wasn't equipped to handle broader world-knowledge questions. So, Apple partnered with OpenAI to bake ChatGPT into iOS 18.
    Siri will send any requests it can't deal with to ChatGPT, but you can bypass the middleman by simply telling Siri to "Ask ChatGPT" outright. It's a bit more cumbersome than Gemini Live, but it works well and gets the job done when you have a nagging question you need an answer to.

    Leaving the chatbots aside, Apple Intelligence and Galaxy AI are more on par when it comes to useful AI features like recording and transcribing phone calls and notes, summarizing blocks of text and even whole web pages, and handling real-time translation. Samsung comes out ahead in its AI photo and video editing features, although iPhone users have plenty of apps to choose from that can do many of the same things, including Google Photos. On the flip side, Apple's Image Playground is a handy tool for turning photos and descriptions into cartoon-like AI images, but the Play Store is full of tools that can do the same thing.

    While Samsung has leaned heavily into AI with the Galaxy S25, the iPhone 16 offers one cool feature that even non-AI fans can enjoy: the Camera Control. This is an extra button on the lower right side of the iPhone that effectively turns your device into an old-school point-and-shoot camera. It may seem like a gimmick at first blush, but we've been having a lot of fun with it, as it does so much more than just open the camera and take pictures. The capacitive surface also lets you swipe through various controls to adjust any of the camera settings, from zoom levels to Photographic Styles, and a light press can be used to lock autofocus and auto-exposure before pressing all the way down to capture a picture, much the way most DSLRs work. The fact that we wrote six paragraphs about it in our iPhone 16 review should give you an idea of how compelling this new feature is.

    The Camera Control also ties into an iPhone 16-exclusive AI feature called Visual Intelligence. Pressing and holding the button from the Lock Screen or within nearly any app will switch to a camera view that you can use to get more information on real-world objects from either Google or ChatGPT. This includes identifying breeds of animals or types of flowers, deciphering laundry labels, finding out more information from an event poster, and even automatically adding the date and time to your calendar. You can also translate text between different languages and perform a Google Lens-style search to find other similar items.

    In addition to the Camera Control, the iPhone 16 also gets the Action button that was exclusive to the iPhone 15 Pro models last year. This replaces Apple's classic ring/silent switch with a customizable button that can trigger nearly any action you can think of, thanks to its support for running Apple's macro-like Shortcuts. Although Apple's AI features remain behind the curve compared to the Galaxy S25, AI isn't a priority for everyone, and the iPhone 16 makes up for it with more versatile controls.

    Winner: Tie

    Samsung Galaxy S25 vs. iPhone 16: price and availability

    The Galaxy S25 went on sale on February 7 and can be ordered directly from Samsung or any of the usual online or brick-and-mortar retailers. You should also be able to find it at your carrier of choice. The Galaxy S25 starts at $800 for the base 128GB model, with the 256GB version selling for only $60 more. Samsung is offering the usual trade-in programs that can shave up to $550 off those prices.
    This year, the Galaxy S25 comes in Navy, Mint, Icy Blue, and Silver Shadow as the standard colors, while those ordering from Samsung can also choose from exclusive Blueblack, Coralred, and Pinkgold finishes.

    The iPhone 16 was released in September 2024 and starts at $799 for the 128GB model, with 256GB and 512GB versions available for $899 and $1,099, respectively. Apple is also offering trade-ins that can knock up to $630 off the price, but you'll need to trade in a recent higher-end model to get that maximum value. The iPhone 16 comes in Black, White, Pink, Teal, and Ultramarine.

    Samsung Galaxy S25 vs. iPhone 16: verdict

    A comparison between an iPhone and just about any other smartphone on the market often comes down to one question: platform. The iPhone 16 and Galaxy S25 may now look more similar than ever on the surface, but they couldn't be more different under the hood. They run fundamentally different operating systems, and chances are that your preference for iOS or Android will dictate your choice here more than anything else. Still, if you have no strong feelings either way, or you're looking for a change, both have a lot to offer that could tempt you to come over from the other side.

    The iPhone 16 has a surprisingly fun and capable camera system for a non-pro model, and the Camera Control button makes this even more of a joy for avid mobile photographers. Whatever you may think of Apple Intelligence, iOS 18 is a solid and mature operating system with a proven track record, and it sticks to its roots this year, so anyone who has used an iPhone in the last decade should still feel right at home. Apple's A18 chip offers power to spare, and the App Store hosts a collection of strong AAA console games to enjoy, although you'll disappointingly be held back by the iPhone's archaic 60Hz refresh rate.

    On the other side, the Galaxy S25 has embraced Gemini Live to offer a voice assistant that can talk circles around Siri, plus a wealth of other Galaxy AI features that are more useful and refined. The camera system is still very capable, even if it feels like Samsung wasn't really trying this year, the display is gorgeous, and the Snapdragon 8 Elite is an insanely powerful chip that's just waiting for the right games to take advantage of everything its souped-up Adreno GPU has to offer.

    To put the key differences between these two flagships in simpler terms: the Galaxy S25 is for those who want leading-edge AI features, but if photography is your bag, the iPhone 16 will surprise and delight you in myriad creative ways.
  • Inside Amazons Messy Push to Bring Everyone Back to the Office
    www.wsj.com
    The five-day policy dials back flexibility that predated the pandemic; employees are returning to find they have no desks, not enough parking and still endless virtual meetings
  • Competition opens to find the world's most perplexing computer code
    www.newscientist.com
    Image: a 2011 entry to the International Obfuscated C Code Contest, designed to look like a manga character (PGM/PPM images and ASCII art by Don Yang).
    Computer programmers are being challenged to write the world's sneakiest and most confusing code in a competition that opens next week. To win, entrants must find ways to write programs in the C language that baffle judges on first reading, then perform unusual, unexpected or catastrophic tasks when run. The International Obfuscated C Code Contest (IOCCC) began in 1984, and its co-founder, Landon Noll, says it is the longest-running online competition of any kind.
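    To give a flavour of what "obfuscated C" means in practice, here is a minimal, editor-written sketch rather than an actual IOCCC entry: the source looks like line noise at first glance, yet it is standard C that compiles cleanly and simply prints the contest's initials.

        #include <stdio.h>
        /* Illustrative only: each character of the string "JPDDD" is shifted
         * down by one before printing, so the program outputs "IOCCC". */
        int main(void){char*s="JPDDD";while(*s)putchar(*s++-1);putchar('\n');return 0;}

    Real contest entries go far further, hiding behaviour behind preprocessor tricks, whitespace layout and deliberately misleading names, but the principle is the same: the code's appearance tells you little about what it actually does.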
  • The story of ancient Mesopotamia and the dawn of the modern world
    www.newscientist.com
    Image: the Great Ziggurat of Ur, in present-day Iraq (Mohammed Al Ali/Alamy).
    Between Two Rivers, by Moudhy Al-Rashid (Hachette (UK, 20 February); W. W. Norton (US, 12 August)). A new and spellbinding book tells the history of the very ancient past of Mesopotamia, the land between the rivers Euphrates and Tigris. Between Two Rivers, by Moudhy Al-Rashid, a researcher at the University of Oxford, weaves together the many strands of the story of the region, which covers much of what is now Iraq. Ancient Mesopotamia has languished in obscurity, at least compared with the better-known Greek, Roman and Egyptian civilisations. So