• A Hacker May Have Deepfaked Trump’s Chief of Staff in a Phishing Campaign

    Plus: An Iranian man pleads guilty to a Baltimore ransomware attack, Russia’s nuclear blueprints get leaked, a Texas sheriff uses license plate readers to track a woman who got an abortion, and more.
Source: wired.com
  • OnlyFans Model Shocked After Finding Her Pictures With AI-Swapped Faces on Reddit

    "I can't imagine I'm the first, and I'm definitely not the last."Face RipoffAn OnlyFans creator is speaking out after discovering that her photos were stolen by someone who used deepfake tech to give her a completely new face — and posted the deepfaked images all over Reddit.As 25-year-old, UK-based OnlyFans creator Bunni told Mashable, image theft is a common occurrence in her field. Usually, though, catfishers would steal and share Bunni's image without alterations.In this case, the grift was sneakier. With the help of deepfake tools, a scammer crafted an entirely new persona named "Sofía," an alleged 19-year-old in Spain who had Bunni's body — but an AI-generated face.It was "a completely different way of doing it that I've not had happen to me before," Bunni, who posted a video about the theft on Instagram back in February, told Mashable. "It was just, like, really weird."It's only the latest instance of a baffling trend, with "virtual influencers" pasting fake faces onto the bodies of real models and sex workers to sell bogus subscriptions and swindle netizens.Head SwapUsing the fake Sofía persona, the scammer flooded forums across Reddit with fake images and color commentary. Sometimes, the posts were mundane; "Sofía" asked for outfit advice and, per Mashable, even shared photos of pets. But Sofía also posted images to r/PunkGirls, a pornographic subreddit.Sofía never shared a link to another OnlyFans page, though Bunni suspects that the scammer might have been looking to chat with targets via direct messages, where they might have been passing around an OnlyFans link or requesting cash. And though Bunni was able to get the imposter kicked off of Reddit after reaching out directly to moderators, her story emphasizes how easy it is for catfishers to combine AI with stolen content to easily make and distribute convincing fakes."I can't imagine I'm the first, and I'm definitely not the last, because this whole AI thing is kind of blowing out of proportion," Bunni told Mashable. "So I can't imagine it's going to slow down."As Mashable notes, Bunni was somewhat of a perfect target: she has fans, but she's not famous enough to trigger immediate or widespread recognition. And for a creator like Bunni, pursuing legal action might not be a feasible or even worthwhile option. It's expensive, and right now, the law itself is still catching up."I don't feel like it's really worth it," Bunni told Mashable. "The amount you pay for legal action is just ridiculous, and you probably wouldn't really get anywhere anyway, to be honest."Reddit, for its part, didn't respond to Mashable's request for comment.More on deepfakes: Gross AI Apps Create Videos of People Kissing Without Their ConsentShare This Article
Source: futurism.com
  • FBI warns of ongoing scam that uses deepfake audio to impersonate government officials

    IS IT REAL OR IS IT AI-GENERATED?


Warning comes as the use of deepfakes in the wild is rising.

    Dan Goodin



    May 15, 2025 5:06 pm



    The FBI is warning people to be vigilant of an ongoing malicious messaging campaign that uses AI-generated voice audio to impersonate government officials in an attempt to trick recipients into clicking on links that can infect their computers.
    “Since April 2025, malicious actors have impersonated senior US officials to target individuals, many of whom are current or former senior US federal or state government officials and their contacts,” Thursday’s advisory from the bureau’s Internet Crime Complaint Center said. “If you receive a message claiming to be from a senior US official, do not assume it is authentic.”
    Think you can’t be fooled? Think again.
    The campaign's creators are sending AI-generated voice messages—better known as deepfakes—along with text messages “in an effort to establish rapport before gaining access to personal accounts,” FBI officials said. Deepfakes use AI to mimic the voice and speaking characteristics of a specific individual. The differences between the authentic and simulated speakers are often indistinguishable without trained analysis. Deepfake videos work similarly.
    One way to gain access to targets' devices is for the attacker to ask if the conversation can be continued on a separate messaging platform and then successfully convince the target to click on a malicious link under the guise that it will enable the alternate platform. The advisory provided no additional details about the campaign.
    The advisory comes amid a rise in reports of deepfaked audio and sometimes video used in fraud and espionage campaigns. Last year, password manager LastPass warned that it had been targeted in a sophisticated phishing campaign that used a combination of email, text messages, and voice calls to trick targets into divulging their master passwords. One part of the campaign included targeting a LastPass employee with a deepfake audio call that impersonated company CEO Karim Toubba.
In a separate incident last year, a robocall campaign that encouraged New Hampshire Democrats to sit out the coming election used a deepfake of then-President Joe Biden’s voice. A Democratic consultant was later indicted in connection with the calls. The telco that transmitted the spoofed robocalls also agreed to pay a $1 million civil penalty for not authenticating the caller as required by FCC rules.

    Thursday’s advisory provided steps people can take to better detect these sorts of malicious messaging campaigns. They include:

    Verify the identity of the person calling you or sending text or voice messages. Before responding, research the originating number, organization, and/or person purporting to contact you. Then independently identify a phone number for the person and call to verify their authenticity.
Carefully examine the email address; messaging contact information, including phone numbers; URLs; and spelling used in any correspondence or communications. Scammers often use slight differences to deceive you and gain your trust. For instance, actors can incorporate publicly available photographs in text messages, use minor alterations in names and contact information, or use AI-generated voices to masquerade as a known contact. (A small scripted check for such near-miss addresses is sketched after this list.)
    Look for subtle imperfections in images and videos, such as distorted hands or feet, unrealistic facial features, indistinct or irregular faces, unrealistic accessories such as glasses or jewelry, inaccurate shadows, watermarks, voice call lag time, voice matching, and unnatural movements.
    Listen closely to the tone and word choice to distinguish between a legitimate phone call or voice message from a known contact and AI-generated voice cloning, as they can sound nearly identical.
    AI-generated content has advanced to the point that it is often difficult to identify. When in doubt about the authenticity of someone wishing to communicate with you, contact your relevant security officials or the FBI for help.
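
The "slight differences" the advisory describes (a swapped letter in a domain, a look-alike character) are exactly the kind of thing a short script can flag. Here is a minimal, illustrative Python sketch, not part of the FBI guidance: the trusted-domain list and the edit-distance threshold are assumptions invented for the example.

    # Sketch: flag sender domains that nearly match, but don't equal, a trusted domain.
    # TRUSTED and the distance threshold below are illustrative assumptions.

    def levenshtein(a: str, b: str) -> int:
        """Edit distance via the classic dynamic-programming recurrence."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                  # deletion
                               cur[j - 1] + 1,               # insertion
                               prev[j - 1] + (ca != cb)))    # substitution
            prev = cur
        return prev[-1]

    TRUSTED = {"fbi.gov", "state.gov", "whitehouse.gov"}  # hypothetical allow-list

    def check_sender(address: str) -> str:
        domain = address.rsplit("@", 1)[-1].lower()
        if domain in TRUSTED:
            return "exact match with a trusted domain"
        if not domain.isascii():
            return "non-ASCII characters: possible homoglyph spoofing"
        for good in TRUSTED:
            if levenshtein(domain, good) <= 2:  # near miss, e.g. one letter swapped
                return f"suspiciously close to trusted domain {good!r}"
        return "unknown domain: verify through an independent channel"

    print(check_sender("director@fbi.gov"))  # exact match with a trusted domain
    print(check_sender("director@fbl.gov"))  # suspiciously close to 'fbi.gov'

Edit distance alone won't catch every trick, but it shows why "minor alterations in names and contact information" work on people: to a human skimming a message, fbl.gov and fbi.gov read the same.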

    The guidance is helpful, but it doesn't take into account some of the challenges targets of such scams face. Often, the senders create a sense of urgency by claiming there is some sort of ongoing emergency that requires an immediate response. It's also not clear how people can reliably confirm that phone numbers, email addresses, or URLs are authentic.
    The bottom line is that there is no magic bullet to ward off these sorts of scams. Admitting that no one is immune to being fooled is key to defending against them.


    Dan Goodin
    Senior Security Editor

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and on Bluesky. Contact him on Signal at DanArs.82.

Source: arstechnica.com
  • The Morning After: Samsung’s Galaxy S25 Edge is $1,100 and thin

    Samsung’s long-teased Galaxy S25 Edge has arrived, way ahead of the rumored iPhone Air.
    It’s a very S25-looking device, but the company is pitching it as a design-centric addition to its, let’s admit, bulging S25 family.
    The S25 Edge’s body is 5.8 millimeters (0.22 inches) thick if we ignore the camera bump like everyone else does.
    Granted, it’s not a huge bump.
    Samsung says it engineered the lenses to be substantially thinner than those on the S25 Ultra while keeping the same 200-megapixel camera sensor.
    And there are only two cameras on the back this time.
    Gasp! Unfortunately, Samsung has gone for an ultrawide secondary shooter rather than a telephoto, likely due to the handset's size constraints.
    Image by Mat Smith for Engadget
This makes the S25 Edge the latest addition to the trend of fewer cameras, joining the Pixel 9a, but at a very different price: $1,100.
    You can check out my first impressions and all the crucial specs in my hands-on.
Are you willing to accept a possible hit to battery life and less zoom from your smartphone camera?
    — Mat Smith
    Get Engadget's newsletter delivered direct to your inbox.
    Subscribe right here!
    Even more Switch 2 stuff

    Ticketmaster proudly announces it will follow the law and show prices up-front
    Jamie Lee Curtis publicly shamed Mark Zuckerberg to remove a deepfaked ad
    How to pre-order the Samsung Galaxy S25 Edge
    Philips Fixables will let you 3D print replacement parts for your electric razors and trimmers


    iOS 18.5 arrives with a new wallpaper for Pride Month
    And not much else.

    Apple pushed iOS 18.5 to devices on Monday, and the biggest visual change is a new rainbow-shaded wallpaper in honor of Pride Month.
    I’m honored.
    Otherwise, it’s a few minor tweaks and bug fixes.
    Continue reading.

    You can actually turn lead into gold
    All you need is a Large Hadron Collider.
    Scientists with the European Organization for Nuclear Research, better known as CERN, have converted lead into gold using the Large Hadron Collider (LHC).
Unlike the examples of transmutation we see in anime and pop culture, scientists smashed subatomic particles together at ridiculously high speeds to alter lead's nuclear makeup into gold's.
    Briefly.
Lead atoms have only three more protons than gold atoms.
The LHC causes a lead atom to shed just enough protons to become a gold atom for a fraction of a second, before it immediately fragments into a spray of particles.
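In nuclear shorthand, that bookkeeping looks roughly like this (a sketch: the number of ejected neutrons, x, varies from collision to collision, so the exact gold isotope does too):

    % lead (Z = 82) loses three protons, and typically x neutrons, to become gold (Z = 79)
    ^{208}_{82}\mathrm{Pb} \longrightarrow \; ^{205-x}_{79}\mathrm{Au} + 3p + x\,n

Because the proton count alone determines the element, dropping from 82 protons to 79 is all it takes for the nucleus to count as gold, however briefly.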
    Continue reading.

    The only thing I want from Apple’s big 2025 redesign is a
    That’s a, not α.

    This is where Deputy Editor Nathan Ingraham decries one of Apple’s latest design quirks.
    For over 600 words.
Apple’s decision to use α instead of a in its Notes app has got him mad.
    We’ve reached out to check if he’s OK.
Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/general/the-morning-after-engadget-newsletter-111526456.html?src=rss