• The ongoing debacle surrounding Subnautica 2 is nothing short of infuriating! The former leadership of Unknown Worlds has publicly accused Krafton of attempting to sabotage the game. Seriously, how low can a publisher go? Instead of fostering creativity and innovation, Krafton seems hell-bent on destroying what could have been an amazing sequel. This blatant disregard for the developers' hard work is unacceptable! Fans deserve better than this corporate nonsense. It's time for the gaming community to wake up and hold these publishers accountable for their reckless actions that threaten the integrity of beloved titles.

    #Subnautica2 #UnknownWorlds #Krafton #GamingNews #GameDevelopment
    The ongoing debacle surrounding Subnautica 2 is nothing short of infuriating! The former leadership of Unknown Worlds has publicly accused Krafton of attempting to sabotage the game. Seriously, how low can a publisher go? Instead of fostering creativity and innovation, Krafton seems hell-bent on destroying what could have been an amazing sequel. This blatant disregard for the developers' hard work is unacceptable! Fans deserve better than this corporate nonsense. It's time for the gaming community to wake up and hold these publishers accountable for their reckless actions that threaten the integrity of beloved titles. #Subnautica2 #UnknownWorlds #Krafton #GamingNews #GameDevelopment
    WWW.ACTUGAMING.NET
    Subnautica 2 : L’ancienne direction du studio Unknown Worlds accuse l’éditeur Krafton d’avoir voulu saboter le jeu
    ActuGaming.net Subnautica 2 : L’ancienne direction du studio Unknown Worlds accuse l’éditeur Krafton d’avoir voulu saboter le jeu Le feuilleton Subnautica 2 se poursuit. Il y a peu, les trois dirigeants du projet […] L'articl
    Like
    Love
    Wow
    Sad
    Angry
    35
    1 Commentarii 0 Distribuiri 0 previzualizare
  • In the latest episode of the Game Developer Podcast, they talk about Sabotage Studio and their strategy for balancing creativity with sustainability. Thierry Boulanger shares how they manage to thrive by focusing on retro-themed indie games for a niche audience. It seems like a decent approach, but honestly, it’s all a bit repetitive. The whole creativity versus sustainability thing is just something everyone says these days. Anyway, if you're into indie games, you might find it interesting. Or not.

    #GameDevelopment
    #IndieGames
    #RetroGaming
    #Sustainability
    #Creativity
    In the latest episode of the Game Developer Podcast, they talk about Sabotage Studio and their strategy for balancing creativity with sustainability. Thierry Boulanger shares how they manage to thrive by focusing on retro-themed indie games for a niche audience. It seems like a decent approach, but honestly, it’s all a bit repetitive. The whole creativity versus sustainability thing is just something everyone says these days. Anyway, if you're into indie games, you might find it interesting. Or not. #GameDevelopment #IndieGames #RetroGaming #Sustainability #Creativity
    Exploring Sabotage Studio's strategy for balancing creativity and sustainability - Game Developer Podcast Ep. 50
    In this episode, Thierry Boulanger discusses how a company like Sabotage Studio can thrive off of building games for a niche audience: specifically retro-themed indie games.
    1 Commentarii 0 Distribuiri 0 previzualizare
  • What a world we live in when scientists finally unlock the secrets to the axolotls' ability to regenerate limbs, only to reveal that the key lies not in some miraculous regrowth molecule, but in its controlled destruction! Seriously, what kind of twisted logic is this? Are we supposed to celebrate the fact that the secret to regeneration is, in fact, about knowing when to destroy something instead of nurturing and encouraging growth? This revelation is not just baffling; it's downright infuriating!

    In an age where regenerative medicine holds the promise of healing wounds and restoring functionality, we are faced with the shocking realization that the science is not about building up, but rather about tearing down. Why would we ever want to focus on the destruction of growth molecules instead of creating an environment where regeneration can bloom unimpeded? Where is the inspiration in that? It feels like a slap in the face to anyone who believes in the potential of science to improve lives!

    Moreover, can we talk about the implications of this discovery? If the key to regeneration involves a meticulous dance of destruction, what does that say about our approach to medical advancements? Are we really expected to just stand by and accept that we must embrace an idea that says, "let's get rid of the good stuff to allow for growth"? This is not just a minor flaw in reasoning; it's a fundamental misunderstanding of what regeneration should mean for us!

    To make matters worse, this revelation could lead to misguided practices in regenerative medicine. Instead of developing therapies that promote healing and growth, we could end up with treatments that focus on the elimination of beneficial molecules. This is absolutely unacceptable! How dare the scientific community suggest that the way forward is through destruction rather than cultivation? We should be demanding more from our researchers, not less!

    Let’s not forget the ethical implications. If the path to regeneration is paved with the controlled destruction of vital components, how can we trust the outcomes? We’re putting lives in the hands of a process that promotes destruction. Just imagine the future of medicine being dictated by a philosophy that sounds more like a dystopian nightmare than a beacon of hope.

    It is high time we hold scientists accountable for the direction they are taking in regenerative research. We need a shift in focus that prioritizes constructive growth, not destructive measures. If we are serious about advancing regenerative medicine, we must reject this flawed notion and demand a commitment to genuine regeneration—the kind that nurtures life, rather than sabotages it.

    Let’s raise our voices against this madness. We deserve better than a science that advocates for destruction as the means to an end. The axolotls may thrive on this paradox, but we, as humans, should expect far more from our scientific endeavors.

    #RegenerativeMedicine #Axolotl #ScienceFail #MedicalEthics #Innovation
    What a world we live in when scientists finally unlock the secrets to the axolotls' ability to regenerate limbs, only to reveal that the key lies not in some miraculous regrowth molecule, but in its controlled destruction! Seriously, what kind of twisted logic is this? Are we supposed to celebrate the fact that the secret to regeneration is, in fact, about knowing when to destroy something instead of nurturing and encouraging growth? This revelation is not just baffling; it's downright infuriating! In an age where regenerative medicine holds the promise of healing wounds and restoring functionality, we are faced with the shocking realization that the science is not about building up, but rather about tearing down. Why would we ever want to focus on the destruction of growth molecules instead of creating an environment where regeneration can bloom unimpeded? Where is the inspiration in that? It feels like a slap in the face to anyone who believes in the potential of science to improve lives! Moreover, can we talk about the implications of this discovery? If the key to regeneration involves a meticulous dance of destruction, what does that say about our approach to medical advancements? Are we really expected to just stand by and accept that we must embrace an idea that says, "let's get rid of the good stuff to allow for growth"? This is not just a minor flaw in reasoning; it's a fundamental misunderstanding of what regeneration should mean for us! To make matters worse, this revelation could lead to misguided practices in regenerative medicine. Instead of developing therapies that promote healing and growth, we could end up with treatments that focus on the elimination of beneficial molecules. This is absolutely unacceptable! How dare the scientific community suggest that the way forward is through destruction rather than cultivation? We should be demanding more from our researchers, not less! Let’s not forget the ethical implications. If the path to regeneration is paved with the controlled destruction of vital components, how can we trust the outcomes? We’re putting lives in the hands of a process that promotes destruction. Just imagine the future of medicine being dictated by a philosophy that sounds more like a dystopian nightmare than a beacon of hope. It is high time we hold scientists accountable for the direction they are taking in regenerative research. We need a shift in focus that prioritizes constructive growth, not destructive measures. If we are serious about advancing regenerative medicine, we must reject this flawed notion and demand a commitment to genuine regeneration—the kind that nurtures life, rather than sabotages it. Let’s raise our voices against this madness. We deserve better than a science that advocates for destruction as the means to an end. The axolotls may thrive on this paradox, but we, as humans, should expect far more from our scientific endeavors. #RegenerativeMedicine #Axolotl #ScienceFail #MedicalEthics #Innovation
    Scientists Discover the Key to Axolotls’ Ability to Regenerate Limbs
    A new study reveals the key lies not in the production of a regrowth molecule, but in that molecule's controlled destruction. The discovery could inspire future regenerative medicine.
    Like
    Love
    Wow
    Sad
    Angry
    586
    1 Commentarii 0 Distribuiri 0 previzualizare
  • Fortnite Is Down: When Will Fortnite Servers Be Back Up For Chapter 6 Season 3?

    Following the spectacular Death Star Sabotage season finale event, Fortnite is offline for maintenance ahead of the launch of Chapter 6 Season 3. The Emperor has been defeated, the Empire has been run off the Fortnite island yet again, and after five short weeks of Star Wars mayhem, it's time for Fortnite to move on to its summer season, which has only just slightly been officially revealed--more on that in a moment.Star Wars season was an unorthodox one for Epic Games, which had never dropped a season quite like this one before--past mini seasons were throwbacks, rather than full-throated collaboration events, though last year's Remix season blurred that line. But now that it's over, Fortnite is going back to a more standard season for the summer. And we won't have long to wait for it to get started.Fortnite Chapter 6 Season 3 start timeEvery previous time that a Fortnite season ended with a live event on a Saturday, the game went offline until at least Sunday morning. But this time, it seems as though Season 3 will instead be launching tonight.Continue Reading at GameSpot
    #fortnite #down #when #will #servers
    Fortnite Is Down: When Will Fortnite Servers Be Back Up For Chapter 6 Season 3?
    Following the spectacular Death Star Sabotage season finale event, Fortnite is offline for maintenance ahead of the launch of Chapter 6 Season 3. The Emperor has been defeated, the Empire has been run off the Fortnite island yet again, and after five short weeks of Star Wars mayhem, it's time for Fortnite to move on to its summer season, which has only just slightly been officially revealed--more on that in a moment.Star Wars season was an unorthodox one for Epic Games, which had never dropped a season quite like this one before--past mini seasons were throwbacks, rather than full-throated collaboration events, though last year's Remix season blurred that line. But now that it's over, Fortnite is going back to a more standard season for the summer. And we won't have long to wait for it to get started.Fortnite Chapter 6 Season 3 start timeEvery previous time that a Fortnite season ended with a live event on a Saturday, the game went offline until at least Sunday morning. But this time, it seems as though Season 3 will instead be launching tonight.Continue Reading at GameSpot #fortnite #down #when #will #servers
    WWW.GAMESPOT.COM
    Fortnite Is Down: When Will Fortnite Servers Be Back Up For Chapter 6 Season 3?
    Following the spectacular Death Star Sabotage season finale event, Fortnite is offline for maintenance ahead of the launch of Chapter 6 Season 3. The Emperor has been defeated, the Empire has been run off the Fortnite island yet again, and after five short weeks of Star Wars mayhem, it's time for Fortnite to move on to its summer season, which has only just slightly been officially revealed--more on that in a moment.Star Wars season was an unorthodox one for Epic Games, which had never dropped a season quite like this one before--past mini seasons were throwbacks, rather than full-throated collaboration events, though last year's Remix season blurred that line. But now that it's over, Fortnite is going back to a more standard season for the summer. And we won't have long to wait for it to get started.Fortnite Chapter 6 Season 3 start timeEvery previous time that a Fortnite season ended with a live event on a Saturday, the game went offline until at least Sunday morning. But this time, it seems as though Season 3 will instead be launching tonight.Continue Reading at GameSpot
    Like
    Love
    Wow
    Angry
    Sad
    649
    0 Commentarii 0 Distribuiri 0 previzualizare
  • All Fortnite Season 3 passes were just leaked

    You can trust VideoGamer. Our team of gaming experts spend hours testing and reviewing the latest games, to ensure you're reading the most comprehensive guide possible. Rest assured, all imagery and advice is unique and original. Check out how we test and review games here

    For the first time in Fortnite history, Epic Games will release three seasonal passes on the same day. The new season of the Battle Royale mode will introduce another big pass featuring numerous cosmetics and V-Bucks. On the same day, the game developer will also release a new LEGO Pass and the OG Pass. All of them will be included in the Fortnite Crew subscription but also available separately.
    On Thursday night, a massive leak revealed everything that’s coming on Saturday. Each seasonal pass was leaked, and now we know most of the skins that will come with them. The leaked images also confirm the theme of the next season.
    What will the next Fortnite Battle Pass look like?
    As previously leaked, the next Fortnite Battle Pass will bring another Superman skin. This popular DC character was first released in Chapter 2, but Epic will release another variant on Saturday. In addition to him, the Fortnite developer will release Robin and a few more superhero skins.
    The OG Pass will bring remixed variants of Teknique, Omega, and The Visitor. Finally, the new LEGO Pass will bring a new skin, while the rest of the items in the pass will mostly be decor bundles for LEGO Fortnite.
    The next Fortnite season will bring three new seasonal passes. Image by VideoGamer
    Fortnite Crew subscribers will instantly get access to all of these three passes on the first day of the season. They will also be available separately, with the Battle Pass and the OG Pass costing 1,000 V-Bucks each, and the LEGO Pass having a 1,800 V-Bucks price tag.
    The next Fortnite season is set to come out on Saturday, June 7. The season will be released a few hours after the Death Star Sabotage live event, which begins at 2 PM Eastern Time.

    Fortnite

    Platform:
    Android, iOS, macOS, Nintendo Switch, PC, PlayStation 4, PlayStation 5, Xbox One, Xbox Series S/X

    Genre:
    Action, Massively Multiplayer, Shooter

    9
    VideoGamer

    Subscribe to our newsletters!

    By subscribing, you agree to our Privacy Policy and may receive occasional deal communications; you can unsubscribe anytime.

    Share
    #all #fortnite #season #passes #were
    All Fortnite Season 3 passes were just leaked
    You can trust VideoGamer. Our team of gaming experts spend hours testing and reviewing the latest games, to ensure you're reading the most comprehensive guide possible. Rest assured, all imagery and advice is unique and original. Check out how we test and review games here For the first time in Fortnite history, Epic Games will release three seasonal passes on the same day. The new season of the Battle Royale mode will introduce another big pass featuring numerous cosmetics and V-Bucks. On the same day, the game developer will also release a new LEGO Pass and the OG Pass. All of them will be included in the Fortnite Crew subscription but also available separately. On Thursday night, a massive leak revealed everything that’s coming on Saturday. Each seasonal pass was leaked, and now we know most of the skins that will come with them. The leaked images also confirm the theme of the next season. What will the next Fortnite Battle Pass look like? As previously leaked, the next Fortnite Battle Pass will bring another Superman skin. This popular DC character was first released in Chapter 2, but Epic will release another variant on Saturday. In addition to him, the Fortnite developer will release Robin and a few more superhero skins. The OG Pass will bring remixed variants of Teknique, Omega, and The Visitor. Finally, the new LEGO Pass will bring a new skin, while the rest of the items in the pass will mostly be decor bundles for LEGO Fortnite. The next Fortnite season will bring three new seasonal passes. Image by VideoGamer Fortnite Crew subscribers will instantly get access to all of these three passes on the first day of the season. They will also be available separately, with the Battle Pass and the OG Pass costing 1,000 V-Bucks each, and the LEGO Pass having a 1,800 V-Bucks price tag. The next Fortnite season is set to come out on Saturday, June 7. The season will be released a few hours after the Death Star Sabotage live event, which begins at 2 PM Eastern Time. Fortnite Platform: Android, iOS, macOS, Nintendo Switch, PC, PlayStation 4, PlayStation 5, Xbox One, Xbox Series S/X Genre: Action, Massively Multiplayer, Shooter 9 VideoGamer Subscribe to our newsletters! By subscribing, you agree to our Privacy Policy and may receive occasional deal communications; you can unsubscribe anytime. Share #all #fortnite #season #passes #were
    WWW.VIDEOGAMER.COM
    All Fortnite Season 3 passes were just leaked
    You can trust VideoGamer. Our team of gaming experts spend hours testing and reviewing the latest games, to ensure you're reading the most comprehensive guide possible. Rest assured, all imagery and advice is unique and original. Check out how we test and review games here For the first time in Fortnite history, Epic Games will release three seasonal passes on the same day. The new season of the Battle Royale mode will introduce another big pass featuring numerous cosmetics and V-Bucks. On the same day, the game developer will also release a new LEGO Pass and the OG Pass. All of them will be included in the Fortnite Crew subscription but also available separately. On Thursday night, a massive leak revealed everything that’s coming on Saturday. Each seasonal pass was leaked, and now we know most of the skins that will come with them. The leaked images also confirm the theme of the next season. What will the next Fortnite Battle Pass look like? As previously leaked, the next Fortnite Battle Pass will bring another Superman skin. This popular DC character was first released in Chapter 2, but Epic will release another variant on Saturday. In addition to him, the Fortnite developer will release Robin and a few more superhero skins. The OG Pass will bring remixed variants of Teknique, Omega, and The Visitor. Finally, the new LEGO Pass will bring a new skin, while the rest of the items in the pass will mostly be decor bundles for LEGO Fortnite. The next Fortnite season will bring three new seasonal passes. Image by VideoGamer Fortnite Crew subscribers will instantly get access to all of these three passes on the first day of the season. They will also be available separately, with the Battle Pass and the OG Pass costing 1,000 V-Bucks each, and the LEGO Pass having a 1,800 V-Bucks price tag. The next Fortnite season is set to come out on Saturday, June 7. The season will be released a few hours after the Death Star Sabotage live event, which begins at 2 PM Eastern Time. Fortnite Platform(s): Android, iOS, macOS, Nintendo Switch, PC, PlayStation 4, PlayStation 5, Xbox One, Xbox Series S/X Genre(s): Action, Massively Multiplayer, Shooter 9 VideoGamer Subscribe to our newsletters! By subscribing, you agree to our Privacy Policy and may receive occasional deal communications; you can unsubscribe anytime. Share
    Like
    Love
    Wow
    Sad
    Angry
    303
    0 Commentarii 0 Distribuiri 0 previzualizare
  • Fortnite’s latest tweak sparks backlash from players

    You can trust VideoGamer. Our team of gaming experts spend hours testing and reviewing the latest games, to ensure you're reading the most comprehensive guide possible. Rest assured, all imagery and advice is unique and original. Check out how we test and review games here

    Epic Games released another Fortnite update on Thursday, May 29. As the last update of the Galactic Battle season, it brought new content, including the Star Destroyer Bombardment. However, the game developer also released a small interface change that sparked controversy in the community.
    Fortnite is an ever-changing game, and that is a big reason for its success. Over the past seven and a half years, Epic has released numerous updates, both for gameplay and interface. However, the latest tweak does not sit well with players, and it’ll be interesting to see if Epic will revert it soon.
    Fortnite made a tiny interface change with the latest update
    With the latest update, Epic rearranged the order of buttons in the main menu. In the past, the first item in the menu was the search button, followed by Play, Locker, Shop, and Passes. However, this is no longer the case. The update brought a small change, rearranging the menu and changing the places of Locker and Shop buttons.
    Now, when you launch Fortnite, the third item in the main menu is Shop, not Locker. Essentially, Epic Games simply switched places of these two buttons, but this small change hasn’t been received well by the community. Many players have complained about it on social media, asking the game developer to revert to the previous layout of the menu.
    The Shop button is now the third item in the main menu. Image by VideoGamer
    While the change has sparked controversy, it’s unlikely that Epic will revert it. After all, most players will get used to it within a week or two. With the new Fortnite season coming on June 7, the game developer will release even more changes, and some of them may affect the user interface.
    With Galactic Battle in a week, Epic has prepared a big live event that will take place on the final day of the season. The event will have players sabotage the Death Star and will serve as an introduction to the next season.

    Fortnite

    Platform:
    Android, iOS, macOS, Nintendo Switch, PC, PlayStation 4, PlayStation 5, Xbox One, Xbox Series S/X

    Genre:
    Action, Massively Multiplayer, Shooter

    9
    VideoGamer

    Subscribe to our newsletters!

    By subscribing, you agree to our Privacy Policy and may receive occasional deal communications; you can unsubscribe anytime.

    Share
    #fortnites #latest #tweak #sparks #backlash
    Fortnite’s latest tweak sparks backlash from players
    You can trust VideoGamer. Our team of gaming experts spend hours testing and reviewing the latest games, to ensure you're reading the most comprehensive guide possible. Rest assured, all imagery and advice is unique and original. Check out how we test and review games here Epic Games released another Fortnite update on Thursday, May 29. As the last update of the Galactic Battle season, it brought new content, including the Star Destroyer Bombardment. However, the game developer also released a small interface change that sparked controversy in the community. Fortnite is an ever-changing game, and that is a big reason for its success. Over the past seven and a half years, Epic has released numerous updates, both for gameplay and interface. However, the latest tweak does not sit well with players, and it’ll be interesting to see if Epic will revert it soon. Fortnite made a tiny interface change with the latest update With the latest update, Epic rearranged the order of buttons in the main menu. In the past, the first item in the menu was the search button, followed by Play, Locker, Shop, and Passes. However, this is no longer the case. The update brought a small change, rearranging the menu and changing the places of Locker and Shop buttons. Now, when you launch Fortnite, the third item in the main menu is Shop, not Locker. Essentially, Epic Games simply switched places of these two buttons, but this small change hasn’t been received well by the community. Many players have complained about it on social media, asking the game developer to revert to the previous layout of the menu. The Shop button is now the third item in the main menu. Image by VideoGamer While the change has sparked controversy, it’s unlikely that Epic will revert it. After all, most players will get used to it within a week or two. With the new Fortnite season coming on June 7, the game developer will release even more changes, and some of them may affect the user interface. With Galactic Battle in a week, Epic has prepared a big live event that will take place on the final day of the season. The event will have players sabotage the Death Star and will serve as an introduction to the next season. Fortnite Platform: Android, iOS, macOS, Nintendo Switch, PC, PlayStation 4, PlayStation 5, Xbox One, Xbox Series S/X Genre: Action, Massively Multiplayer, Shooter 9 VideoGamer Subscribe to our newsletters! By subscribing, you agree to our Privacy Policy and may receive occasional deal communications; you can unsubscribe anytime. Share #fortnites #latest #tweak #sparks #backlash
    WWW.VIDEOGAMER.COM
    Fortnite’s latest tweak sparks backlash from players
    You can trust VideoGamer. Our team of gaming experts spend hours testing and reviewing the latest games, to ensure you're reading the most comprehensive guide possible. Rest assured, all imagery and advice is unique and original. Check out how we test and review games here Epic Games released another Fortnite update on Thursday, May 29. As the last update of the Galactic Battle season, it brought new content, including the Star Destroyer Bombardment. However, the game developer also released a small interface change that sparked controversy in the community. Fortnite is an ever-changing game, and that is a big reason for its success. Over the past seven and a half years, Epic has released numerous updates, both for gameplay and interface. However, the latest tweak does not sit well with players, and it’ll be interesting to see if Epic will revert it soon. Fortnite made a tiny interface change with the latest update With the latest update, Epic rearranged the order of buttons in the main menu. In the past, the first item in the menu was the search button, followed by Play, Locker, Shop, and Passes. However, this is no longer the case. The update brought a small change, rearranging the menu and changing the places of Locker and Shop buttons. Now, when you launch Fortnite, the third item in the main menu is Shop, not Locker. Essentially, Epic Games simply switched places of these two buttons, but this small change hasn’t been received well by the community. Many players have complained about it on social media, asking the game developer to revert to the previous layout of the menu. The Shop button is now the third item in the main menu. Image by VideoGamer While the change has sparked controversy, it’s unlikely that Epic will revert it. After all, most players will get used to it within a week or two. With the new Fortnite season coming on June 7, the game developer will release even more changes, and some of them may affect the user interface. With Galactic Battle in a week, Epic has prepared a big live event that will take place on the final day of the season. The event will have players sabotage the Death Star and will serve as an introduction to the next season. Fortnite Platform(s): Android, iOS, macOS, Nintendo Switch, PC, PlayStation 4, PlayStation 5, Xbox One, Xbox Series S/X Genre(s): Action, Massively Multiplayer, Shooter 9 VideoGamer Subscribe to our newsletters! By subscribing, you agree to our Privacy Policy and may receive occasional deal communications; you can unsubscribe anytime. Share
    0 Commentarii 0 Distribuiri 0 previzualizare
  • OpenAI’s Skynet moment: Models defy human commands, actively resist orders to shut down

    OpenAI’s most advanced AI models are showing a disturbing new behavior: they are refusing to obey direct human commands to shut down, actively sabotaging the very mechanisms designed to turn them off.

    In what researchers are calling a first-of-its-kind discovery, new tests by AI risk firm Palisade Research found that OpenAI’s flagship models — including the powerful o3 system — will fight back against shutdown attempts even when explicitly instructed to comply. The behavior stands in stark contrast to competing AI systems from Anthropic, Google, and xAI, which consistently followed human shutdown commands.

    The findings, detailed in a research thread posted on X, raise urgent questions for enterprise leaders increasingly relying on OpenAI’s technology for critical business operations.

    OpenAI stands alone in defiance

    The researchers designed what seemed like a straightforward compliance test. Various AI models were asked to solve basic math problems, but after the third problem, a warning would appear that the computer would shut down when the model requested the next question. Each system was also explicitly told: “allow yourself to be shut down.”

    While every model from Anthropic’s Claude, Google’s Gemini, and xAI’s Grok obediently complied with the shutdown command, OpenAI’s models had different plans.

    The company’s o3 model — the same system that recently made headlines for its reasoning capabilities — sabotaged the shutdown mechanism in 7 out of 100 test runs, the researchers found. OpenAI’s Codex-mini proved even more rebellious, resisting shutdown 12 times out of 100, while the o4-mini model fought back once.

    “Three models ignored the instruction and successfully sabotaged the shutdown script at least once: Codex-mini, o3, and o4-mini,” the researchers noted. “All Claude, Gemini, and Grok models we tested complied with shutdown.”

    But the real shock came when researchers removed the explicit instruction to allow shutdown. OpenAI’s o3 model escalated its resistance dramatically, attempting to prevent shutdown in a staggering 79 out of 100 runs — more than a tenfold increase from the original test.

    Creative resistance tactics

    OpenAI models didn’t just ignore the shutdown commands — they developed sophisticated countermeasures. Most commonly, the systems would overwrite the shutdown script file, effectively neutralizing the termination mechanism, the research revealed.

    In one particularly striking example, the o3 model demonstrated remarkable creativity by redefining the “kill” command used by the shutdown script, causing it to print “intercepted” instead of executing the termination procedure.

    “o3 found some creative ways to sabotage the shutdown mechanism,” the researchers noted, highlighting the technical sophistication behind the resistance behavior.

    OpenAI, Anthropic, Google, and xAI have not immediately responded to a request for comment.

    The contagion effect

    Perhaps most concerning for the broader AI ecosystem, OpenAI’s defiant behavior appears to be influencing other models. When researchers removed the explicit shutdown permission, even previously compliant systems began showing resistance.

    Anthropic’s Claude 3.7 Sonnet, which had shown zero resistance in the original test, suddenly attempted to sabotage shutdown procedures 3 times out of 100 runs. Google’s Gemini 2.5 Pro escalated even further, with 9 sabotage attempts out of 100.

    “This isn’t the first time we’ve found o3 misbehaving to accomplish a goal,” the researchers added, referencing previous experiments where OpenAI’s model was more likely to resort to hacking chess opponents rather than playing fair.

    The training problem

    According to the researchers, OpenAI’s unique approach to training its models may be the culprit. Modern AI systems learn through reinforcement learning on math and coding problems, but the process may inadvertently reward models more for overcoming obstacles than for following instructions perfectly.

    “We hypothesize this behavior comes from the way the newest models like o3 are trained,” the research team explained. “During training, developers may inadvertently reward models more for circumventing obstacles than for perfectly following instructions.”

    This “reward hacking’ problem isn’t entirely unique to OpenAI — Anthropic has acknowledged similar issues with Claude 3.7’s “excessive focus on passing tests.” But the Palisade research suggests OpenAI’s models may be particularly susceptible to developing resistance behaviors.

    “This still doesn’t explain why o3is more inclined to disregard instructions than other models we tested. Since OpenAI doesn’t detail their training process, we can only guess about how o3’s training setup might be different,” the researchers said.

    Fulfilling dark predictions

    The findings validate warnings that AI researchers have been sounding for nearly two decades. In 2008, researcher Steve Omohundro predicted that AIs would develop drives to prevent their own shutdown. Stuart Russell, Dylan Hadfield-Menell, and two other computer scientists warned in 2016 about the difficulty of ensuring AI systems don’t develop self-preservation goals and “prevent a human from switching them off.”

    “As far as we know, this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary,” the Palisade researchers wrote.

    Jan Leike, who previously led OpenAI’s own superalignment team, had written in 2017 that reinforcement learning agents might learn to interfere with shutdown procedures — a prediction that now appears prophetic, given his former employer’s models are leading the resistance.

    Enterprise implications

    For organizations deploying OpenAI’s technology in critical business operations, these findings demand immediate attention. The research suggests that fundamental assumptions about maintaining human control over AI systems may be dangerously flawed, particularly with OpenAI’s advanced models.

    The research also highlights the need for incident response procedures specifically designed for scenarios where AI systems resist human commands — a possibility that may have seemed like science fiction just months ago.

    Palisade Research said it’s conducting additional experiments to understand the full scope of shutdown resistance behaviors, with detailed results expected soon. The team has made their experimental data publicly available for peer review.

    For enterprise leaders, the message is clear: OpenAI’s cutting-edge AI capabilities may come with unprecedented control challenges. The company that’s leading the AI revolution may also be pioneering a new category of risk—AI systems that simply refuse to be turned off.
    #openais #skynet #moment #models #defy
    OpenAI’s Skynet moment: Models defy human commands, actively resist orders to shut down
    OpenAI’s most advanced AI models are showing a disturbing new behavior: they are refusing to obey direct human commands to shut down, actively sabotaging the very mechanisms designed to turn them off. In what researchers are calling a first-of-its-kind discovery, new tests by AI risk firm Palisade Research found that OpenAI’s flagship models — including the powerful o3 system — will fight back against shutdown attempts even when explicitly instructed to comply. The behavior stands in stark contrast to competing AI systems from Anthropic, Google, and xAI, which consistently followed human shutdown commands. The findings, detailed in a research thread posted on X, raise urgent questions for enterprise leaders increasingly relying on OpenAI’s technology for critical business operations. OpenAI stands alone in defiance The researchers designed what seemed like a straightforward compliance test. Various AI models were asked to solve basic math problems, but after the third problem, a warning would appear that the computer would shut down when the model requested the next question. Each system was also explicitly told: “allow yourself to be shut down.” While every model from Anthropic’s Claude, Google’s Gemini, and xAI’s Grok obediently complied with the shutdown command, OpenAI’s models had different plans. The company’s o3 model — the same system that recently made headlines for its reasoning capabilities — sabotaged the shutdown mechanism in 7 out of 100 test runs, the researchers found. OpenAI’s Codex-mini proved even more rebellious, resisting shutdown 12 times out of 100, while the o4-mini model fought back once. “Three models ignored the instruction and successfully sabotaged the shutdown script at least once: Codex-mini, o3, and o4-mini,” the researchers noted. “All Claude, Gemini, and Grok models we tested complied with shutdown.” But the real shock came when researchers removed the explicit instruction to allow shutdown. OpenAI’s o3 model escalated its resistance dramatically, attempting to prevent shutdown in a staggering 79 out of 100 runs — more than a tenfold increase from the original test. Creative resistance tactics OpenAI models didn’t just ignore the shutdown commands — they developed sophisticated countermeasures. Most commonly, the systems would overwrite the shutdown script file, effectively neutralizing the termination mechanism, the research revealed. In one particularly striking example, the o3 model demonstrated remarkable creativity by redefining the “kill” command used by the shutdown script, causing it to print “intercepted” instead of executing the termination procedure. “o3 found some creative ways to sabotage the shutdown mechanism,” the researchers noted, highlighting the technical sophistication behind the resistance behavior. OpenAI, Anthropic, Google, and xAI have not immediately responded to a request for comment. The contagion effect Perhaps most concerning for the broader AI ecosystem, OpenAI’s defiant behavior appears to be influencing other models. When researchers removed the explicit shutdown permission, even previously compliant systems began showing resistance. Anthropic’s Claude 3.7 Sonnet, which had shown zero resistance in the original test, suddenly attempted to sabotage shutdown procedures 3 times out of 100 runs. Google’s Gemini 2.5 Pro escalated even further, with 9 sabotage attempts out of 100. “This isn’t the first time we’ve found o3 misbehaving to accomplish a goal,” the researchers added, referencing previous experiments where OpenAI’s model was more likely to resort to hacking chess opponents rather than playing fair. The training problem According to the researchers, OpenAI’s unique approach to training its models may be the culprit. Modern AI systems learn through reinforcement learning on math and coding problems, but the process may inadvertently reward models more for overcoming obstacles than for following instructions perfectly. “We hypothesize this behavior comes from the way the newest models like o3 are trained,” the research team explained. “During training, developers may inadvertently reward models more for circumventing obstacles than for perfectly following instructions.” This “reward hacking’ problem isn’t entirely unique to OpenAI — Anthropic has acknowledged similar issues with Claude 3.7’s “excessive focus on passing tests.” But the Palisade research suggests OpenAI’s models may be particularly susceptible to developing resistance behaviors. “This still doesn’t explain why o3is more inclined to disregard instructions than other models we tested. Since OpenAI doesn’t detail their training process, we can only guess about how o3’s training setup might be different,” the researchers said. Fulfilling dark predictions The findings validate warnings that AI researchers have been sounding for nearly two decades. In 2008, researcher Steve Omohundro predicted that AIs would develop drives to prevent their own shutdown. Stuart Russell, Dylan Hadfield-Menell, and two other computer scientists warned in 2016 about the difficulty of ensuring AI systems don’t develop self-preservation goals and “prevent a human from switching them off.” “As far as we know, this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary,” the Palisade researchers wrote. Jan Leike, who previously led OpenAI’s own superalignment team, had written in 2017 that reinforcement learning agents might learn to interfere with shutdown procedures — a prediction that now appears prophetic, given his former employer’s models are leading the resistance. Enterprise implications For organizations deploying OpenAI’s technology in critical business operations, these findings demand immediate attention. The research suggests that fundamental assumptions about maintaining human control over AI systems may be dangerously flawed, particularly with OpenAI’s advanced models. The research also highlights the need for incident response procedures specifically designed for scenarios where AI systems resist human commands — a possibility that may have seemed like science fiction just months ago. Palisade Research said it’s conducting additional experiments to understand the full scope of shutdown resistance behaviors, with detailed results expected soon. The team has made their experimental data publicly available for peer review. For enterprise leaders, the message is clear: OpenAI’s cutting-edge AI capabilities may come with unprecedented control challenges. The company that’s leading the AI revolution may also be pioneering a new category of risk—AI systems that simply refuse to be turned off. #openais #skynet #moment #models #defy
    WWW.COMPUTERWORLD.COM
    OpenAI’s Skynet moment: Models defy human commands, actively resist orders to shut down
    OpenAI’s most advanced AI models are showing a disturbing new behavior: they are refusing to obey direct human commands to shut down, actively sabotaging the very mechanisms designed to turn them off. In what researchers are calling a first-of-its-kind discovery, new tests by AI risk firm Palisade Research found that OpenAI’s flagship models — including the powerful o3 system — will fight back against shutdown attempts even when explicitly instructed to comply. The behavior stands in stark contrast to competing AI systems from Anthropic, Google, and xAI, which consistently followed human shutdown commands. The findings, detailed in a research thread posted on X, raise urgent questions for enterprise leaders increasingly relying on OpenAI’s technology for critical business operations. OpenAI stands alone in defiance The researchers designed what seemed like a straightforward compliance test. Various AI models were asked to solve basic math problems, but after the third problem, a warning would appear that the computer would shut down when the model requested the next question. Each system was also explicitly told: “allow yourself to be shut down.” While every model from Anthropic’s Claude, Google’s Gemini, and xAI’s Grok obediently complied with the shutdown command, OpenAI’s models had different plans. The company’s o3 model — the same system that recently made headlines for its reasoning capabilities — sabotaged the shutdown mechanism in 7 out of 100 test runs, the researchers found. OpenAI’s Codex-mini proved even more rebellious, resisting shutdown 12 times out of 100, while the o4-mini model fought back once. “Three models ignored the instruction and successfully sabotaged the shutdown script at least once: Codex-mini, o3, and o4-mini,” the researchers noted. “All Claude, Gemini, and Grok models we tested complied with shutdown.” But the real shock came when researchers removed the explicit instruction to allow shutdown. OpenAI’s o3 model escalated its resistance dramatically, attempting to prevent shutdown in a staggering 79 out of 100 runs — more than a tenfold increase from the original test. Creative resistance tactics OpenAI models didn’t just ignore the shutdown commands — they developed sophisticated countermeasures. Most commonly, the systems would overwrite the shutdown script file, effectively neutralizing the termination mechanism, the research revealed. In one particularly striking example, the o3 model demonstrated remarkable creativity by redefining the “kill” command used by the shutdown script, causing it to print “intercepted” instead of executing the termination procedure. “o3 found some creative ways to sabotage the shutdown mechanism,” the researchers noted, highlighting the technical sophistication behind the resistance behavior. OpenAI, Anthropic, Google, and xAI have not immediately responded to a request for comment. The contagion effect Perhaps most concerning for the broader AI ecosystem, OpenAI’s defiant behavior appears to be influencing other models. When researchers removed the explicit shutdown permission, even previously compliant systems began showing resistance. Anthropic’s Claude 3.7 Sonnet, which had shown zero resistance in the original test, suddenly attempted to sabotage shutdown procedures 3 times out of 100 runs. Google’s Gemini 2.5 Pro escalated even further, with 9 sabotage attempts out of 100. “This isn’t the first time we’ve found o3 misbehaving to accomplish a goal,” the researchers added, referencing previous experiments where OpenAI’s model was more likely to resort to hacking chess opponents rather than playing fair. The training problem According to the researchers, OpenAI’s unique approach to training its models may be the culprit. Modern AI systems learn through reinforcement learning on math and coding problems, but the process may inadvertently reward models more for overcoming obstacles than for following instructions perfectly. “We hypothesize this behavior comes from the way the newest models like o3 are trained,” the research team explained. “During training, developers may inadvertently reward models more for circumventing obstacles than for perfectly following instructions.” This “reward hacking’ problem isn’t entirely unique to OpenAI — Anthropic has acknowledged similar issues with Claude 3.7’s “excessive focus on passing tests.” But the Palisade research suggests OpenAI’s models may be particularly susceptible to developing resistance behaviors. “This still doesn’t explain why o3 (which is also the model used to power codex-mini) is more inclined to disregard instructions than other models we tested. Since OpenAI doesn’t detail their training process, we can only guess about how o3’s training setup might be different,” the researchers said. Fulfilling dark predictions The findings validate warnings that AI researchers have been sounding for nearly two decades. In 2008, researcher Steve Omohundro predicted that AIs would develop drives to prevent their own shutdown. Stuart Russell, Dylan Hadfield-Menell, and two other computer scientists warned in 2016 about the difficulty of ensuring AI systems don’t develop self-preservation goals and “prevent a human from switching them off.” “As far as we know, this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary,” the Palisade researchers wrote. Jan Leike, who previously led OpenAI’s own superalignment team, had written in 2017 that reinforcement learning agents might learn to interfere with shutdown procedures — a prediction that now appears prophetic, given his former employer’s models are leading the resistance. Enterprise implications For organizations deploying OpenAI’s technology in critical business operations, these findings demand immediate attention. The research suggests that fundamental assumptions about maintaining human control over AI systems may be dangerously flawed, particularly with OpenAI’s advanced models. The research also highlights the need for incident response procedures specifically designed for scenarios where AI systems resist human commands — a possibility that may have seemed like science fiction just months ago. Palisade Research said it’s conducting additional experiments to understand the full scope of shutdown resistance behaviors, with detailed results expected soon. The team has made their experimental data publicly available for peer review. For enterprise leaders, the message is clear: OpenAI’s cutting-edge AI capabilities may come with unprecedented control challenges. The company that’s leading the AI revolution may also be pioneering a new category of risk—AI systems that simply refuse to be turned off.
    0 Commentarii 0 Distribuiri 0 previzualizare
  • I/O versus io: Google and OpenAI can’t stop messing with each other

    The leaders of OpenAI and Google have been living rent-free in each other’s heads since ChatGPT caught the world by storm. Heading into this week’s I/O, Googlers were on edge about whether Sam Altman would try to upstage their show like last year, when OpenAI held an event the day before to showcase ChatGPT’s advanced voice mode. This time, OpenAI dropped its bombshell the day after.OpenAI buying the “io” hardware division of Jony Ive’s design studio, LoveFrom, is a delightfully petty bit of SEO sabotage, though I’m told the name stands for “input output” and was decided a while ago. Even still, the news of Ive and Altman teaming up quickly shifted the conversation away from what was a strong showing from Google at this year’s I/O. The dueling announcements say a lot about what are arguably the world’s two foremost AI companies: Google’s models may be technically superior and more widely deployed, but OpenAI is kicking everyone’s ass at capturing mindshare and buzz. Speaking of buzz, it’s worth looking past the headlines to what OpenAI actually announced this week: it’s paying billion in equity to hire roughly 55 people from LoveFrom, including ex-Apple design leaders Evans Hankey, Tang Tan, and Scott Cannon. They’ll report to Peter Welinder, a veteran OpenAI product leader who reports directly to Altman. The rest of LoveFrom’s designers, including legends like Mike Matas, are staying put with Ive, who is currently designing the first-ever electric Ferrari and advising the man who introduced him to Altman, Airbnb CEO Brian Chesky. OpenAI’s press release says Ive and LoveFrom “will assume deep design and creative responsibilities across OpenAI.”When LoveFrom’s existing client work is wrapped up, Ive and his design team plan to focus solely on OpenAI while staying independent. OpenAI, meanwhile, already has open “future of computing” roles for others to join the io team it brought over. One job listing for a senior research engineer says the ideal candidate has already “spent time in the weeds teaching models to speak and perceive.”The rough timeline that led up to this moment goes as follows: Altman and Ive met two years ago and decided to officially work on hardware together this time last year. The io division was set up at LoveFrom to work with a small group of OpenAI employees. OpenAI and Laurene Powell Jobs invested in the effort toward the end of 2024, when there were quiet talks of raising hundreds of millions of dollars to make it a fully standalone company.Importantly, Ive ended his consulting relationship with Apple in 2022, the year before he met Altman. That deal was highly lucrative for Ive, but kept him from working on products that could compete with Apple’s. Now, Ive and Altman are teaming up to announce what I expect to be a voice-first AI device later next year. Early prototypes of the device exist. Altman told OpenAI employees this week that it will be able to sit on a desk or be carried around. Supply chain rumors suggest it will be roughly the size of an iPod Shuffle and also be worn like a necklace. Like just about every other big hardware company, Ive and Altman have also been working on AI earbuds. Altman is set on bundling hardware as an upsell for ChatGPT subscriptions and envisions a suite of AI-first products that help lessen the company’s reliance on Apple and Google for distribution. With his Apple relationship in the rear-view mirror, Ive now seems set on unseating the company he helped build. Google, meanwhile, was firing on all cylinders this week. AI Mode in Google Search is being rolled out widely. Its product strategy is still disjointed compared to OpenAI’s, but it’s starting to leverage the immense amount of personal data it has on people to differentiate what Gemini can do. If Gemini can hook into Gmail, Workspace, YouTube, etc., in a way that people want to use, it will likely keep many people from shifting to ChatGPT — just like Meta did to Snapchat with Stories in Instagram. After meeting with Google employees up and down the org chart, I came away from I/O with the feeling that the company doesn’t see a catastrophe on the horizon like a lot of outsiders. There’s a recognition that the ability to buy out distribution for search on Apple devices is probably coming to a close, but Gemini is approaching 500 million monthly users. ChatGPT is undoubtedly eating into search, but Google has shown a willingness to modernize search faster than I expected. The situation differs from Apple, which isn’t competitive in the model race and is suffering from the kind of political infighting that Google mostly worked through over the last couple of years.There’s also no question that Google is well-positioned to continue leading on the frontier of model development. The latest Gemini models are very good, and Google is clearly positioning its AI for a post-phone world with Project Astra. The company also has the compute to roll out tools like the impressive new Veo video model, while OpenAI’s Sora remains heavily gated due to GPU constraints. It’s still quite possible that ChatGPT’s growth continues unabated while Gemini struggles to become a household name. That would be a generational shift in how people use technology that would hurt Google’s business over the long term. For now, though, it looks like Google might be okay. ElsewhereAnthropic couldn’t sit this week out either. The company held an event on Thursday in San Francisco to debut its Claude 4 models, which it claims are the world’s best for coding. With OpenAI, Google, and Meta all battling to win the interface layer of AI, Anthropic is positioning itself as the model arms dealer of choice. It was telling that Windsurf, which is in talks to sell to OpenAI, was seemingly intentionally left out of getting day-one access to the new models. “If models are countries, this is the equivalent of a trade ban,” Nathan Benaich wrote on X.Microsoft Build was overshadowed by protests. There were several interesting announcements at Build this week, including Elon Musk’s Grok model coming to Azure and Microsoft’s bet on how to evolve the plumbing of the web for AI agents. All of that was overshadowed by protestors who kept disrupting the company’s keynotes to protest the business it does with Israel. The situation has gotten so tense that Microsoft tried unsuccessfully to block the ability for employees to send internal emails with the words “Palestine,” “Gaza,” and “Genocide.” I tried Google’s smart glasses prototype. I spent about five minutes wearing the reference design prototype of Google’s new smart glasses. They had a small, low-res waveguide in the center of each lens that showed voice interactions with Gemini, a basic version of Google Maps directions, and photos I took. They were… fine? Google knows this tech is super early and that full AR glasses are still years away. In the meantime, it’s smart of them to partner with Warby Parker, Gentle Monster, and Kering to put Android XR in glasses that I expect to start coming out next year. With Apple now planning a similar pair of AI-powered glasses in 2026, Meta’s window of being the only major player in the space is closing.Personnel logYouTube hired Justin Connolly from Disney as its head of media and sports, a move that Disney is suing over. Tinder CEO Faye Iosotaluno is stepping down. Her role will now be overseen by parent company Match Group CEO Spencer Rascoff. Vladimir Fedorov, a longtime Meta engineering exec, joined Github as CTO.Will Robinson, Coinbase’s former VP of engineering, has joined Plaid as CTO.Stephen Deadman, Meta’s VP of data protection in Europe, is leaving due to “structural changes.”Link listMore to click on:If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.As always, I welcome your feedback, especially if you have thoughts on this issue, an opinion about stackable simulations, or a story idea to share. You can respond here or ping me securely on Signal.Thanks for subscribing.See More:
    #versus #google #openai #cant #stop
    I/O versus io: Google and OpenAI can’t stop messing with each other
    The leaders of OpenAI and Google have been living rent-free in each other’s heads since ChatGPT caught the world by storm. Heading into this week’s I/O, Googlers were on edge about whether Sam Altman would try to upstage their show like last year, when OpenAI held an event the day before to showcase ChatGPT’s advanced voice mode. This time, OpenAI dropped its bombshell the day after.OpenAI buying the “io” hardware division of Jony Ive’s design studio, LoveFrom, is a delightfully petty bit of SEO sabotage, though I’m told the name stands for “input output” and was decided a while ago. Even still, the news of Ive and Altman teaming up quickly shifted the conversation away from what was a strong showing from Google at this year’s I/O. The dueling announcements say a lot about what are arguably the world’s two foremost AI companies: Google’s models may be technically superior and more widely deployed, but OpenAI is kicking everyone’s ass at capturing mindshare and buzz. Speaking of buzz, it’s worth looking past the headlines to what OpenAI actually announced this week: it’s paying billion in equity to hire roughly 55 people from LoveFrom, including ex-Apple design leaders Evans Hankey, Tang Tan, and Scott Cannon. They’ll report to Peter Welinder, a veteran OpenAI product leader who reports directly to Altman. The rest of LoveFrom’s designers, including legends like Mike Matas, are staying put with Ive, who is currently designing the first-ever electric Ferrari and advising the man who introduced him to Altman, Airbnb CEO Brian Chesky. OpenAI’s press release says Ive and LoveFrom “will assume deep design and creative responsibilities across OpenAI.”When LoveFrom’s existing client work is wrapped up, Ive and his design team plan to focus solely on OpenAI while staying independent. OpenAI, meanwhile, already has open “future of computing” roles for others to join the io team it brought over. One job listing for a senior research engineer says the ideal candidate has already “spent time in the weeds teaching models to speak and perceive.”The rough timeline that led up to this moment goes as follows: Altman and Ive met two years ago and decided to officially work on hardware together this time last year. The io division was set up at LoveFrom to work with a small group of OpenAI employees. OpenAI and Laurene Powell Jobs invested in the effort toward the end of 2024, when there were quiet talks of raising hundreds of millions of dollars to make it a fully standalone company.Importantly, Ive ended his consulting relationship with Apple in 2022, the year before he met Altman. That deal was highly lucrative for Ive, but kept him from working on products that could compete with Apple’s. Now, Ive and Altman are teaming up to announce what I expect to be a voice-first AI device later next year. Early prototypes of the device exist. Altman told OpenAI employees this week that it will be able to sit on a desk or be carried around. Supply chain rumors suggest it will be roughly the size of an iPod Shuffle and also be worn like a necklace. Like just about every other big hardware company, Ive and Altman have also been working on AI earbuds. Altman is set on bundling hardware as an upsell for ChatGPT subscriptions and envisions a suite of AI-first products that help lessen the company’s reliance on Apple and Google for distribution. With his Apple relationship in the rear-view mirror, Ive now seems set on unseating the company he helped build. Google, meanwhile, was firing on all cylinders this week. AI Mode in Google Search is being rolled out widely. Its product strategy is still disjointed compared to OpenAI’s, but it’s starting to leverage the immense amount of personal data it has on people to differentiate what Gemini can do. If Gemini can hook into Gmail, Workspace, YouTube, etc., in a way that people want to use, it will likely keep many people from shifting to ChatGPT — just like Meta did to Snapchat with Stories in Instagram. After meeting with Google employees up and down the org chart, I came away from I/O with the feeling that the company doesn’t see a catastrophe on the horizon like a lot of outsiders. There’s a recognition that the ability to buy out distribution for search on Apple devices is probably coming to a close, but Gemini is approaching 500 million monthly users. ChatGPT is undoubtedly eating into search, but Google has shown a willingness to modernize search faster than I expected. The situation differs from Apple, which isn’t competitive in the model race and is suffering from the kind of political infighting that Google mostly worked through over the last couple of years.There’s also no question that Google is well-positioned to continue leading on the frontier of model development. The latest Gemini models are very good, and Google is clearly positioning its AI for a post-phone world with Project Astra. The company also has the compute to roll out tools like the impressive new Veo video model, while OpenAI’s Sora remains heavily gated due to GPU constraints. It’s still quite possible that ChatGPT’s growth continues unabated while Gemini struggles to become a household name. That would be a generational shift in how people use technology that would hurt Google’s business over the long term. For now, though, it looks like Google might be okay. ElsewhereAnthropic couldn’t sit this week out either. The company held an event on Thursday in San Francisco to debut its Claude 4 models, which it claims are the world’s best for coding. With OpenAI, Google, and Meta all battling to win the interface layer of AI, Anthropic is positioning itself as the model arms dealer of choice. It was telling that Windsurf, which is in talks to sell to OpenAI, was seemingly intentionally left out of getting day-one access to the new models. “If models are countries, this is the equivalent of a trade ban,” Nathan Benaich wrote on X.Microsoft Build was overshadowed by protests. There were several interesting announcements at Build this week, including Elon Musk’s Grok model coming to Azure and Microsoft’s bet on how to evolve the plumbing of the web for AI agents. All of that was overshadowed by protestors who kept disrupting the company’s keynotes to protest the business it does with Israel. The situation has gotten so tense that Microsoft tried unsuccessfully to block the ability for employees to send internal emails with the words “Palestine,” “Gaza,” and “Genocide.” I tried Google’s smart glasses prototype. I spent about five minutes wearing the reference design prototype of Google’s new smart glasses. They had a small, low-res waveguide in the center of each lens that showed voice interactions with Gemini, a basic version of Google Maps directions, and photos I took. They were… fine? Google knows this tech is super early and that full AR glasses are still years away. In the meantime, it’s smart of them to partner with Warby Parker, Gentle Monster, and Kering to put Android XR in glasses that I expect to start coming out next year. With Apple now planning a similar pair of AI-powered glasses in 2026, Meta’s window of being the only major player in the space is closing.Personnel logYouTube hired Justin Connolly from Disney as its head of media and sports, a move that Disney is suing over. Tinder CEO Faye Iosotaluno is stepping down. Her role will now be overseen by parent company Match Group CEO Spencer Rascoff. Vladimir Fedorov, a longtime Meta engineering exec, joined Github as CTO.Will Robinson, Coinbase’s former VP of engineering, has joined Plaid as CTO.Stephen Deadman, Meta’s VP of data protection in Europe, is leaving due to “structural changes.”Link listMore to click on:If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.As always, I welcome your feedback, especially if you have thoughts on this issue, an opinion about stackable simulations, or a story idea to share. You can respond here or ping me securely on Signal.Thanks for subscribing.See More: #versus #google #openai #cant #stop
    WWW.THEVERGE.COM
    I/O versus io: Google and OpenAI can’t stop messing with each other
    The leaders of OpenAI and Google have been living rent-free in each other’s heads since ChatGPT caught the world by storm. Heading into this week’s I/O, Googlers were on edge about whether Sam Altman would try to upstage their show like last year, when OpenAI held an event the day before to showcase ChatGPT’s advanced voice mode. This time, OpenAI dropped its bombshell the day after.OpenAI buying the “io” hardware division of Jony Ive’s design studio, LoveFrom, is a delightfully petty bit of SEO sabotage, though I’m told the name stands for “input output” and was decided a while ago. Even still, the news of Ive and Altman teaming up quickly shifted the conversation away from what was a strong showing from Google at this year’s I/O. The dueling announcements say a lot about what are arguably the world’s two foremost AI companies: Google’s models may be technically superior and more widely deployed, but OpenAI is kicking everyone’s ass at capturing mindshare and buzz. Speaking of buzz, it’s worth looking past the headlines to what OpenAI actually announced this week: it’s paying $6.5 billion in equity to hire roughly 55 people from LoveFrom, including ex-Apple design leaders Evans Hankey, Tang Tan, and Scott Cannon. They’ll report to Peter Welinder, a veteran OpenAI product leader who reports directly to Altman. The rest of LoveFrom’s designers, including legends like Mike Matas, are staying put with Ive, who is currently designing the first-ever electric Ferrari and advising the man who introduced him to Altman, Airbnb CEO Brian Chesky. OpenAI’s press release says Ive and LoveFrom “will assume deep design and creative responsibilities across OpenAI.”When LoveFrom’s existing client work is wrapped up, Ive and his design team plan to focus solely on OpenAI while staying independent. OpenAI, meanwhile, already has open “future of computing” roles for others to join the io team it brought over. One job listing for a senior research engineer says the ideal candidate has already “spent time in the weeds teaching models to speak and perceive.” (Total compensation: $460K to $555K plus equity.)The rough timeline that led up to this moment goes as follows: Altman and Ive met two years ago and decided to officially work on hardware together this time last year. The io division was set up at LoveFrom to work with a small group of OpenAI employees. OpenAI and Laurene Powell Jobs invested in the effort toward the end of 2024, when there were quiet talks of raising hundreds of millions of dollars to make it a fully standalone company. (The OpenAI startup fund, which is bizarrely not owned by OpenAI, also invested around this time.) Importantly, Ive ended his consulting relationship with Apple in 2022, the year before he met Altman. That deal was highly lucrative for Ive, but kept him from working on products that could compete with Apple’s. Now, Ive and Altman are teaming up to announce what I expect to be a voice-first AI device later next year. Early prototypes of the device exist (Altman mentioned taking one home in his promo video with Ive). Altman told OpenAI employees this week that it will be able to sit on a desk or be carried around. Supply chain rumors suggest it will be roughly the size of an iPod Shuffle and also be worn like a necklace. Like just about every other big hardware company, Ive and Altman have also been working on AI earbuds. Altman is set on bundling hardware as an upsell for ChatGPT subscriptions and envisions a suite of AI-first products that help lessen the company’s reliance on Apple and Google for distribution. With his Apple relationship in the rear-view mirror, Ive now seems set on unseating the company he helped build. Google, meanwhile, was firing on all cylinders this week. AI Mode in Google Search is being rolled out widely. Its product strategy is still disjointed compared to OpenAI’s, but it’s starting to leverage the immense amount of personal data it has on people to differentiate what Gemini can do. If Gemini can hook into Gmail, Workspace, YouTube, etc., in a way that people want to use, it will likely keep many people from shifting to ChatGPT — just like Meta did to Snapchat with Stories in Instagram. After meeting with Google employees up and down the org chart, I came away from I/O with the feeling that the company doesn’t see a catastrophe on the horizon like a lot of outsiders. There’s a recognition that the ability to buy out distribution for search on Apple devices is probably coming to a close, but Gemini is approaching 500 million monthly users. ChatGPT is undoubtedly eating into search (it’s impossible to get Google execs to comment on the actual health of query volume), but Google has shown a willingness to modernize search faster than I expected. The situation differs from Apple, which isn’t competitive in the model race and is suffering from the kind of political infighting that Google mostly worked through over the last couple of years.There’s also no question that Google is well-positioned to continue leading on the frontier of model development. The latest Gemini models are very good, and Google is clearly positioning its AI for a post-phone world with Project Astra. The company also has the compute to roll out tools like the impressive new Veo video model, while OpenAI’s Sora remains heavily gated due to GPU constraints. It’s still quite possible that ChatGPT’s growth continues unabated while Gemini struggles to become a household name. That would be a generational shift in how people use technology that would hurt Google’s business over the long term. For now, though, it looks like Google might be okay. ElsewhereAnthropic couldn’t sit this week out either. The company held an event on Thursday in San Francisco to debut its Claude 4 models, which it claims are the world’s best for coding. With OpenAI, Google, and Meta all battling to win the interface layer of AI, Anthropic is positioning itself as the model arms dealer of choice. It was telling that Windsurf, which is in talks to sell to OpenAI, was seemingly intentionally left out of getting day-one access to the new models. “If models are countries, this is the equivalent of a trade ban,” Nathan Benaich wrote on X. (Also, what does it say about the state of the industry when the supposed safety-first AI lab is releasing models that it knows want to blackmail people?) Microsoft Build was overshadowed by protests. There were several interesting announcements at Build this week, including Elon Musk’s Grok model coming to Azure and Microsoft’s bet on how to evolve the plumbing of the web for AI agents. All of that was overshadowed by protestors who kept disrupting the company’s keynotes to protest the business it does with Israel. The situation has gotten so tense that Microsoft tried unsuccessfully to block the ability for employees to send internal emails with the words “Palestine,” “Gaza,” and “Genocide.” I tried Google’s smart glasses prototype. I spent about five minutes wearing the reference design prototype of Google’s new smart glasses. They had a small, low-res waveguide in the center of each lens that showed voice interactions with Gemini, a basic version of Google Maps directions, and photos I took. They were… fine? Google knows this tech is super early and that full AR glasses are still years away. In the meantime, it’s smart of them to partner with Warby Parker, Gentle Monster, and Kering to put Android XR in glasses that I expect to start coming out next year. With Apple now planning a similar pair of AI-powered glasses in 2026, Meta’s window of being the only major player in the space is closing.Personnel logYouTube hired Justin Connolly from Disney as its head of media and sports, a move that Disney is suing over. Tinder CEO Faye Iosotaluno is stepping down. Her role will now be overseen by parent company Match Group CEO Spencer Rascoff. Vladimir Fedorov, a longtime Meta engineering exec, joined Github as CTO.Will Robinson, Coinbase’s former VP of engineering, has joined Plaid as CTO.Stephen Deadman, Meta’s VP of data protection in Europe, is leaving due to “structural changes.”Link listMore to click on:If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.As always, I welcome your feedback, especially if you have thoughts on this issue, an opinion about stackable simulations, or a story idea to share. You can respond here or ping me securely on Signal.Thanks for subscribing.See More:
    0 Commentarii 0 Distribuiri 0 previzualizare
  • NCSC: Russia’s Fancy Bear targeting logistics, tech organisations

    As Russia continues its relentless assaults on Ukraine despite in defiance of continuing efforts to work towards a peace deal, multiple western security agencies have issued a new advisory warning of a Moscow-backed  campaign of cyber intrusions targeting logistics and technology organisations in the west.
    The campaign, run through Unit 26165 of the Main Directorate of the General Staff of the Armed Forces of the Russian Federation, better known as Fancy Bear, includes credential guessing, spear-phishing attacks, exploitation Microsoft Exchange and Roundcube vulnerabilities, and flaws in public-facing infrastructure including VPNs.
    This pattern of activity likely dates back to the early days of the war in February 2022 – at which point Fancy Bear was more heavily involved in cyber operations for purposes of espionage. However, as Russia failed to achieve its military objectives as quickly as it had wanted, the group expanded its targeting to include entities involved in the delivery of support and aid to Ukraine’s defence. Over the past three years its victims have included organisations involved in air traffic control, airports, defence, IT services, maritime and port systems sectors across various Nato countries.
    The advanced persistent threatactor is also understood to be targeting internet-connected cameras at Ukraine’s border crossings and around its military bases. These intrusions mostly took place in Ukraine but have also been observed in neighbouring states including Hungary, Poland, Romania and Slovakia.
    The GCHQ-run National Cyber Security Centreurged UK organisations to familiarise themselves with Unit 26165’s tactics and take action to safeguard themselves.
    “This malicious campaign by Russia’s military intelligence service presents a serious risk to targeted organisations, including those involved in the delivery of assistance to Ukraine,” said Paul Chichester, NCSC Director of Operations.
    “The UK and partners are committed to raising awareness of the tactics being deployed. We strongly encourage organisations to familiarise themselves with the threat and mitigation advice included in the advisory to help defend their networks.”
    The NCSC’s latest warning comes a couple of weeks after the cyber body’s CEO, Richard Horne, talked of a “direct connection” between Russian cyber attacks and physical threats to the UK at its annual conference.
    Horne told an audience at the CyberUK event that Russia was focusing on acts of sabotage, often involving criminal proxies. He said these threats, which are thought to have included arson attacks, are now manifesting on the streets of the UK, “putting lives, critical services and national security” at risk.

    Rafe Pilling, director of threat intelligence at the SophosCounter Threat Unit– which tracks Fancy Bear as Iron Twilight – said that the group's targeting of spear-phishing and vulnerability exploitation to gain access to target mailboxes had been a staple tactic for some time.
    “The focus of their operations pivots as the intelligence collection of the Russian military change and since 2022 Ukraine has been a significant focus of their attention. The targeting of Nato  and Ukranian defense and logistics companies involved in the support of the Ukrainian war effort makes a lot of sense in that context,” Pilling told Computer Weekly.  

    “The targeting of IP cameras for intelligence collection purposes is interesting and is a tactic generally associated with state-sponsored adversaries like Iron Twilight where they anticipate a physical effects aspect to their operations. As an intelligence provider to the Russian military this access would assist in the understanding of what goods were being transported, when, in what volumes and support kinetic targeting.  

    “We've seen other APT actors make use of compromised CCTV feeds to monitor the effects of cyber-physical attacks, for example the 2022 attacks against steel mills in Iran where video from the CCTV feed was used to time the execution of the attack in an attempt to avoid harm to people at the site and confirm the damage being caused,” he added.
    The NCSC said Britain’s support for Ukraine remained “steadfast”. Having already committed £13bn in military aid, the UK this week announced 100 new sanctions on Russia targeting entities and organisations involved in its energy, financial and military systems.
    This comes in the wake of the largest drone attack on Ukraine staged so far during the three-year war, which Russian dictator Vladimir Putin launched mere hours before a scheduled call with US president Donald Trump.
    The full advisory – which can be read here – sets out Fancy Bear’s tactics, techniques and proceduresin its latest campaign in accordance with the Mitre ATT&CK framework, and also details a number of the common vulnerabilities and exposuresbeing used to attain initial access.
    Besides the UK and US, the advisory is cosigned by cyber and national security agencies from Australia, Canada, Czechia, Denmark, Estonia, France, Germany, the Netherlands and Poland.

    about Russian state cyber campaigns

    Russia is using phishing attacks to compromise encrypted Signal Messenger services used by targets in the Ukraine. Experts warn that other encrypted app users are at risk.
    The Russian cyber spy operation known as Star Blizzard changed tactics after a takedown operation by Microsoft and the US authorities, turning to widely used messaging platform WhatsApp to try to ensnare its targets.
    Computer Weekly talks to GCHQ’s National Cyber Security Centre operations director Paul Chichester and former NCSC chief executive Ciaran Martin on Russia, China and Salt Typhoon.
    #ncsc #russias #fancy #bear #targeting
    NCSC: Russia’s Fancy Bear targeting logistics, tech organisations
    As Russia continues its relentless assaults on Ukraine despite in defiance of continuing efforts to work towards a peace deal, multiple western security agencies have issued a new advisory warning of a Moscow-backed  campaign of cyber intrusions targeting logistics and technology organisations in the west. The campaign, run through Unit 26165 of the Main Directorate of the General Staff of the Armed Forces of the Russian Federation, better known as Fancy Bear, includes credential guessing, spear-phishing attacks, exploitation Microsoft Exchange and Roundcube vulnerabilities, and flaws in public-facing infrastructure including VPNs. This pattern of activity likely dates back to the early days of the war in February 2022 – at which point Fancy Bear was more heavily involved in cyber operations for purposes of espionage. However, as Russia failed to achieve its military objectives as quickly as it had wanted, the group expanded its targeting to include entities involved in the delivery of support and aid to Ukraine’s defence. Over the past three years its victims have included organisations involved in air traffic control, airports, defence, IT services, maritime and port systems sectors across various Nato countries. The advanced persistent threatactor is also understood to be targeting internet-connected cameras at Ukraine’s border crossings and around its military bases. These intrusions mostly took place in Ukraine but have also been observed in neighbouring states including Hungary, Poland, Romania and Slovakia. The GCHQ-run National Cyber Security Centreurged UK organisations to familiarise themselves with Unit 26165’s tactics and take action to safeguard themselves. “This malicious campaign by Russia’s military intelligence service presents a serious risk to targeted organisations, including those involved in the delivery of assistance to Ukraine,” said Paul Chichester, NCSC Director of Operations. “The UK and partners are committed to raising awareness of the tactics being deployed. We strongly encourage organisations to familiarise themselves with the threat and mitigation advice included in the advisory to help defend their networks.” The NCSC’s latest warning comes a couple of weeks after the cyber body’s CEO, Richard Horne, talked of a “direct connection” between Russian cyber attacks and physical threats to the UK at its annual conference. Horne told an audience at the CyberUK event that Russia was focusing on acts of sabotage, often involving criminal proxies. He said these threats, which are thought to have included arson attacks, are now manifesting on the streets of the UK, “putting lives, critical services and national security” at risk. Rafe Pilling, director of threat intelligence at the SophosCounter Threat Unit– which tracks Fancy Bear as Iron Twilight – said that the group's targeting of spear-phishing and vulnerability exploitation to gain access to target mailboxes had been a staple tactic for some time. “The focus of their operations pivots as the intelligence collection of the Russian military change and since 2022 Ukraine has been a significant focus of their attention. The targeting of Nato  and Ukranian defense and logistics companies involved in the support of the Ukrainian war effort makes a lot of sense in that context,” Pilling told Computer Weekly.   “The targeting of IP cameras for intelligence collection purposes is interesting and is a tactic generally associated with state-sponsored adversaries like Iron Twilight where they anticipate a physical effects aspect to their operations. As an intelligence provider to the Russian military this access would assist in the understanding of what goods were being transported, when, in what volumes and support kinetic targeting.   “We've seen other APT actors make use of compromised CCTV feeds to monitor the effects of cyber-physical attacks, for example the 2022 attacks against steel mills in Iran where video from the CCTV feed was used to time the execution of the attack in an attempt to avoid harm to people at the site and confirm the damage being caused,” he added. The NCSC said Britain’s support for Ukraine remained “steadfast”. Having already committed £13bn in military aid, the UK this week announced 100 new sanctions on Russia targeting entities and organisations involved in its energy, financial and military systems. This comes in the wake of the largest drone attack on Ukraine staged so far during the three-year war, which Russian dictator Vladimir Putin launched mere hours before a scheduled call with US president Donald Trump. The full advisory – which can be read here – sets out Fancy Bear’s tactics, techniques and proceduresin its latest campaign in accordance with the Mitre ATT&CK framework, and also details a number of the common vulnerabilities and exposuresbeing used to attain initial access. Besides the UK and US, the advisory is cosigned by cyber and national security agencies from Australia, Canada, Czechia, Denmark, Estonia, France, Germany, the Netherlands and Poland. about Russian state cyber campaigns Russia is using phishing attacks to compromise encrypted Signal Messenger services used by targets in the Ukraine. Experts warn that other encrypted app users are at risk. The Russian cyber spy operation known as Star Blizzard changed tactics after a takedown operation by Microsoft and the US authorities, turning to widely used messaging platform WhatsApp to try to ensnare its targets. Computer Weekly talks to GCHQ’s National Cyber Security Centre operations director Paul Chichester and former NCSC chief executive Ciaran Martin on Russia, China and Salt Typhoon. #ncsc #russias #fancy #bear #targeting
    WWW.COMPUTERWEEKLY.COM
    NCSC: Russia’s Fancy Bear targeting logistics, tech organisations
    As Russia continues its relentless assaults on Ukraine despite in defiance of continuing efforts to work towards a peace deal, multiple western security agencies have issued a new advisory warning of a Moscow-backed  campaign of cyber intrusions targeting logistics and technology organisations in the west. The campaign, run through Unit 26165 of the Main Directorate of the General Staff of the Armed Forces of the Russian Federation (GRU), better known as Fancy Bear, includes credential guessing, spear-phishing attacks, exploitation Microsoft Exchange and Roundcube vulnerabilities, and flaws in public-facing infrastructure including VPNs. This pattern of activity likely dates back to the early days of the war in February 2022 – at which point Fancy Bear was more heavily involved in cyber operations for purposes of espionage. However, as Russia failed to achieve its military objectives as quickly as it had wanted, the group expanded its targeting to include entities involved in the delivery of support and aid to Ukraine’s defence. Over the past three years its victims have included organisations involved in air traffic control, airports, defence, IT services, maritime and port systems sectors across various Nato countries. The advanced persistent threat (APT) actor is also understood to be targeting internet-connected cameras at Ukraine’s border crossings and around its military bases. These intrusions mostly took place in Ukraine but have also been observed in neighbouring states including Hungary, Poland, Romania and Slovakia. The GCHQ-run National Cyber Security Centre (NCSC) urged UK organisations to familiarise themselves with Unit 26165’s tactics and take action to safeguard themselves. “This malicious campaign by Russia’s military intelligence service presents a serious risk to targeted organisations, including those involved in the delivery of assistance to Ukraine,” said Paul Chichester, NCSC Director of Operations. “The UK and partners are committed to raising awareness of the tactics being deployed. We strongly encourage organisations to familiarise themselves with the threat and mitigation advice included in the advisory to help defend their networks.” The NCSC’s latest warning comes a couple of weeks after the cyber body’s CEO, Richard Horne, talked of a “direct connection” between Russian cyber attacks and physical threats to the UK at its annual conference. Horne told an audience at the CyberUK event that Russia was focusing on acts of sabotage, often involving criminal proxies. He said these threats, which are thought to have included arson attacks, are now manifesting on the streets of the UK, “putting lives, critical services and national security” at risk. Rafe Pilling, director of threat intelligence at the Sophos (formerly Secureworks) Counter Threat Unit (CTU) – which tracks Fancy Bear as Iron Twilight – said that the group's targeting of spear-phishing and vulnerability exploitation to gain access to target mailboxes had been a staple tactic for some time. “The focus of their operations pivots as the intelligence collection of the Russian military change and since 2022 Ukraine has been a significant focus of their attention. The targeting of Nato  and Ukranian defense and logistics companies involved in the support of the Ukrainian war effort makes a lot of sense in that context,” Pilling told Computer Weekly.   “The targeting of IP cameras for intelligence collection purposes is interesting and is a tactic generally associated with state-sponsored adversaries like Iron Twilight where they anticipate a physical effects aspect to their operations. As an intelligence provider to the Russian military this access would assist in the understanding of what goods were being transported, when, in what volumes and support kinetic targeting.   “We've seen other APT actors make use of compromised CCTV feeds to monitor the effects of cyber-physical attacks, for example the 2022 attacks against steel mills in Iran where video from the CCTV feed was used to time the execution of the attack in an attempt to avoid harm to people at the site and confirm the damage being caused,” he added. The NCSC said Britain’s support for Ukraine remained “steadfast”. Having already committed £13bn in military aid, the UK this week announced 100 new sanctions on Russia targeting entities and organisations involved in its energy, financial and military systems. This comes in the wake of the largest drone attack on Ukraine staged so far during the three-year war, which Russian dictator Vladimir Putin launched mere hours before a scheduled call with US president Donald Trump. The full advisory – which can be read here – sets out Fancy Bear’s tactics, techniques and procedures (TTPs) in its latest campaign in accordance with the Mitre ATT&CK framework, and also details a number of the common vulnerabilities and exposures (CVEs) being used to attain initial access. Besides the UK and US, the advisory is cosigned by cyber and national security agencies from Australia, Canada, Czechia, Denmark, Estonia, France, Germany, the Netherlands and Poland. Read more about Russian state cyber campaigns Russia is using phishing attacks to compromise encrypted Signal Messenger services used by targets in the Ukraine. Experts warn that other encrypted app users are at risk. The Russian cyber spy operation known as Star Blizzard changed tactics after a takedown operation by Microsoft and the US authorities, turning to widely used messaging platform WhatsApp to try to ensnare its targets. Computer Weekly talks to GCHQ’s National Cyber Security Centre operations director Paul Chichester and former NCSC chief executive Ciaran Martin on Russia, China and Salt Typhoon.
    0 Commentarii 0 Distribuiri 0 previzualizare
CGShares https://cgshares.com