• Anthropic launches new Claude service for military and intelligence use

    Anthropic on Thursday announced Claude Gov, its product designed specifically for U.S. defense and intelligence agencies. The AI models have looser guardrails for government use and are trained to better analyze classified information. The company said the models it’s announcing “are already deployed by agencies at the highest level of U.S. national security,” and that access to those models will be limited to government agencies handling classified information. The company did not confirm how long they had been in use.

Claude Gov models are designed specifically for government needs, like threat assessment and intelligence analysis, per Anthropic’s blog post. And although the company said they “underwent the same rigorous safety testing as all of our Claude models,” the models have certain specifications for national security work. For example, they “refuse less when engaging with classified information” that’s fed into them, something consumer-facing Claude is trained to flag and avoid. Claude Gov’s models also have greater understanding of documents and context within defense and intelligence, according to Anthropic, and better proficiency in languages and dialects relevant to national security.

Use of AI by government agencies has long been scrutinized because of its potential harms and ripple effects for minorities and vulnerable communities. There’s been a long list of wrongful arrests across multiple U.S. states due to police use of facial recognition, documented evidence of bias in predictive policing, and discrimination in government algorithms that assess welfare aid.
For years, there’s also been an industry-wide controversy over large tech companies like Microsoft, Google, and Amazon allowing the military — particularly in Israel — to use their AI products, with campaigns and public protests under the No Tech for Apartheid movement.

Anthropic’s usage policy specifically dictates that any user must “Not Create or Facilitate the Exchange of Illegal or Highly Regulated Weapons or Goods,” including using Anthropic’s products or services to “produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life.” At least eleven months ago, the company said it created a set of contractual exceptions to its usage policy that are “carefully calibrated to enable beneficial uses by carefully selected government agencies.” Certain restrictions — such as disinformation campaigns, the design or use of weapons, the construction of censorship systems, and malicious cyber operations — would remain prohibited. But Anthropic can decide to “tailor use restrictions to the mission and legal authorities of a government entity,” although it will aim to “balance enabling beneficial uses of our products and services with mitigating potential harms.”

Claude Gov is Anthropic’s answer to ChatGPT Gov, OpenAI’s product for U.S. government agencies, which it launched in January. It’s also part of a broader trend of AI giants and startups alike looking to bolster their businesses with government agencies, especially in an uncertain regulatory landscape. When OpenAI announced ChatGPT Gov, the company said that within the past year, more than 90,000 employees of federal, state, and local governments had used its technology to translate documents, generate summaries, draft policy memos, write code, build applications, and more.
Anthropic declined to share numbers or use cases of the same sort, but the company is part of Palantir’s FedStart program, a SaaS offering for companies that want to deploy federal government-facing software. Scale AI, the AI giant that provides training data to industry leaders like OpenAI, Google, Microsoft, and Meta, signed a deal with the Department of Defense in March for a first-of-its-kind AI agent program for U.S. military planning. And since then, it’s expanded its business to world governments, recently inking a five-year deal with Qatar to provide automation tools for civil service, healthcare, transportation, and more.
    WWW.THEVERGE.COM
  • Feds charge 16 Russians allegedly tied to botnets used in cyberattacks and spying

    DanaBot

    Feds charge 16 Russians allegedly tied to botnets used in cyberattacks and spying

    An example of how a single malware operation can enable both criminal and state-sponsored hacking.

    Andy Greenberg, WIRED.com



    May 23, 2025 3:56 pm


    Credit: Getty Images


    The hacker ecosystem in Russia, more than perhaps anywhere else in the world, has long blurred the lines between cybercrime, state-sponsored cyberwarfare, and espionage. Now an indictment of a group of Russian nationals and the takedown of their sprawling botnet offers the clearest example in years of how a single malware operation allegedly enabled hacking operations as varied as ransomware, wartime cyberattacks in Ukraine, and spying against foreign governments.
    The US Department of Justice today announced criminal charges against 16 individuals law enforcement authorities have linked to a malware operation known as DanaBot, which according to a complaint infected at least 300,000 machines around the world. The DOJ’s announcement of the charges describes the group as “Russia-based,” and names two of the suspects, Aleksandr Stepanov and Artem Aleksandrovich Kalinkin, as living in Novosibirsk, Russia. Five other suspects are named in the indictment, while another nine are identified only by their pseudonyms. In addition to those charges, the Justice Department says the Defense Criminal Investigative Service (DCIS)—a criminal investigation arm of the Department of Defense—carried out seizures of DanaBot infrastructure around the world, including in the US.
    Aside from alleging how DanaBot was used in for-profit criminal hacking, the indictment also makes a rarer claim—it describes a second variant of the malware that it says was used in espionage against military, government, and NGO targets. “Pervasive malware like DanaBot harms hundreds of thousands of victims around the world, including sensitive military, diplomatic, and government entities, and causes many millions of dollars in losses,” US attorney Bill Essayli wrote in a statement.
    Since 2018, DanaBot—described in the criminal complaint as “incredibly invasive malware”—has infected millions of computers around the world, initially as a banking trojan designed to steal directly from those PCs' owners, with modular features designed for credit card and cryptocurrency theft. Because its creators allegedly sold it in an “affiliate” model that made it available to other hacker groups for $3,000 to $4,000 a month, however, it was soon used as a tool to install different forms of malware in a broad array of operations, including ransomware. Its targets, too, quickly spread from initial victims in Ukraine, Poland, Italy, Germany, Austria, and Australia to US and Canadian financial institutions, according to an analysis of the operation by cybersecurity firm Crowdstrike.

    At one point in 2021, according to Crowdstrike, DanaBot was used in a software supply-chain attack that hid the malware in a JavaScript coding tool distributed via NPM with millions of weekly downloads. Crowdstrike found victims of that compromised tool across the financial services, transportation, technology, and media industries.
    That scale and the wide variety of its criminal uses made DanaBot “a juggernaut of the e-crime landscape,” according to Selena Larson, a staff threat researcher at cybersecurity firm Proofpoint.
    More uniquely, though, DanaBot has also been used at times for hacking campaigns that appear to be state-sponsored or linked to Russian government agency interests. In 2019 and 2020, it was used to target a handful of Western government officials in apparent espionage operations, according to the DOJ's indictment. According to Proofpoint, the malware in those instances was delivered in phishing messages that impersonated the Organization for Security and Cooperation in Europe and a Kazakhstan government entity.
    Then, in the early weeks of Russia's full-scale invasion of Ukraine, which began in February 2022, DanaBot was used to install a distributed denial-of-service (DDoS) tool onto infected machines and launch attacks against the webmail server of the Ukrainian Ministry of Defense and the National Security and Defense Council of Ukraine.
    All of that makes DanaBot a particularly clear example of how cybercriminal malware has allegedly been adopted by Russian state hackers, Proofpoint's Larson says. “There have been a lot of suggestions historically of cybercriminal operators palling around with Russian government entities, but there hasn't been a lot of public reporting on these increasingly blurred lines,” says Larson. The case of DanaBot, she says, “is pretty notable, because it's public evidence of this overlap where we see e-crime tooling used for espionage purposes.”

    In the criminal complaint, DCIS investigator Elliott Peterson—a former FBI agent known for his work on the investigation into the creators of the Mirai botnet—alleges that some members of the DanaBot operation were identified after they infected their own computers with the malware. Those infections may have been for the purposes of testing the trojan, or may have been accidental, according to Peterson. Either way, they resulted in identifying information about the alleged hackers ending up on DanaBot infrastructure that DCIS later seized. “The inadvertent infections often resulted in sensitive and compromising data being stolen from the actor's computer by the malware and stored on DanaBot servers, including data that helped identify members of the DanaBot organization,” Peterson writes.
    The operators of DanaBot remain at large, but the takedown of a large-scale tool in so many forms of Russian-origin hacking—both state-sponsored and criminal—represents a significant milestone, says Adam Meyers, who leads threat intelligence research at Crowdstrike.
    “Every time you disrupt a multiyear operation, you're impacting their ability to monetize it. It also creates a bit of a vacuum, and somebody else is going to step up and take that place,” Meyers says. “But the more we can disrupt them, the more we keep them on their back heels. We should rinse and repeat and go find the next target.”
    This story originally appeared at wired.com

    Andy Greenberg, WIRED.com

    Wired.com is your essential daily guide to what's next, delivering the most original and complete take you'll find anywhere on innovation's impact on technology, science, business and culture.

    ARSTECHNICA.COM
    Feds charge 16 Russians allegedly tied to botnets used in cyberattacks and spying
    DanaBot Feds charge 16 Russians allegedly tied to botnets used in cyberattacks and spying An example of how a single malware operation can enable both criminal and state-sponsored hacking. Andy Greenberg, WIRED.com – May 23, 2025 3:56 pm | 0 Credit: Getty Images Credit: Getty Images Story text Size Small Standard Large Width * Standard Wide Links Standard Orange * Subscribers only   Learn more The hacker ecosystem in Russia, more than perhaps anywhere else in the world, has long blurred the lines between cybercrime, state-sponsored cyberwarfare, and espionage. Now an indictment of a group of Russian nationals and the takedown of their sprawling botnet offers the clearest example in years of how a single malware operation allegedly enabled hacking operations as varied as ransomware, wartime cyberattacks in Ukraine, and spying against foreign governments. The US Department of Justice today announced criminal charges today against 16 individuals law enforcement authorities have linked to a malware operation known as DanaBot, which according to a complaint infected at least 300,000 machines around the world. The DOJ’s announcement of the charges describes the group as “Russia-based,” and names two of the suspects, Aleksandr Stepanov and Artem Aleksandrovich Kalinkin, as living in Novosibirsk, Russia. Five other suspects are named in the indictment, while another nine are identified only by their pseudonyms. In addition to those charges, the Justice Department says the Defense Criminal Investigative Service (DCIS)—a criminal investigation arm of the Department of Defense—carried out seizures of DanaBot infrastructure around the world, including in the US. Aside from alleging how DanaBot was used in for-profit criminal hacking, the indictment also makes a rarer claim—it describes how a second variant of the malware it says was used in espionage against military, government, and NGO targets. 
“Pervasive malware like DanaBot harms hundreds of thousands of victims around the world, including sensitive military, diplomatic, and government entities, and causes many millions of dollars in losses,” US attorney Bill Essayli wrote in a statement. Since 2018, DanaBot—described in the criminal complaint as “incredibly invasive malware”—has infected millions of computers around the world, initially as a banking trojan designed to steal directly from those PCs' owners with modular features designed for credit card and cryptocurrency theft. Because its creators allegedly sold it in an “affiliate” model that made it available to other hacker groups for $3,000 to $4,000 a month, however, it was soon used as a tool to install different forms of malware in a broad array of operations, including ransomware. Its targets, too, quickly spread from initial victims in Ukraine, Poland, Italy, Germany, Austria, and Australia to US and Canadian financial institutions, according to an analysis of the operation by cybersecurity firm Crowdstrike. At one point in 2021, according to Crowdstrike, Danabot was used in a software supply-chain attack that hid the malware in a JavaScript coding tool called NPM with millions of weekly downloads. Crowdstrike found victims of that compromised tool across the financial service, transportation, technology, and media industries. That scale and the wide variety of its criminal uses made DanaBot “a juggernaut of the e-crime landscape,” according to Selena Larson, a staff threat researcher at cybersecurity firm Proofpoint. More uniquely, though, DanaBot has also been used at times for hacking campaigns that appear to be state-sponsored or linked to Russian government agency interests. In 2019 and 2020, it was used to target a handful of Western government officials in apparent espionage operations, according to the DOJ's indictment. 
According to Proofpoint, the malware in those instances was delivered in phishing messages that impersonated the Organization for Security and Cooperation in Europe and a Kazakhstan government entity. Then, in the early weeks of Russia's full-scale invasion of Ukraine, which began in February 2022, DanaBot was used to install a distributed denial-of-service (DDoS) tool onto infected machines and launch attacks against the webmail server of the Ukrainian Ministry of Defense and National Security and Defense Council of Ukraine. All of that makes DanaBot a particularly clear example of how cybercriminal malware has allegedly been adopted by Russian state hackers, Proofpoint's Larson says. “There have been a lot of suggestions historically of cybercriminal operators palling around with Russian government entities, but there hasn't been a lot of public reporting on these increasingly blurred lines,” says Larson. The case of DanaBot, she says, “is pretty notable, because it's public evidence of this overlap where we see e-crime tooling used for espionage purposes.” In the criminal complaint, DCIS investigator Elliott Peterson—a former FBI agent known for his work on the investigation into the creators of the Mirai botnet—alleges that some members of the DanaBot operation were identified after they infected their own computers with the malware. Those infections may have been for the purposes of testing the trojan, or may have been accidental, according to Peterson. Either way, they resulted in identifying information about the alleged hackers ending up on DanaBot infrastructure that DCIS later seized. “The inadvertent infections often resulted in sensitive and compromising data being stolen from the actor's computer by the malware and stored on DanaBot servers, including data that helped identify members of the DanaBot organization,” Peterson writes. 
The operators of DanaBot remain at large, but the takedown of a large-scale tool used in so many forms of Russian-origin hacking—both state-sponsored and criminal—represents a significant milestone, says Adam Meyers, who leads threat intelligence research at CrowdStrike. “Every time you disrupt a multiyear operation, you're impacting their ability to monetize it. It also creates a bit of a vacuum, and somebody else is going to step up and take that place,” Meyers says. “But the more we can disrupt them, the more we keep them on their back heels. We should rinse and repeat and go find the next target.”

This story originally appeared at wired.com. Andy Greenberg, WIRED.com
  • DanaBot Malware Devs Infected Their Own PCs

    The U.S. unsealed charges against 16 individuals behind DanaBot, a malware-as-a-service platform responsible for over $50 million in global losses. "The FBI says a newer version of DanaBot was used for espionage, and that many of the defendants exposed their real-life identities after accidentally infecting their own systems with the malware," reports KrebsOnSecurity. From the report: Initially spotted in May 2018 by researchers at the email security firm Proofpoint, DanaBot is a malware-as-a-service platform that specializes in credential theft and banking fraud. Today, the U.S. Department of Justice unsealed a criminal complaint and indictment from 2022, which said the FBI identified at least 40 affiliates who were paying between $3,000 and $4,000 a month for access to the information stealer platform. The government says the malware infected more than 300,000 systems globally, causing estimated losses of more than $50 million. The ringleaders of the DanaBot conspiracy are named as Aleksandr Stepanov, 39, a.k.a. "JimmBee," and Artem Aleksandrovich Kalinkin, 34, a.k.a. "Onix," both of Novosibirsk, Russia. Kalinkin is an IT engineer for the Russian state-owned energy giant Gazprom. His Facebook profile name is "Maffiozi."

    According to the FBI, there were at least two major versions of DanaBot; the first was sold between 2018 and June 2020, when the malware stopped being offered on Russian cybercrime forums. The government alleges that the second version of DanaBot -- emerging in January 2021 -- was provided to co-conspirators for use in targeting military, diplomatic and non-governmental organization computers in several countries, including the United States, Belarus, the United Kingdom, Germany, and Russia. The indictment says the FBI in 2022 seized servers used by the DanaBot authors to control their malware, as well as the servers that stored stolen victim data. The government said the server data also show numerous instances in which the DanaBot defendants infected their own PCs, resulting in their credential data being uploaded to stolen data repositories that were seized by the feds.

    "In some cases, such self-infections appeared to be deliberately done in order to test, analyze, or improve the malware," the criminal complaint reads. "In other cases, the infections seemed to be inadvertent -- one of the hazards of committing cybercrime is that criminals will sometimes infect themselves with their own malware by mistake." A statement from the DOJ says that as part of today's operation, agents with the Defense Criminal Investigative Service (DCIS) seized the DanaBot control servers, including dozens of virtual servers hosted in the United States. The government says it is now working with industry partners to notify DanaBot victims and help remediate infections. The statement credits a number of security firms with providing assistance to the government, including ESET, Flashpoint, Google, Intel 471, Lumen, PayPal, Proofpoint, Team Cymru, and ZScaler.

    Read more of this story at Slashdot.
  • Feds Charge 16 Russians Allegedly Tied to Botnets Used in Ransomware, Cyberattacks, and Spying

    A new US indictment against a group of Russian nationals offers a clear example of how, authorities say, a single malware operation can enable both criminal and state-sponsored hacking.
  • The Epic Rise and Fall of a Dark-Web Psychedelics Kingpin

    Interdimensional travel, sex with aliens, communion with God. Anything is possible with just a sprinkle of DMT. Akasha Song’s secret labs made millions of doses—and dollars—until the feds showed up.
  • American kids are being poisoned by lead. Trump is letting it happen.

    For many months now, the city of Milwaukee has been grappling with a lead poisoning crisis that has forced at least four schools to temporarily close and dozens more to undergo rigorous inspections.

It began on January 13, when Milwaukee first notified parents at one grade three to five school that a child had tested positive for high levels of lead in their blood. Local health officials determined the lead exposure did not occur at the child’s home, which left their school as the obvious culprit. City investigators found chipped lead paint and lead-laden dust throughout the school building; press and government reports indicate that the school district has struggled to keep up with paint maintenance requests due to a lack of funding and manpower. Local officials soon realized they had a big problem on their hands, as the vast majority of the city’s school buildings (roughly 125 out of 150) were built before 1978, when lead paint was banned. Lead, a dangerous neurotoxin that can cause developmental problems in children after prolonged exposure, has now been detected in at least nine public schools, and at least four students have tested positive for high lead levels in their blood. So far, no children have been hospitalized for acute lead poisoning, which can be life-threatening, but the affected kids continue to be monitored. Several buildings have been temporarily closed so workers can do a deep clean. Milwaukee has been inspecting all of its public schools for lead, with the goal of completing the review by September.

Normally, cities navigating such a crisis could depend on the Centers for Disease Control and Prevention for federal support. When the lead poisoning was first detected in January, at the tail end of the Biden administration, city health officials were immediately in contact with the CDC environmental health team, which included several of the country’s top lead poisoning experts, Milwaukee health commissioner Mike Totoraitis told me.
A group of federal experts was planning a trip to the city at the end of April. But not anymore. In early April, the Trump administration denied Milwaukee’s request for support because there was no longer anybody on the government’s payroll who could provide the lead poisoning expertise the city needs.

On April 1, the lead exposure team within the CDC’s National Center for Environmental Health was laid off as part of Health Secretary Robert F. Kennedy Jr.’s massive restructuring of the federal health department. The planned trip was canceled, and no federal officials have set foot in Milwaukee since to aid in the response. “We were talking to [the federal experts] multiple times each week,” Totoraitis said, “before they were let go.”

Milwaukee has pushed ahead with its own inspections and free blood testing clinics. The city reported on May 13 that it had replaced 10,000 lead water service lines, in an attempt to remove another possible source of exposure for local children. But it still has 55,000 more left to go, and local officials have said they would need state or federal funding to finish the job. (It is estimated to cost the city about $630 million.)

Ordinarily, Totoraitis said, the CDC experts would serve as the city’s subject matter experts, guiding them through their epidemiological investigations. Federal officials are especially adept at the detective work that can determine whether a child was exposed at home or at the school. Milwaukee officials had recent experience with lead exposures in homes but not in schools; they were relying on federal expertise to interpret lead dust levels that were found during the school inspections. Without them, they’ve been left to navigate a novel and dangerous health threat on their own. “They were there for that sole purpose of having some of the best subject matter expertise on lead poisoning, and it’s gone now,” Totoraitis said.
“Now we don’t have any experts at the CDC to reach out to.”

In this uncertain new era for public health, Milwaukee’s experience may become all too common: a city left to fend for itself amid an emergency. What in the past might have been a national scandal could become all too routine.

This is what happens when the federal government won’t respond to a health crisis

When I spoke with Totoraitis, he was already contemplating the next public health problem he would have to deal with. “If we have a new emerging health issue that I don’t have internal expertise on, and neither does the state, we don’t have anyone to call now,” Totoraitis said. “That’s a scary endeavor.”

He can’t be sure what kind of help he will be able to get from the federal government as the restructuring at the US Department of Health and Human Services continues. The department just rehired hundreds of health workers focused on workplace safety, but other teams, including the lead team, have not been brought back.

The turmoil makes it harder for local officials to keep track of which federal experts are still on staff, where they are located, and who has actually been let go. But the message is clear: President Donald Trump and his senior deputies want state and local governments to take on more of these responsibilities — without a helping hand from the feds.

The US public health system has been set up so that state and local health departments are the front line, monitoring emerging problems and providing personnel in a crisis. The federal government supplies insights that state and local officials probably don’t have on their own. That is what Totoraitis was depending on; Milwaukee was inexperienced with lead exposures in large public buildings before this year’s emergency. (One of the laid-off CDC scientists has since sought to volunteer to help Milwaukee, as Stat recently reported; the person told me they were hoping to help with community engagement, which federal officials would usually assist with.)

Health crises happen all the time. Right now, there is a small tuberculosis outbreak in Kansas; a Florida town experienced the unexpected spread of hepatitis last December.
A dozen people have been hospitalized in a listeria outbreak. And the US is currently facing its largest outbreak of measles in decades, with more than 1,000 people sickened. At one point, local officials said that the federal government had cut off funding for the outbreak response as part of a massive clawback of federal funds at the end of March, although the CDC has since sent additional workers to West Texas, where the outbreak originated.

There used to be little doubt the federal government would step up in these scenarios. But Totoraitis warns that Milwaukee’s experience of the past few months, left to fend for itself in an emergency, could soon be repeated elsewhere. “Let’s say next year this time, St. Louis is in a similar situation — they could call us, but we don’t have the bandwidth to consistently support them,” Totoraitis said. “This unfortunately is a great example of how quickly changes in the federal government can affect local government.”

Kennedy, Trump, and Elon Musk’s Department of Government Efficiency gleefully cut 10,000 jobs from US health agencies this spring. The cost of those losses will be felt every time a city is confronted with an unexpected health threat. Today, in Milwaukee, families are facing the fear and uncertainty of lead exposure — and they know federal help isn’t coming. As one Milwaukee mom told ABC News recently: “It really sends the message of, ‘You don’t matter.’”
    #american #kids #are #being #poisoned
    American kids are being poisoned by lead. Trump is letting it happen.
    For many months now, the city of Milwaukee has been grappling with a lead poisoning crisis that has forced at least four schools to temporarily close and dozens more to undergo rigorous inspections.It began on January 13, when Milwaukee first notified parents at one grade three to five school that a child had tested positive for high levels of lead in their blood. Local health officials determined the lead exposure did not occur at the child’s home, which left their school as the obvious culprit. City investigators found chipped lead paint and lead-laden dust throughout the school building; press and government reports indicate that the school district has struggled to keep up with paint maintenance requests, due to a lack of funding and manpower. Local officials soon realized they had a big problem on their hands, as the vast majority of the city’s school buildingswere built before 1978, when lead paint was banned. Lead, a dangerous neurotoxin that can lead to development problems in children after prolonged exposure, has now been detected in at least nine public schools, and at least four students have tested positive for high lead levels in their blood. So far, no children have been hospitalized for acute lead poisoning, which can be life-threatening, but the affected kids continue to be monitored. Several buildings have been temporarily closed so workers can do a deep clean. Milwaukee has been inspecting all of its public schools for lead, with the goal of completing the review by September.Normally, cities navigating such a crisis could depend on the Centers for Disease Control and Prevention for federal support. When the lead poisoning was first detected in January, at the tail end of the Biden administration, city health officials were immediately in contact with the CDC environmental health team, which included several of the country’s top lead poisoning experts, Milwaukee health commissioner Mike Totoraitis told me. 
A group of federal experts were planning a trip to the city at the end of April.But not anymore. In early April, the Trump administration denied Milwaukee’s request for support because there was no longer anybody on the government’s payroll who could provide the lead poisoning expertise the city needs.On April 1, the lead exposure team within the CDC’s National Center for Environmental Health was laid off as part of Health Secretary Robert F. Kennedy Jr.’s massive restructuring of the federal health department. The planned trip was canceled, and no federal officials have stepped foot in Milwaukee since to aid in the response. “We were talking tomultiple times each week,” Totoraitis said, “before they were let go.”Milwaukee has pushed ahead with its own inspection and free blood testing clinics. The city reported on May 13 that it had replaced 10,000 lead water service lines, in an attempt to remove another possible source of exposure for local children. But they still have 55,000 more left to go, and local officials have said they would need state or federal funding to finish the job.Ordinarily, Totoraitis said, the CDC experts would serve as the city’s subject matter experts, guiding them through their epidemiological investigations. Federal officials are especially adept at the detective work that can determine whether a child was exposed at home or at the school. Milwaukee officials had recent experience with lead exposures in homes but not in schools; they were relying on federal expertise to interpret lead dust levels that were found during the school inspections. Without them, they’ve been left to navigate a novel and dangerous health threat on their own.“They were there for that sole purpose of having some of the best subject matter expertise on lead poisoning, and it’s gone now,” Totoraitis said. 
“Now we don’t have any experts at the CDC to reach out to.”In this uncertain new era for public health, Milwaukee’s experience may become all too common: a city left to fend for itself amid an emergency. What in the past might have been a national scandal could become all too routine.This is what happens when the federal government won’t respond to a health crisisWhen I spoke with Totoraitis, he was already contemplating the next public health problem he would have to deal with. “If we have a new emerging health issue, that I don’t have internal expertise on and neither does the state, we don’t have anyone to call now,” Totoraitis said. “That’s a scary endeavor.”He can’t be sure what kind of help he will be able to get from the federal government as the restructuring at the US Department of Health and Human Services continues. The department just rehired hundreds of health workers focused on workplace safety, but other teams, including the lead team, have not been brought back.The turmoil makes it harder for local officials to keep track of which federal experts are still on staff, where they are located, and who has actually been let go. But the message is clear: President Donald Trump and his senior deputies want state and local governments to take on more of these responsibilities — without a helping hand from the feds.The US public health system has been set up so that the state and local health departments are the front line, monitoring emerging problems and providing personnel in a crisis. The federal government supplies insights that state and local officials probably don’t have on their own. That is what Totoraitis was depending on; Milwaukee was inexperienced with lead exposures in large public buildings before this year’s emergency.Health crises happen all the time. Right now, there is a small tuberculosis outbreak in Kansas; a Florida town experienced the unexpected spread of hepatitis last December. 
A dozen people have been hospitalized in a listeria outbreak. And the US is currently facing its largest outbreak of measles in decades, with more than 1,000 people sickened. At one point, local officials said that the federal government had cut off funding for the outbreak response as part of a massive clawback of federal funds at the end of March, although the CDC has since sent additional workers to West Texas where the outbreak originated.There used to be little doubt the federal government would step up in these scenarios. But Totoraitis warns that Milwaukee’s experience of the past few months, left to fend for itself in an emergency, could soon be repeated elsewhere.“Let’s say next year this time, St. Louis is in a similar situation — they could call us, but we don’t have the bandwidth to consistently support them,” Totoraitis said. “This unfortunately is a great example of how quickly changes in the federal government can affect local government.”Kids are being poisoned by lead. Trump is letting it happen.Kennedy, Trump, and Elon Musk’s Department of Government Efficiency gleefully cut 10,000 jobs from US health agencies this spring. The cost of those losses will be felt every time a city is confronted with an unexpected health threat. Today, in Milwaukee, families are facing the fear and uncertainty of lead exposure — and they know federal help isn’t coming. As one Milwaukee mom told ABC News recently: “It really sends the message of, ‘You don’t matter.’”See More: #american #kids #are #being #poisoned
    WWW.VOX.COM
    American kids are being poisoned by lead. Trump is letting it happen.
    For many months now, the city of Milwaukee has been grappling with a lead poisoning crisis that has forced at least four schools to temporarily close and dozens more to undergo rigorous inspections.It began on January 13, when Milwaukee first notified parents at one grade three to five school that a child had tested positive for high levels of lead in their blood. Local health officials determined the lead exposure did not occur at the child’s home, which left their school as the obvious culprit. City investigators found chipped lead paint and lead-laden dust throughout the school building; press and government reports indicate that the school district has struggled to keep up with paint maintenance requests, due to a lack of funding and manpower. Local officials soon realized they had a big problem on their hands, as the vast majority of the city’s school buildings (roughly 125 out of 150) were built before 1978, when lead paint was banned. Lead, a dangerous neurotoxin that can lead to development problems in children after prolonged exposure, has now been detected in at least nine public schools, and at least four students have tested positive for high lead levels in their blood. So far, no children have been hospitalized for acute lead poisoning, which can be life-threatening, but the affected kids continue to be monitored. Several buildings have been temporarily closed so workers can do a deep clean. Milwaukee has been inspecting all of its public schools for lead, with the goal of completing the review by September.Normally, cities navigating such a crisis could depend on the Centers for Disease Control and Prevention for federal support. When the lead poisoning was first detected in January, at the tail end of the Biden administration, city health officials were immediately in contact with the CDC environmental health team, which included several of the country’s top lead poisoning experts, Milwaukee health commissioner Mike Totoraitis told me. 
A group of federal experts was planning a trip to the city at the end of April. But not anymore. In early April, the Trump administration denied Milwaukee’s request for support because there was no longer anybody on the government’s payroll who could provide the lead poisoning expertise the city needs. On April 1, the lead exposure team within the CDC’s National Center for Environmental Health was laid off as part of Health Secretary Robert F. Kennedy Jr.’s massive restructuring of the federal health department. The planned trip was canceled, and no federal officials have set foot in Milwaukee since to aid in the response. “We were talking to [the federal experts] multiple times each week,” Totoraitis said, “before they were let go.”

Milwaukee has pushed ahead with its own inspections and free blood testing clinics. The city reported on May 13 that it had replaced 10,000 lead water service lines, in an attempt to remove another possible source of exposure for local children. But it still has 55,000 more to go, and local officials have said they would need state or federal funding to finish the job. (It is estimated to cost the city about $630 million.)

Ordinarily, Totoraitis said, the CDC experts would serve as the city’s subject matter experts, guiding them through their epidemiological investigations. Federal officials are especially adept at the detective work that can determine whether a child was exposed at home or at school. Milwaukee officials had recent experience with lead exposures in homes but not in schools; they were relying on federal expertise to interpret the lead dust levels found during the school inspections. Without them, they’ve been left to navigate a novel and dangerous health threat on their own.

“They were there for that sole purpose of having some of the best subject matter expertise on lead poisoning, and it’s gone now,” Totoraitis said. “Now we don’t have any experts at the CDC to reach out to.”

In this uncertain new era for public health, Milwaukee’s experience may become all too common: a city left to fend for itself amid an emergency. What in the past might have been a national scandal could become all too routine.

This is what happens when the federal government won’t respond to a health crisis

When I spoke with Totoraitis, he was already contemplating the next public health problem he would have to deal with. “If we have a new emerging health issue that I don’t have internal expertise on and neither does the state, we don’t have anyone to call now,” Totoraitis said. “That’s a scary endeavor.”

He can’t be sure what kind of help he will be able to get from the federal government as the restructuring at the US Department of Health and Human Services continues. The department just rehired hundreds of health workers focused on workplace safety, but other teams, including the lead team, have not been brought back. The turmoil makes it harder for local officials to keep track of which federal experts are still on staff, where they are located, and who has actually been let go. But the message is clear: President Donald Trump and his senior deputies want state and local governments to take on more of these responsibilities — without a helping hand from the feds.

The US public health system has been set up so that state and local health departments are the front line, monitoring emerging problems and providing personnel in a crisis. The federal government supplies insights that state and local officials probably don’t have on their own. That is what Totoraitis was depending on; Milwaukee was inexperienced with lead exposures in large public buildings before this year’s emergency. (One of the laid-off CDC scientists has since sought to volunteer to help Milwaukee, as Stat recently reported; the person told me they were hoping to help with community engagement, which federal officials would usually assist with.)

Health crises happen all the time. Right now, there is a small tuberculosis outbreak in Kansas; a Florida town experienced the unexpected spread of hepatitis last December. A dozen people have been hospitalized in a listeria outbreak. And the US is currently facing its largest outbreak of measles in decades, with more than 1,000 people sickened. At one point, local officials said the federal government had cut off funding for the outbreak response as part of a massive clawback of federal funds at the end of March, although the CDC has since sent additional workers to West Texas, where the outbreak originated.

There used to be little doubt the federal government would step up in these scenarios. But Totoraitis warns that Milwaukee’s experience of the past few months, left to fend for itself in an emergency, could soon be repeated elsewhere. “Let’s say next year this time, St. Louis is in a similar situation — they could call us, but we don’t have the bandwidth to consistently support them,” Totoraitis said. “This unfortunately is a great example of how quickly changes in the federal government can affect local government.”

Kids are being poisoned by lead. Trump is letting it happen.

Kennedy, Trump, and Elon Musk’s Department of Government Efficiency gleefully cut 10,000 jobs from US health agencies this spring. The cost of those losses will be felt every time a city is confronted with an unexpected health threat. Today, in Milwaukee, families are facing the fear and uncertainty of lead exposure — and they know federal help isn’t coming. As one Milwaukee mom told ABC News recently: “It really sends the message of, ‘You don’t matter.’”
  • ChatGPT gave wildly inaccurate translations — to try and make users happy

    Enterprise IT leaders are becoming uncomfortably aware that generative AI (genAI) technology is still a work in progress, and that buying into it is like spending several billion dollars to participate in an alpha test — not even a beta test, but an early alpha, where coders can barely keep up with bug reports. 

    For people who remember the first three seasons of Saturday Night Live, genAI is the ultimate Not-Ready-for-Primetime algorithm. 

    One of the latest pieces of evidence for this comes from OpenAI, which had to sheepishly pull back a recent version of ChatGPT (GPT-4o) when it — among other things — delivered wildly inaccurate translations. 

    Lost in translation

    Why? In the words of a CTO who discovered the issue, “ChatGPT didn’t actually translate the document. It guessed what I wanted to hear, blending it with past conversations to make it feel legitimate. It didn’t just predict words. It predicted my expectations. That’s absolutely terrifying, as I truly believed it.”

    OpenAI said ChatGPT was just being too nice.

    “We have rolled back last week’s GPT‑4o update in ChatGPT so people are now using an earlier version with more balanced behavior. The update we removed was overly flattering or agreeable — often described as sycophantic,” OpenAI explained, adding that in that “GPT‑4o update, we made adjustments aimed at improving the model’s default personality to make it feel more intuitive and effective across a variety of tasks. We focused too much on short-term feedback and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT‑4o skewed towards responses that were overly supportive but disingenuous.

    “…Each of these desirable qualities, like attempting to be useful or supportive, can have unintended side effects. And with 500 million people using ChatGPT each week, across every culture and context, a single default can’t capture every preference.”

    OpenAI was being deliberately obtuse. The problem was not that the app was being too polite and well-mannered. This wasn’t an issue of it emulating Miss Manners.

    I am not being nice if you ask me to translate a document and I tell you what I think you want to hear. This is akin to Excel taking your financial figures and making the net income much larger because it thinks that will make you happy.

    In the same way that IT decision-makers expect Excel to calculate numbers accurately regardless of how the results may affect their mood, they expect that the translation of a Chinese document won’t make things up.

    OpenAI can’t paper over this mess by saying that “desirable qualities like attempting to be useful or supportive can have unintended side effects.” Let’s be clear: giving people wrong answers will have the precisely expected effect — bad decisions.

    Yale: LLMs need data labeled as wrong

    Alas, OpenAI’s happiness efforts weren’t the only bizarre genAI news of late. Researchers at Yale University explored a fascinating theory: if an LLM is trained only on information labeled as correct — whether or not the data actually is correct is immaterial — it has no chance of identifying flawed or highly unreliable data, because it has never learned what such data looks like. 

    In short, if it’s never been trained on data labeled as false, how could it possibly recognize it? 
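    The Yale argument can be illustrated with a toy sketch (hypothetical code, not the researchers’ actual setup): a trivial model that learns only the label distribution it saw in training. If every training example carries the “correct” label, the label “false” gets zero probability, so the model can never flag anything as unreliable.

    ```python
    from collections import Counter

    def train_label_prior(labels):
        """Learn nothing but the label distribution seen in training."""
        counts = Counter(labels)
        total = sum(counts.values())
        return {label: n / total for label, n in counts.items()}

    def classify(document, prior):
        """Pick the most probable label; labels never seen in training get zero weight."""
        return max(prior, key=prior.get)

    # Hypothetical training set: every document is labeled "correct",
    # even though some of the underlying claims are wrong.
    prior = train_label_prior(["correct"] * 1000)

    print(prior.get("false", 0.0))                         # 0.0 — "false" was never seen
    print(classify("The moon is made of cheese.", prior))  # "correct"
    ```

    The sketch is deliberately crude, but the failure mode it shows is the one the Yale researchers describe: with no negative examples, “correct” is the only answer the model can give.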

    Even the US government is finding that genAI claims go too far. And when the feds say a lie is going too far, that is quite a statement.

    FTC: GenAI vendor makes false, misleading claims

    The US Federal Trade Commission (FTC) found that one large language model (LLM) vendor, Workado, was deceiving people with flawed claims about the accuracy of its LLM detection product. It wants that vendor to “maintain competent and reliable evidence showing those products are as accurate as claimed.”

    Customers “trusted Workado’s AI Content Detector to help them decipher whether AI was behind a piece of writing, but the product did no better than a coin toss,” said Chris Mufarrige, director of the FTC’s Bureau of Consumer Protection. “Misleading claims about AI undermine competition by making it harder for legitimate providers of AI-related products to reach consumers.

    “…The order settles allegations that Workado promoted its AI Content Detector as ‘98 percent’ accurate in detecting whether text was written by AI or human. But independent testing showed the accuracy rate on general-purpose content was just 53 percent,” according to the FTC’s administrative complaint. 

    “The FTC alleges that Workado violated the FTC Act because the ‘98 percent’ claim was false, misleading, or non-substantiated.”
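    To make the coin-toss comparison concrete, here is a hedged sketch (using made-up data, not the FTC’s actual test set) of why 53 percent accuracy on balanced content is barely distinguishable from random guessing:

    ```python
    import random

    def accuracy(predict, samples):
        """Fraction of (text, label) pairs a detector gets right."""
        return sum(predict(text) == label for text, label in samples) / len(samples)

    random.seed(0)

    # Hypothetical balanced test set: half AI-written, half human-written.
    samples = [(f"doc {i}", "ai" if i % 2 == 0 else "human") for i in range(10_000)]

    def coin_toss(text):
        # Ignores the text entirely — a detector with no real signal.
        return random.choice(["ai", "human"])

    print(round(accuracy(coin_toss, samples), 2))  # hovers around 0.5; 0.53 is barely better
    ```

    On a balanced set, even a detector that always answers “ai” scores exactly 50 percent, which is why a 53 percent result supports the FTC’s “no better than a coin toss” characterization.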

    There is a critical lesson here for enterprise IT: genAI vendors are making major claims for their products without meaningful documentation. You think genAI makes stuff up? Imagine what comes out of those vendors’ marketing departments. 
    WWW.COMPUTERWORLD.COM
  • Microsoft hasn’t bowed to Trump — and the company is thriving




    Ever since Donald J. Trump was re-elected president, we’ve witnessed a disheartening spectacle: big tech companies bending their knees to him, hoping to get him to kill antitrust actions against them and defend them from European Union rules and fines.
    Meta founder and CEO Mark Zuckerberg, Amazon founder and Executive Chairman Jeff Bezos, Google CEO Sundar Pichai, and Apple CEO Tim Cook have all in one way or another shown or declared support for the president’s agenda, especially his opposition to DEI (diversity, equity and inclusion) programs. Notably, all four attended his Jan. 20 inauguration and were front and center during the proceedings.
    Zuckerberg killed DEI efforts at Meta, abandoned attempts to contain misinformation on his services, makes regular pilgrimages to Mar-a-Lago, and called then-candidate Trump a “badass” after last year’s assassination attempt. He sounded like nothing so much as Trump himself (while using words of more than one syllable) when he told Joe Rogan on a podcast: “The corporate world is pretty culturally neutered. A culture that celebrates aggression a bit more has its own merits. Masculine energy, I think, is good.”
    Bezos killed DEI at Amazon. As the owner of The Washington Post, he also squashed the newspaper’s endorsement of then-Vice President Kamala Harris last fall, killed a cartoon of tech leaders and Mickey Mouse bowing down to Trump, and ruled that the paper’s editorial and opinion pages will become right-wing, covering only “personal liberty and free markets,” with no opposing viewpoints allowed.
    Pichai killed off DEI efforts at Google and makes regular visits to see Trump in Florida. Cook is a bit of an outlier — although he attended the inauguration, he didn’t kill DEI at Apple and has made noises about working with Trump on tariff issues.
    The four companies haven’t gotten anything (yet) for their efforts; legal actions against them begun under Trump’s predecessor are proceeding. Google faces being broken up after a judge ruled it illegally monopolized the advertising tech market. Meta is being prosecuted for illegally monopolizing the social media market by buying Instagram and WhatsApp and could be broken up as well. Amazon has been charged by the FTC with protecting its online retail monopoly by imposing fees on third-party sellers and favoring its own services over theirs. Apple has been sued by the Department of Justice for a variety of antitrust actions in protecting and extending its monopoly in the smartphone market.
    And while Trump has made statements about EU regulators — the White House last month criticized recent fines against Meta and Apple as a “novel form of economic extortion” — he has done little to get the EU to halt its actions against the companies.
    Microsoft takes on Trump
    Meanwhile, Microsoft not only won’t valorize Trump, it’s also pushing back against him. The company has publicly supported its DEI efforts rather than killing them. In December, the company’s Chief Diversity Officer, Lindsay-Rae McIntyre, wrote on LinkedIn that Microsoft’s DEI efforts are vital to the company’s success: “The business case for D&I [diversity and inclusion] is not only a constant, but is stronger than ever, reinforcing our belief that a diverse and inclusive workforce is crucial for innovation and success.”
    When worries surfaced that Trump might require American tech companies to suspend their cloud operations in Europe, or turn Europeans’ data over to the federal government as part of a trade war, Microsoft Vice Chair and President Brad Smith wrote in a blog post that the company won’t turn over the data or suspend European cloud operations. In fact, he said the company is expanding them. He also said Microsoft would sue the Trump administration to protect them, if necessary.
    Two days after that, Microsoft dropped the big law firm of Simpson Thacher & Bartlett, which had agreed to give the administration $125 million in free legal work after threats from Trump. Microsoft hired Jenner & Block to take its place — and Jenner & Block sued the Trump administration instead of giving in to it.
    Microsoft becomes the world’s most valuable company
    You might expect that after all that, Trump would have publicly attacked Microsoft or used the power of his office to go after the company. So far, that hasn’t happened. And unlike Meta, Apple, Google, and Amazon, Microsoft has thrived since Trump took office.
    Its stock price inched up from $434 a share just before Trump’s inauguration past $445 this week, while the share prices of the others have all declined, sometimes significantly. Along the way, Microsoft became the world’s most valuable company, with a market cap approaching $3.3 trillion.
    Gauging by the company’s most recent quarterly results, even better times may be ahead. The New York Times had this to say about the results: “Overall, Microsoft’s results showed unexpected strength in its business. Sales surpassed $70 billion, up 13% from the same period a year earlier. Profit rose to $25.8 billion, up 18%. The results far exceeded Wall Street’s expectations. Despite the economic uncertainty, the company predicted more strength ahead, saying revenue would surpass $73 billion in the current quarter.”
    Not out of the woods yet
    All that said, Microsoft is being investigated by the feds for possible antitrust violations having to do with AI and cloud computing. That investigation, like the others, wasn’t begun by the Trump administration; it was set in motion during the Biden administration. So far, Microsoft’s actions don’t appear to have had any effect on the investigation — or the company.
    Microsoft CEO Satya Nadella has shown that it’s possible for a company to maintain its values under Trump and thrive. Amazon, Apple, Google, and Meta should follow suit.

    Source: https://www.computerworld.com/article/3983406/microsoft-hasnt-bowed-to-trump-and-the-company-is-thriving.html
    #microsoft #hasnt #bowed #trump #and #the #company #thriving
    Microsoft hasn’t bowed to Trump — and the company is thriving
    Ever since Donald J. Trump was re-elected president, we’ve witnessed a disheartening spectacle: big tech companies bending their knees to him, hoping to get him to kill antitrust actions against them and defend them from European Union rules and fines.  Meta founder and CEO Mark Zuckerberg, Amazon founder and Executive Chairman Jeff Bezos, Google CEO Sundar Pichai, and Apple CEO Tim Cook have all in one way or another shown or declared support for the president’s agenda, especially his opposition to DEI (diversity, equity and inclusion) programs. Notably, all four attended his Jan. 20 inauguration and were front and center during the proceedings.  Zuckerberg killed DEI efforts at Meta, abandoned attempts to contain misinformation on his services, makes regular pilgrimages to Mar-a-Lago, and called then-candidate Trump a “badass” after last year’s assassination attempt. He sounded like nothing so much as Trump himself (while using words of more than one syllable) when he told Joe Rogan on a podcast: “The corporate world is pretty culturally neutered. A culture that celebrates aggression a bit more has its own merits. Masculine energy, I think, is good.” Bezos killed DEI at Amazon. As the owner of The Washington Post he also squashed the newspaper’s endorsement of then-Vice President Kamala Harris last fall, killed a cartoon of tech leaders and Mickey Mouse bowing down to Trump, and ruled that the paper’s editorial and opinion pages will become right-wing, covering only “personal liberty and free markets,” with no opposing viewpoints allowed. Pichai killed off DEI efforts at Google and makes regular visits to see Trump in Florida. Cook is a bit of an outlier — although he attended the inauguration, he didn’t kill DEI at Apple and has made noises about working with Trump on tariff issues. The four companies haven’t gotten anything (yet) for their efforts; legal action against them begun under Trump’s predecessor are proceeding. 
Google faces being broken up after a judge ruled it illegally monopolized the advertising tech market. Meta is being prosecuted for illegally monopolizing the social media market by buying Instagram and What’s App and could be broken up as well. Amazon has been charged by the FTC with protecting its online retail monopoly by imposing fees on third-party sellers and favoring its own services over theirs. Apple has been sued by the Department of Justice for a variety of antitrust actions in protecting and extending its monopoly in the smartphone market. And while Trump has made statements about EU regulators — the White House last month criticized recent fines against Meta and Apple as a “novel form of economic extortion” — but has done little to get the EU to halt its actions against the companies. Microsoft takes on Trump Meanwhile, Microsoft not only won’t valorize Trump, it’s also pushing back against him. The company has publicly supported its DEI efforts rather than killing them. In December, the company’s Chief Diversity Officer, Lindsay-Rae McIntyr, wrote on LinkedIn that Microsoft’s DEI efforts are vital to the company’s success: “The business case for D&I [diversity and inclusion] is not only a constant, but is stronger than ever, reinforcing our belief that a diverse and inclusive workforce is crucial for innovation and success.” When worries surfaced that Trump might require American tech companies to suspend their cloud operations in Europe, or turn Europeans’ data over to the federal government as part of a trade war, Microsoft Vice Chair and President Brad Smith wrote in a blog post that he won’t turn over the data or suspend European cloud operations. In fact, he said the company is expanding them. He also said he would sue the Trump administration to protect them, if necessary.   
Two days after that, Microsoft dropped the big law firm of Simpson Thacher & Bartlett, which had agreed to give the administration $125 million in free legal work after threats from Trump. Microsoft hired Jenner & Block to take its place — and Jenner & Block sued the Trump administration instead of giving in to it. Microsoft becomes the world’s most valuable company You might expect that after all that, Trump would have publicly attacked Microsoft or used the power of his office to go after the company. So far, that hasn’t happened. And unlike Meta, Apple, Google, and Amazon, Microsoft has thrived since Trump took office.  Its stock price inched up from $434 a share just before Trump’s inauguration past $445 this week, while the share prices of the others have all declined, sometimes significantly. Along the way, Microsoft became the world’s most valuable company, with a market cap approaching $3.3 trillion.  Gauging by the company’s most recent quarterly results, even better times may be ahead. The New York Times had this to say about the results: “Overall, Microsoft’s results showed unexpected strength in its business. Sales surpassed $70 billion, up 13% from the same period a year earlier. Profit rose to $25.8 billion, up 18%. The results far exceeded Wall Street’s expectations. Despite the economic uncertainty, the company predicted more strength ahead, saying revenue would surpass $73 billion in the current quarter.” Not out of the woods yet All that said, Microsoft is being investigated by the feds for possible antitrust violations having to do with AI and cloud computing. That investigation, like the others, wasn’t begun by the Trump administration; it was set in motion during the Biden administration. So far, Microsoft’s actions don’t appear to have had any effect on the suit — or the company. Microsoft CEO Satya Nadella has shown that it’s possible for a company to maintain its values under Trump and thrive. 
Amazon, Apple, Google, and Meta should follow suit. Source: https://www.computerworld.com/article/3983406/microsoft-hasnt-bowed-to-trump-and-the-company-is-thriving.html #microsoft #hasnt #bowed #trump #and #the #company #thriving
    WWW.COMPUTERWORLD.COM
    Microsoft hasn’t bowed to Trump — and the company is thriving
    Ever since Donald J. Trump was re-elected president, we’ve witnessed a disheartening spectacle: big tech companies bending their knees to him, hoping to get him to kill antitrust actions against them and defend them from European Union rules and fines.  Meta founder and CEO Mark Zuckerberg, Amazon founder and Executive Chairman Jeff Bezos, Google CEO Sundar Pichai, and Apple CEO Tim Cook have all in one way or another shown or declared support for the president’s agenda, especially his opposition to DEI (diversity, equity and inclusion) programs. Notably, all four attended his Jan. 20 inauguration and were front and center during the proceedings.  Zuckerberg killed DEI efforts at Meta, abandoned attempts to contain misinformation on his services, makes regular pilgrimages to Mar-a-Lago, and called then-candidate Trump a “badass” after last year’s assassination attempt. He sounded like nothing so much as Trump himself (while using words of more than one syllable) when he told Joe Rogan on a podcast: “The corporate world is pretty culturally neutered. A culture that celebrates aggression a bit more has its own merits. Masculine energy, I think, is good.” Bezos killed DEI at Amazon. As the owner of The Washington Post he also squashed the newspaper’s endorsement of then-Vice President Kamala Harris last fall, killed a cartoon of tech leaders and Mickey Mouse bowing down to Trump, and ruled that the paper’s editorial and opinion pages will become right-wing, covering only “personal liberty and free markets,” with no opposing viewpoints allowed. Pichai killed off DEI efforts at Google and makes regular visits to see Trump in Florida. Cook is a bit of an outlier — although he attended the inauguration, he didn’t kill DEI at Apple and has made noises about working with Trump on tariff issues. The four companies haven’t gotten anything (yet) for their efforts; legal action against them begun under Trump’s predecessor are proceeding. 
Google faces being broken up after a judge ruled it illegally monopolized the advertising tech market. Meta is being prosecuted for illegally monopolizing the social media market by buying Instagram and What’s App and could be broken up as well. Amazon has been charged by the FTC with protecting its online retail monopoly by imposing fees on third-party sellers and favoring its own services over theirs. Apple has been sued by the Department of Justice for a variety of antitrust actions in protecting and extending its monopoly in the smartphone market. And while Trump has made statements about EU regulators — the White House last month criticized recent fines against Meta and Apple as a “novel form of economic extortion” — but has done little to get the EU to halt its actions against the companies. Microsoft takes on Trump Meanwhile, Microsoft not only won’t valorize Trump, it’s also pushing back against him. The company has publicly supported its DEI efforts rather than killing them. In December, the company’s Chief Diversity Officer, Lindsay-Rae McIntyr, wrote on LinkedIn that Microsoft’s DEI efforts are vital to the company’s success: “The business case for D&I [diversity and inclusion] is not only a constant, but is stronger than ever, reinforcing our belief that a diverse and inclusive workforce is crucial for innovation and success.” When worries surfaced that Trump might require American tech companies to suspend their cloud operations in Europe, or turn Europeans’ data over to the federal government as part of a trade war, Microsoft Vice Chair and President Brad Smith wrote in a blog post that he won’t turn over the data or suspend European cloud operations. In fact, he said the company is expanding them. He also said he would sue the Trump administration to protect them, if necessary.   
Two days after that, Microsoft dropped the big law firm of Simpson Thacher & Bartlett, which had agreed to give the administration $125 million in free legal work after threats from Trump. Microsoft hired Jenner & Block to take its place; Jenner & Block sued the Trump administration instead of giving in to it.

Microsoft becomes the world's most valuable company

You might expect that after all that, Trump would have publicly attacked Microsoft or used the power of his office to go after the company. So far, that hasn't happened. And unlike Meta, Apple, Google, and Amazon, Microsoft has thrived since Trump took office. Its stock price inched up from $434 a share just before Trump's inauguration to more than $445 this week, while the share prices of the others have all declined, sometimes significantly. Along the way, Microsoft became the world's most valuable company, with a market cap approaching $3.3 trillion.

Gauging by the company's most recent quarterly results, even better times may be ahead. The New York Times had this to say about the results: "Overall, Microsoft's results showed unexpected strength in its business. Sales surpassed $70 billion, up 13% from the same period a year earlier. Profit rose to $25.8 billion, up 18%. The results far exceeded Wall Street's expectations. Despite the economic uncertainty, the company predicted more strength ahead, saying revenue would surpass $73 billion in the current quarter."

Not out of the woods yet

All that said, Microsoft is being investigated by the feds for possible antitrust violations having to do with AI and cloud computing. That investigation, like the other cases, wasn't begun by the Trump administration; it was set in motion during the Biden administration. So far, Microsoft's actions don't appear to have had any effect on the investigation, or on the company. Microsoft CEO Satya Nadella has shown that it's possible for a company to maintain its values under Trump and thrive.
Amazon, Apple, Google, and Meta should follow suit.