Microsoft: Our Tech Isn’t Being Used to Hurt Civilians in Gaza
Microsoft claims it has found “no evidence” that its AI technologies or its cloud computing service Microsoft Azure have been used to target or harm civilians during the ongoing conflict in Gaza.

In an official statement, the tech giant says it conducted an internal review into the issue and engaged an external firm to undertake additional fact-finding. Microsoft says the review process included interviewing dozens of employees and assessing military documents.

Microsoft did confirm that it provides Israel's Ministry of Defense with software, professional services, Azure cloud services, and Azure AI services such as language translation, as well as cybersecurity support, but denied these technologies were being used to target civilians. However, Microsoft pointed out that it “does not have visibility into how customers use our software on their own servers or other devices,” and that it does not have “visibility into the IMOD’s government cloud operations,” which use other providers. “By definition, our reviews do not cover these situations,” said a Microsoft spokesperson.

The statement is unlikely to silence Microsoft’s harshest critics on the issue. In May, the company axed two employees who disrupted its 50th-anniversary event to protest the use of its tech by Israel. Meanwhile, investigations by outlets such as The Associated Press have alleged that commercially available AI models produced by Microsoft and OpenAI were used to select bombing targets in Gaza and Lebanon. Citing internal company information shared with the AP, the report noted that the Israeli military’s usage of Microsoft and OpenAI artificial intelligence in March 2024 was nearly 200 times higher than before the Oct. 7 attack.
Hossam Nasr, an organizer of No Azure for Apartheid, criticized the validity of Microsoft’s statement in an interview with GeekWire earlier this week, saying it was “filled with both lies and contradictions.” Nasr, a former Microsoft employee, said the company claims “that their technology is not being used to harm people in Gaza,” but highlighted its admission that “they don’t have insight into how their technologies are being used.”

Microsoft isn't the only Big Tech firm contending with allegations from staff that it is supporting harm to civilians. In 2024, Google axed 28 employees who participated in an office sit-in protest against the search giant's role in Project Nimbus, a billion-dollar cloud contract between Google, Amazon, and Israel's government and military.