Microsoft has acknowledged that it provided advanced AI tools and cloud computing services to the Israeli military during its ongoing military operations in Gaza. The tech giant said the tools were intended to assist in locating and rescuing Israeli hostages.

In a blog post, Microsoft revealed that it offered Azure cloud storage, AI-powered translation services, and professional support. The company claimed it carefully evaluated each request, approving some while rejecting others, to balance saving lives with protecting civilian rights and privacy.

This admission follows an Associated Press investigation uncovering Microsoft's ties to Israel's Defense Ministry. The report suggested Azure services helped process intelligence gathered through surveillance, data that may have fed AI-assisted targeting systems.

The disclosure has drawn criticism from human rights advocates, who warn that AI can misidentify targets and cause civilian casualties. Responding to mounting internal and external pressure, Microsoft has initiated an internal audit and brought in an independent firm to assess its role, but has declined to disclose further details.

Though Microsoft stated it found no evidence that its AI directly contributed to civilian harm, it conceded that it cannot monitor how its software is used once deployed on clients' own systems.

Experts say this marks a pivotal shift: a tech firm setting ethical boundaries for a national military. "It's astonishing," said Emelia Probasco of Georgetown, "that a company is dictating use policies in wartime."

As the Gaza death toll exceeds 50,000, including thousands of women and children, the role of corporate AI in warfare is under renewed scrutiny.

#TechInWarfare #MicrosoftAI #GazaConflict #HumanRightsConcerns