Google Alters AI Ethics Policy, Removes Pledge to Avoid Weapons and Surveillance Use

Google has updated its guiding principles on artificial intelligence, removing earlier pledges never to use AI in weapons or mass surveillance. The change, announced Tuesday, marks a shift in how the technology giant approaches AI ethics and national security.

The move comes weeks after Google’s chief executive, Sundar Pichai, attended the inauguration of US President Donald Trump, and is being viewed as a pivotal moment in AI governance and corporate responsibility.

Google Calls for AI Leadership in Democratic Nations


According to Fortune’s report, Google stressed that AI should be developed and led by democratic nations that value freedom, equality, and human rights.

The blog post, written by DeepMind chief Demis Hassabis and Google senior vice-president of research James Manyika, also highlighted the need for collaboration among governments, companies, and institutions to ensure AI is used responsibly.

The revised guidelines advocate a role for AI in national security, economic growth, and public safety, aligning Google’s vision with larger geopolitical interests. However, the removal of explicit prohibitions on AI-powered weapons and surveillance tools has raised ethical concerns.


The Great Debate Over Ethical Limits of AI

The rapid evolution of AI has sparked an increasingly heated debate over its ethical implications, from its use in warfare to surveillance. While some critics fear that corporate interests will prevail over ethics, proponents argue that AI-driven national security projects are essential in today’s climate of relentless competition.

Google’s deep investments in AI infrastructure underscore the company’s growing commitment to the field. That strategic focus is evident in the integration of its Gemini AI model into Google Search and Pixel devices.

Despite its changing business ethos, the search giant has also faced pushback from its workforce over AI ethics. In 2018, the company withdrew from a US Department of Defense contract, “Project Maven,” after protests by thousands of employees who feared the project’s AI-driven analysis of drone footage could aid lethal military operations.

Google’s Shift Follows Trump’s AI Policy Changes

The new principles mark a departure from the company’s 2018 pledge not to develop artificial intelligence for weapons systems or for intrusive surveillance.

At the time, Pichai assured employees and the public of the company’s commitment to upholding ethical parameters. Those assurances are now absent from the guidelines.

The policy shift comes as the Trump administration has rolled back a Biden-era executive order mandating AI safety measures. The rescinded policy had required companies to disclose AI-related risks to national security and public safety.

This loosening of regulatory oversight gives AI firms room to advance their technologies without the constraints of previous ethical commitments. Google maintains that it will issue annual reports on its AI progress, in an effort to reassure the public of its commitment to transparency despite the controversial policy shift.

AI is becoming harder to remove from daily life as more tech firms build it into their businesses, which makes its responsible use all the more important, whatever conveniences the technology offers.

With its revised AI principles, Google appears to be preparing for a more active role in national security and defense-related applications of AI.

While the company insists its AI work is guided by democratic values, critics say removing the guardrails could open a pathway toward increased militarization of AI and expanded government surveillance.

