Wired: Google Lifts a Ban on Using Its AI for Weapons and Surveillance

Source URL: https://www.wired.com/story/google-responsible-ai-principles/
Source: Wired
Title: Google Lifts a Ban on Using Its AI for Weapons and Surveillance

Feedly Summary: Google published principles in 2018 barring its AI technology from being used for sensitive purposes. Weeks into President Donald Trump’s second term, those guidelines are being overhauled.

AI Summary and Description: Yes

**Summary:** Google has revised its AI principles to allow for broader usage of artificial intelligence technologies. The new guidelines remove previous commitments against developing harmful technologies and emphasize a more flexible approach that includes human oversight and alignment with legal and human rights standards. This shift reflects the evolving landscape of AI usage and international governance.

**Detailed Description:**
Google’s recent overhaul of its AI principles marks a significant shift in its approach to artificial intelligence and advanced technologies. The company has removed commitments that previously restricted its use of potentially harmful technologies and broadened its scope for research and development. Key points of this content include:

– **Changes to AI Principles:**
  – The old principles included promises not to engage in projects that could cause harm, develop weapons, or utilize surveillance methods that violate international norms.
  – These commitments have been removed, offering the company more latitude in exploring sensitive AI applications.

– **Context for the Changes:**
  – The revisions come against a backdrop of increased AI utilization, evolving standards, and geopolitical competition.
  – Google initially introduced these principles in 2018 amid internal protests regarding its involvement in a U.S. military project.

– **New Commitments:**
  – While the earlier prohibitions have been removed, the updated principles now emphasize human oversight and due diligence.
  – Google expresses an intention to align its initiatives with recognized human rights and legal principles.

– **Statements from Leadership:**
  – Executives stressed the necessity of democratic leadership in AI development, highlighting core values such as freedom and equality.
  – The company aims to collaborate with other organizations that share these values to create AI technologies that support global growth and national security.

– **Goals for Future AI Initiatives:**
  – The company has set ambitious goals for responsible and collaborative AI efforts.
  – The updated principles add a focus on respecting intellectual property rights while moving away from earlier commitments to scientific excellence and social benefit.

This transition in Google’s AI governance reflects a broader trend in the tech industry as companies weigh pressure for responsible practices against the pursuit of innovative technologies. Security and compliance professionals should pay close attention to these shifts, as they may influence regulatory perspectives and standards regarding AI use in both the private and public sectors.