Schneier on Security: Trust Issues

Source URL: https://www.schneier.com/blog/archives/2024/12/trust-issues.html
Source: Schneier on Security
Title: Trust Issues

Feedly Summary: For a technology that seems startling in its modernity, AI sure has a long history. Google Translate, OpenAI chatbots, and Meta AI image generators are built on decades of advancements in linguistics, signal processing, statistics, and other fields going back to the early days of computing—and, often, on seed funding from the U.S. Department of Defense. But today’s tools are hardly the intentional product of the diverse generations of innovators that came before. We agree with Morozov that the “refuseniks,” as he calls them, are wrong to see AI as “irreparably tainted” by its origins. AI is better understood as a creative, global field of human endeavor that has been largely captured by U.S. venture capitalists, private equity, and Big Tech. But that was never the inevitable outcome, and it doesn’t need to stay that way…

AI Summary and Description: Yes

**Summary:** The text discusses the evolution of AI from its military roots to its current dominance by corporate interests, emphasizing the importance of trust and transparency in AI development. It critiques the lack of accountability in corporate AI practices while illustrating emerging models that prioritize public interest.

**Detailed Description:**

The text provides a comprehensive analysis of the trajectory of artificial intelligence (AI), touching upon its historical origins, current state, and future potential. Here are the main points:

– **Historical Context:**
  – AI has a long history, with roots in linguistics and signal processing supported by military funding, but it has evolved far beyond its original context.
  – Much as the internet transformed from a military project into a corporate tool, AI is now largely shaped by venture capital and corporate interests.

– **Corporate Control:**
  – The current AI landscape is dominated by major companies such as OpenAI, Google, and Meta, which operate in closed environments and dictate how AI models are trained, the values they embody, and the uses to which they are put.
  – The secrecy surrounding corporate AI practices raises trust concerns; users are unaware of the biases and data-sourcing methods behind widely used models.

– **Challenges of Trust:**
  – Trust is paramount when relying on AI for important tasks, yet the public has little experience navigating the complexities of potentially untrustworthy AI.
  – The lack of transparency prevents users from understanding AI capabilities and the moral framework guiding its development.

– **Open Models as Alternatives:**
  – Initiatives like BigScience’s BLOOM and Singapore’s SEA-LION demonstrate alternatives that prioritize openness and public benefit.
  – These models provide a more trustworthy foundation, one aligned with public interests rather than profitability.

– **Future Vision:**
  – AI can evolve beyond profit maximization; investing in public-interest AI could improve accountability and societal integration.
  – The text advocates treating AI as a public good akin to essential infrastructure, encouraging democratic governance and civil-society participation in its development.

In summary, the text argues for redirecting AI’s trajectory toward transparency, public accountability, and ethical governance so that it serves society’s interests, challenging the prevailing corporate-driven mindset. This perspective can guide security and compliance professionals in ensuring that AI implementations are ethical and aligned with public values.