The Register: Fining Big Tech isn’t working. Make them give away illegally trained LLMs as public domain

Source URL: https://www.theregister.com/2024/12/22/ai_poisoned_tree/
Source: The Register
Title: Fining Big Tech isn’t working. Make them give away illegally trained LLMs as public domain

Feedly Summary: It’s all made from our data, anyway, so it should be ours to use as we want
Opinion Last year, I wrote a piece here on El Reg about being murdered by ChatGPT as an illustration of the potential harms through the misuse of large language models and other forms of AI.…

AI Summary and Description: Yes

Summary: The text critiques the current landscape of AI development, particularly relating to personal data processing and the ethical implications of large language models (LLMs). It emphasizes the urgent need for regulatory reform and accountability from technology companies, arguing for stronger legal frameworks and potential public domain solutions to address unlawful practices and protect individual rights.

Detailed Description:
The piece elaborates on several significant issues pertaining to AI and privacy:

– **Ethical Concerns**: The author reflects on ethical challenges surrounding AI, especially in how companies utilize personal data to train models like ChatGPT without proper consent.
– **Legal Perspectives**: It presents an argument advocating for stronger legal measures against the unlawful harvesting of data, drawing parallels with the legal theory of “fruit of the poisonous tree,” suggesting that models trained on illegally obtained data should face deletion.
– **Environmental Impact**: The discussion also notes the considerable environmental cost of training large AI models, arguing that ethical concerns about sustainability should be weighed alongside the legality of data sourcing.
– **Proposed Solutions**:
  – **Public Domain Proposal**: The text suggests that LLMs found to have been trained unlawfully should be released into the public domain, ensuring that no company profits from illegal activity and strengthening economic and legal accountability.
  – **Incentives for Compliance**: The author calls for mechanisms that prevent companies from profiting from illegal actions and thereby incentivize respect for privacy and intellectual property.
  – **Need for Global Cooperation**: It discusses the necessity of international collaboration, similar to existing treaties, to enforce compliance effectively across borders.
  – **Future Legislation**: The author stresses the urgent need for legislative action to create deterrents against AI-related violations, reflecting on past mistakes in data regulation and the importance of learning from them to establish a robust legal framework.

Overall, the text articulates the complexities of the current regulatory environment surrounding AI, highlighting gaps in accountability and the pressing need for reform to safeguard individual rights and promote ethical AI practices. It is particularly relevant for professionals in AI security, privacy, and compliance roles as they navigate the changing landscape of technology law and ethics.