Source URL: https://algorithmwatch.org/en/input-eu-systemic-risks-stemming/
Source: AlgorithmWatch
Title: “Risks come not just from technology” – Input to the EU on systemic risks stemming from online platforms and search engines
Feedly Summary: AlgorithmWatch has submitted expert input to the EU on systemic risks stemming from online platforms and search engines. We argued that risks come not just from technology, but also from (1) the attitudes of companies and (2) a lack of transparency around enforcement. This input was submitted at the invitation of the European Board for Digital Services and the European Commission, to help prepare their first report on systemic risks under the Digital Services Act. Read our full response here:
AI Summary and Description: Yes
Summary: The text discusses research findings regarding the Digital Services Act (DSA) and systemic risks stemming from online platforms and search engines, including risks from Large Language Models (LLMs), emphasizing the importance of transparency, engagement, and evidence collection from diverse stakeholders. It raises concerns about compliance and the readiness of tech companies to address systemic risks, and calls for improved mechanisms for researchers to contribute effectively to enforcement under the DSA.
Detailed Description: The content primarily addresses the implications of the Digital Services Act (DSA) for systemic risks, including those related to Large Language Models (LLMs), and emphasizes the need for transparency and engagement from tech companies. Key points include:
* **Research Findings**:
– AlgorithmWatch explored the risks presented by LLMs and social media platforms, highlighting tech companies’ inadequate responses in mitigating these risks.
– Engagement with researchers reveals an urgent need for a better understanding of systemic risks under the DSA.
* **Concerns with Tech Companies**:
– The increasing integration of AI-generated content, such as LLM-produced summaries, into platforms may introduce risks that need to be addressed.
– Large tech firms are reluctant to fully comply with DSA regulations, often treating them as a “bare minimum” rather than as a comprehensive approach to accountability.
* **Importance of Transparency**:
– The DSA aims to create an accountable online space, allowing for the identification and minimization of societal risks. Clear guidelines from the European Commission could enhance compliance but may also lead companies to aim for only minimal adherence.
* **Research Participation**:
– The European Commission’s initiative to include diverse researchers and gather evidence is commendable, but it remains unclear how this information feeds into decision-making and enforcement.
– The disparity in comfort levels between academic researchers and civil society organizations (CSOs) in engaging with the Commission needs to be addressed to ensure all relevant risks are identified and evaluated.
* **Engagement Mechanisms**:
– Proposed methods for enhancing participation in DSA-related research include dialogic formats, vetting of participants for sensitive information, and open calls for expert opinions.
– There’s a recognition that while transparency is essential, excessive transparency could also pose risks.
* **Final Aim**:
– Highlighting the DSA’s potential, the text emphasizes the need for a more structured procedure for incorporating research into enforcement, in order to promote a safer online environment for EU citizens.
Overall, this text is crucial for security and compliance professionals, particularly those involved in AI governance, as it elucidates both the challenges and opportunities presented by emerging digital frameworks and technologies.