Simon Willison’s Weblog: Quoting Daniel Stenberg

Source URL: https://simonwillison.net/2025/May/6/daniel-stenberg/#atom-everything
Source: Simon Willison’s Weblog
Title: Quoting Daniel Stenberg

Feedly Summary: That’s it. I’ve had it. I’m putting my foot down on this craziness.
1. Every reporter submitting security reports on #Hackerone for #curl now needs to answer this question:
“Did you use an AI to find the problem or generate this submission?”
(and if they do select it, they can expect a stream of proof of actual intelligence follow-up questions)
2. We now ban every reporter INSTANTLY who submits reports we deem AI slop. A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time.
We still have not seen a single valid security report done with AI help.
— Daniel Stenberg
Tags: ai, llms, ai-ethics, daniel-stenberg, slop, security, curl, generative-ai

AI Summary and Description: Yes

Summary: The text discusses a strict new policy for security reports submitted to the curl project on HackerOne. It highlights the burden created by AI-generated submissions, which has led to instant bans for reporters whose submissions are judged to be "AI slop." This has significant implications for security reporting workflows and the integrity of AI-assisted vulnerability research.

Detailed Description: The commentary by Daniel Stenberg reveals a growing frustration within the security community, particularly regarding the reliability of AI-generated reports in vulnerability submissions. Key points include:

– **New Submission Requirement**: Reporters must explicitly disclose whether they used AI to identify issues or create their submissions.
– **Quality Control Measures**: Reporters whose submissions are judged to be “AI slop” are banned immediately, a zero-tolerance policy toward low-quality, AI-generated reports.
– **Operational Impact**: The mention of being “effectively DDoSed” implies that the volume of subpar reports has overwhelmed the capacity to manage legitimate concerns, highlighting challenges in maintaining quality oversight in security reporting.
– **Call for Evidence**: Reporters who disclose AI use face follow-up questions intended as “proof of actual intelligence,” signaling that AI tools are expected to enhance rather than degrade the quality of security assessments. Notably, Stenberg states the project has yet to see a single valid security report produced with AI help.

This commentary underscores the pressing need for security teams to balance the adoption of AI technologies with stringent quality checks, particularly in vulnerability reporting. It also raises questions about the evolving role of AI in cybersecurity and the ethics of applying such technologies to critical tasks, both key topics in AI ethics and security evaluation.