Simon Willison’s Weblog: Quoting Django’s security policies

Source URL: https://simonwillison.net/2025/Jul/11/django-security-policies/#atom-everything
Source: Simon Willison’s Weblog
Title: Quoting Django’s security policies

Feedly Summary: Following the widespread availability of large language models (LLMs), the Django Security Team has received a growing number of security reports generated partially or entirely using such tools. Many of these contain inaccurate, misleading, or fictitious content. While AI tools can help draft or analyze reports, they must not replace human understanding and review.
If you use AI tools to help prepare a report, you must:

- Disclose which AI tools were used and specify what they were used for (analysis, writing the description, writing the exploit, etc.).
- Verify that the issue describes a real, reproducible vulnerability that otherwise meets these reporting guidelines.
- Avoid fabricated code, placeholder text, or references to non-existent Django features.

Reports that appear to be unverified AI output will be closed without response. Repeated low-quality submissions may result in a ban from future reporting.
— Django’s security policies, AI-Assisted Reports
Tags: ai-ethics, open-source, security, generative-ai, ai, django, llms

AI Summary and Description: Yes

Summary: The text underscores the challenges posed by using large language models (LLMs) to generate security reports, highlighting the risk of inaccurate or fictitious content. It emphasizes the necessity of human oversight in validating AI-generated content and sets forth disclosure and verification guidelines that are crucial for security professionals who use AI tools.

Detailed Description:
The passage addresses the implications of using large language models (LLMs) to generate security reports submitted to the Django Security Team. It raises significant concerns about the reliability of AI-generated reports and outlines essential practices for ensuring quality and integrity in submissions. Key points include:

- **Increase in AI-Generated Reports**: The Django Security Team is experiencing a rise in security reports generated with the assistance of LLMs.
- **Issues with AI-Generated Content**: Many reports contain inaccuracies, misleading information, or entirely fabricated elements, raising concerns about their credibility.
- **Importance of Human Oversight**: While AI can facilitate drafting and analysis, human expertise remains critical for verifying and validating information before submission.
- **Guidelines for Submissions**:
  - Disclose the AI tools used, clarifying their specific applications (e.g., analysis, writing descriptions, detailing exploits).
  - Ensure that reported issues involve real, reproducible vulnerabilities that adhere to the reporting guidelines.
  - Avoid submitting reports containing fabricated code, placeholder text, or references to non-existent Django features.
- **Consequences of Non-compliance**: Reports that appear to be unverified AI output will be closed without response, and repeated low-quality submissions may result in a ban from future reporting.

In summary, the text serves as a cautionary guideline for security professionals who leverage AI tools, highlighting the importance of maintaining rigorous standards in vulnerability reporting to uphold the integrity of security practices.