Source URL: https://simonwillison.net/2025/Jul/23/icml-2025/#atom-everything
Source: Simon Willison’s Weblog
Title: Quoting ICML 2025
Feedly Summary: Submitting a paper with a “hidden” prompt is scientific misconduct if that prompt is intended to obtain a favorable review from an LLM. The inclusion of such a prompt is an attempt to subvert the peer-review process. Although ICML 2025 reviewers are forbidden from using LLMs to produce their reviews of paper submissions, this fact does not excuse the attempted subversion. (For an analogous example, consider that an author who tries to bribe a reviewer for a favorable review is engaging in misconduct even though the reviewer is not supposed to accept bribes.) Note that this use of hidden prompts is distinct from those intended to detect if LLMs are being used by reviewers; the latter is an acceptable use of hidden prompts.
— ICML 2025, Statement about subversive hidden LLM prompts
Tags: ai-ethics, prompt-injection, generative-ai, ai, llms
AI Summary and Description: Yes
Summary: The text sets out ICML 2025’s position on hidden prompts intended to influence peer reviews via LLMs (Large Language Models): embedding such a prompt is scientific misconduct. Even though reviewers are themselves prohibited from using LLMs to produce reviews, an attempt to manipulate the process through hidden prompts is still unacceptable.
Detailed Description: The provided text discusses a critical topic in the intersection of AI ethics and academic integrity, particularly related to AI-generated content and peer-review processes. The main points conveyed include:
– **Ethical Misconduct**: Submitting a paper with a hidden prompt intended to sway reviewers undermines the integrity of the peer-review process; the statement compares it to attempting to bribe a reviewer, which is misconduct even though the reviewer is not supposed to accept the bribe.
– **Focus on LLMs**: The text specifically addresses the use of LLMs in this context, indicating a growing awareness of the ethical challenges posed by AI technologies in academia.
– **Prohibition of LLM Use in Reviews**: ICML 2025 has explicitly forbidden reviewers from using LLMs to assist in their evaluations, reinforcing the commitment to maintaining rigorous academic standards.
– **Distinction of Hidden Prompts**: The statement distinguishes two uses: hidden prompts intended to obtain a favorable review are misconduct, whereas hidden prompts designed to detect whether reviewers are using LLMs are an acceptable use.
This summary is particularly relevant for professionals in the fields of AI ethics, compliance, and academic integrity, as it underscores the significance of maintaining ethical standards while navigating the complexities introduced by advanced AI systems.
– **Key Takeaways**:
  – Importance of ethical practices in AI-related academic submissions.
  – Ongoing conversation about responsible use of LLMs in academia.
  – The need for transparency and integrity in the peer-review process.
Understanding these dynamics matters for complying with emerging conference policies and regulations, and for maintaining trust in AI technologies and methodologies within academic settings.