Source URL: https://simonwillison.net/2025/Apr/23/cheating/#atom-everything
Source: Simon Willison’s Weblog
Title: A trick to feel less like cheating when you use LLMs
Feedly Summary: An underestimated challenge in making productive use of LLMs is that it can feel like cheating.
One trick I’ve found that helps is to make sure that I am putting in way more text than the LLM is spitting out.
This goes for code: I’ll pipe in a previous project for it to modify, or ask it to combine two, or paste in my research notes.
It also goes for writing. I hardly ever publish material that was written by an LLM, but I feel least icky about content where I had an extensive voice conversation with the model and then asked it to turn that into notes.
I have a hunch that overcoming the feeling of guilt associated with using LLMs is one of the most important skills required to make effective use of them!
My gold standard for LLM usage remains this: would I be proud to stake my own credibility on the quality of the end result?
Related, this excellent advice from Laurie Voss:
Is what you’re doing taking a large amount of text and asking the LLM to convert it into a smaller amount of text? Then it’s probably going to be great at it. If you’re asking it to convert into a roughly equal amount of text it will be so-so. If you’re asking it to create more text than you gave it, forget about it.
Tags: ai-ethics, llms, ai, generative-ai
AI Summary and Description: Yes
Summary: The text discusses the ethical considerations and practical strategies for effectively using large language models (LLMs) in various creative and coding contexts. It emphasizes the importance of personal integrity and satisfaction when utilizing LLMs, highlighting key insights on how to make productive use of these AI tools.
Detailed Description: The content primarily revolves around the challenges and strategies associated with leveraging large language models (LLMs) effectively. It underscores the need for users to feel comfortable and ethical when using these AI tools. The discourse includes several important points:
– **Perceived Ethical Dilemma**: The author acknowledges a common sense of guilt, or feeling “icky,” when relying on LLMs for output, particularly in creative processes.
– **Effective Usage Strategy**: One method suggested for overcoming this feeling is to input a significantly larger amount of text than the model generates. This approach can help maintain a sense of authenticity and personal contribution.
– **Application in Coding and Writing**:
  – For coding, users are encouraged to provide previous projects for modification or to combine multiple ideas.
  – In writing, conversations with the model can aid in creating original content that feels less like simply rehashing AI output.
– **Gold Standard for Usage**: The author sets a personal benchmark for evaluating LLM-generated content: the determination of whether one would feel proud to claim the quality as their own.
– **Pragmatic Advice from Experts**: The text references advice from Laurie Voss about how an LLM’s effectiveness depends on the ratio of input text to output text. Key insights include:
  – Converting a large amount of text into a smaller amount generally yields strong results.
  – Asking the model to produce roughly as much text as the input yields mediocre results, and asking it to produce more text than the input rarely works well.
In summary, the text offers meaningful reflections on the effective use of LLMs, encouraging users to blend their own creativity with AI deliberately and ethically, and sharing practical strategies drawn from existing expertise. It is directly relevant to professionals concerned with AI ethics, information security, and generative AI security, as it touches on accountability, output quality, and effective engagement with AI tools.