Source URL: https://simonwillison.net/2025/Apr/20/llm-fragments-github/#atom-everything
Source: Simon Willison’s Weblog
Title: llm-fragments-github 0.2
Feedly Summary: llm-fragments-github 0.2
I upgraded my llm-fragments-github plugin to add a new fragment type called issue. It lets you pull the entire content of a GitHub issue thread into your prompt as a concatenated Markdown file.
(If you haven’t seen fragments before I introduced them in Long context support in LLM 0.24 using fragments and template plugins.)
I used it just now to have Gemini 2.5 Pro provide feedback and attempt an implementation of a complex issue against my LLM project:
llm install llm-fragments-github
llm -f github:simonw/llm \
-f issue:simonw/llm/938 \
-m gemini-2.5-pro-exp-03-25 \
--system 'muse on this issue, then propose a whole bunch of code to help implement it'
Here I’m loading the FULL content of the simonw/llm repo using that -f github:simonw/llm fragment (documented here), then loading all of the comments from issue 938 where I discuss quite a complex potential refactoring. I ask Gemini 2.5 Pro to “muse on this issue” and come up with some code.
This worked shockingly well. Here’s the full response, which highlighted a few things I hadn’t considered yet (such as the need to migrate old database records to the new tree hierarchy) and then spat out a whole bunch of code which looks like a solid start to the actual implementation work I need to do.
I ran this against Google’s free Gemini 2.5 Preview, but if I’d used the paid model it would have cost me 202,680 input tokens and 10,460 output tokens for a total of 66.36 cents.
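Those totals are consistent with Gemini 2.5 Pro's list pricing at the time for prompts over 200,000 tokens (assumed here: $2.50 per million input tokens, $15 per million output tokens). A quick check:

```python
input_tokens = 202_680
output_tokens = 10_460

# Gemini 2.5 Pro list prices (at the time) for prompts over 200k tokens:
input_rate = 2.50 / 1_000_000    # dollars per input token
output_rate = 15.00 / 1_000_000  # dollars per output token

cost = input_tokens * input_rate + output_tokens * output_rate
print(f"{cost * 100:.2f} cents")  # → 66.36 cents
```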
As a fun extra, the new issue: feature itself was written almost entirely by OpenAI o3, again using fragments. I ran this:
llm -m openai/o3 \
-f https://raw.githubusercontent.com/simonw/llm-hacker-news/refs/heads/main/llm_hacker_news.py \
-f https://raw.githubusercontent.com/simonw/tools/refs/heads/main/github-issue-to-markdown.html \
-s 'Write a new fragments plugin in Python that registers issue:org/repo/123 which fetches that issue
number from the specified github repo and uses the same markdown logic as the HTML page to turn that into a fragment’
Here I’m using the ability to pass a URL to -f and giving it the full source of my llm_hacker_news.py plugin (which shows how a fragment can load data from an API) plus the HTML source of my github-issue-to-markdown tool (which I wrote a few months ago with Claude). I effectively asked o3 to take that HTML/JavaScript tool and port it to Python to work with my fragments plugin mechanism.
o3 provided almost the exact implementation I needed, and even implemented support for the GITHUB_TOKEN environment variable without me thinking to ask for it. Total cost: 19.928 cents.
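The core of such a plugin is two GitHub API calls plus some string concatenation. A minimal sketch of that approach (not the actual plugin source; the function names are my own), including the GITHUB_TOKEN support:

```python
import json
import os
import urllib.request


def github_get(url: str):
    """Fetch JSON from the GitHub API, authenticating with GITHUB_TOKEN if set."""
    headers = {"Accept": "application/vnd.github+json"}
    token = os.environ.get("GITHUB_TOKEN")
    if token:
        headers["Authorization"] = f"Bearer {token}"
    request = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(request) as response:
        return json.load(response)


def issue_to_markdown(issue: dict, comments: list) -> str:
    """Concatenate an issue and its comment thread into one Markdown document."""
    parts = [
        f"# {issue['title']}\n\n"
        f"*Posted by @{issue['user']['login']}*\n\n"
        f"{issue['body'] or ''}"
    ]
    for comment in comments:
        parts.append(
            f"### Comment by @{comment['user']['login']}\n\n{comment['body'] or ''}"
        )
    return "\n\n---\n\n".join(parts)


def fetch_issue_markdown(org: str, repo: str, number: int) -> str:
    base = f"https://api.github.com/repos/{org}/{repo}/issues/{number}"
    return issue_to_markdown(github_get(base), github_get(base + "/comments"))
```

The rendering logic is kept separate from the network calls, so the Markdown output can be tested without touching the GitHub API.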
On a final note of curiosity I tried running this prompt against Gemma 3 27B QAT running on my Mac via MLX and llm-mlx:
llm install llm-mlx
llm mlx download-model mlx-community/gemma-3-27b-it-qat-4bit
llm -m mlx-community/gemma-3-27b-it-qat-4bit \
-f https://raw.githubusercontent.com/simonw/llm-hacker-news/refs/heads/main/llm_hacker_news.py \
-f https://raw.githubusercontent.com/simonw/tools/refs/heads/main/github-issue-to-markdown.html \
-s 'Write a new fragments plugin in Python that registers issue:org/repo/123 which fetches that issue
number from the specified github repo and uses the same markdown logic as the HTML page to turn that into a fragment’
That worked pretty well too. It turns out a 16GB local model file is powerful enough to write me an LLM plugin now!
Tags: gemini, llm, ai-assisted-programming, generative-ai, o3, ai, llms, plugins, github, mlx, gemma, long-context
AI Summary and Description: Yes
Summary: The text describes an upgrade to the llm-fragments-github plugin for the LLM tool, detailing a new issue fragment type that pulls entire GitHub issue threads into prompts for code feedback and generation via AI models such as Gemini 2.5 Pro and OpenAI’s o3. This showcases advances in AI-assisted programming, with implications for developers concerned with cloud and infrastructure security.
Detailed Description:
The content provides a detailed overview of enhancements to the llm-fragments-github plugin that enable interaction with GitHub issue threads, allowing users to pull comprehensive context into AI-driven code generation and feedback. It emphasizes the use of advanced AI models for software development, showcasing practical applications and implications for security and compliance professionals:
– **New Fragment Type**: The introduction of an issue fragment type allows developers to pull entire GitHub issue threads, which can facilitate better communication and understanding of the code and potential issues during development.
– **AI Integration**: The use of Gemini 2.5 Pro and OpenAI’s o3 for generating code solutions highlights the increasing reliance on AI models for software engineering tasks. Such generative AI capabilities can streamline the development process, ensuring that developers receive timely and contextually relevant code suggestions.
– **Cost Analysis**: The text includes a cost breakdown related to the usage of AI models based on token consumption, which is crucial for users planning budgetary allocations for AI services. Cost management plays a vital role in aligning AI development with organizational security and compliance budgets.
– **Flexibility and Reach**: By demonstrating functionalities through various AI models and showcasing the seamless integration with GitHub APIs, the text highlights the versatility and practical applications of LLMs in programming.
– **Local Model Considerations**: The exploration of running an AI model locally on personal hardware reflects a growing trend towards decentralized AI use, which raises important considerations around data privacy, security, and control over proprietary codebases.
– **Final Thoughts**: The adaptability of AI tools in real-world applications emphasizes their increasing role in development workflows, while also paving the way for organizations to implement and secure their AI tools effectively.
Overall, this text’s insights make it particularly relevant for security professionals as it showcases the ongoing integration of AI technologies in software development and the implications for access control, compliance, and operational governance in dynamic development environments.