Tag: system prompts
-
Simon Willison’s Weblog: How to stop AI’s “lethal trifecta”
Source URL: https://simonwillison.net/2025/Sep/26/how-to-stop-ais-lethal-trifecta/ Source: Simon Willison’s Weblog Title: How to stop AI’s “lethal trifecta” Feedly Summary: How to stop AI’s “lethal trifecta” This is the second mention of the lethal trifecta in the Economist in just the last week! Their earlier coverage was Why AI systems may never be secure on September 22nd – I…
-
Simon Willison’s Weblog: Improved Gemini 2.5 Flash and Flash-Lite
Source URL: https://simonwillison.net/2025/Sep/25/improved-gemini-25-flash-and-flash-lite/#atom-everything Source: Simon Willison’s Weblog Title: Improved Gemini 2.5 Flash and Flash-Lite Feedly Summary: Improved Gemini 2.5 Flash and Flash-Lite Two new preview models from Google – updates to their fast and inexpensive Flash and Flash Lite families: The latest version of Gemini 2.5 Flash-Lite was trained and built based on three key…
-
Simon Willison’s Weblog: CompileBench: Can AI Compile 22-year-old Code?
Source URL: https://simonwillison.net/2025/Sep/22/compilebench/ Source: Simon Willison’s Weblog Title: CompileBench: Can AI Compile 22-year-old Code? Feedly Summary: CompileBench: Can AI Compile 22-year-old Code? Interesting new LLM benchmark from Piotr Grabowski and Piotr Migdał: how well can different models handle compilation challenges such as cross-compiling gucr for ARM64 architecture? This is one of my favorite applications of…
-
Cloud Blog: vLLM Performance Tuning: The Ultimate Guide to xPU Inference Configuration
Source URL: https://cloud.google.com/blog/topics/developers-practitioners/vllm-performance-tuning-the-ultimate-guide-to-xpu-inference-configuration/ Source: Cloud Blog Title: vLLM Performance Tuning: The Ultimate Guide to xPU Inference Configuration Feedly Summary: Additional contributors include Hossein Sarshar, Ashish Narasimham, and Chenyang Li. Large Language Models (LLMs) are revolutionizing how we interact with technology, but serving these powerful models efficiently can be a challenge. vLLM has rapidly become…
-
Simon Willison’s Weblog: too many model context protocol servers and LLM allocations on the dance floor
Source URL: https://simonwillison.net/2025/Aug/22/too-many-mcps/#atom-everything Source: Simon Willison’s Weblog Title: too many model context protocol servers and LLM allocations on the dance floor Feedly Summary: too many model context protocol servers and LLM allocations on the dance floor Useful reminder from Geoffrey Huntley of the infrequently discussed significant token cost of using MCP. Geoffrey estimate estimates that…
-
Cloud Blog: Rightsizing LLM Serving on vLLM for GPUs and TPUs
Source URL: https://cloud.google.com/blog/topics/developers-practitioners/rightsizing-llm-serving-on-vllm-for-gpus-and-tpus/ Source: Cloud Blog Title: Rightsizing LLM Serving on vLLM for GPUs and TPUs Feedly Summary: Additional contributors include Hossein Sarshar and Ashish Narasimham. Large Language Models (LLMs) are revolutionizing how we interact with technology, but serving these powerful models efficiently can be a challenge. vLLM has rapidly become the primary choice for…
-
Simon Willison’s Weblog: GPT-5 has a hidden system prompt
Source URL: https://simonwillison.net/2025/Aug/15/gpt-5-has-a-hidden-system-prompt/#atom-everything Source: Simon Willison’s Weblog Title: GPT-5 has a hidden system prompt Feedly Summary: GPT-5 has a hidden system prompt It looks like GPT-5 when accessed via the OpenAI API may have its own hidden system prompt, independent from the system prompt you can specify in an API call. At the very least…
-
The Register: LLM chatbots trivial to weaponise for data theft, say boffins
Source URL: https://www.theregister.com/2025/08/15/llm_chatbots_trivial_to_weaponise/ Source: The Register Title: LLM chatbots trivial to weaponise for data theft, say boffins Feedly Summary: System prompt engineering turns benign AI assistants into ‘investigator’ and ‘detective’ roles that bypass privacy guardrails A team of boffins is warning that AI chatbots built on large language models (LLM) can be tuned into malicious…
-
Simon Willison’s Weblog: My Lethal Trifecta talk at the Bay Area AI Security Meetup
Source URL: https://simonwillison.net/2025/Aug/9/bay-area-ai/#atom-everything Source: Simon Willison’s Weblog Title: My Lethal Trifecta talk at the Bay Area AI Security Meetup Feedly Summary: I gave a talk on Wednesday at the Bay Area AI Security Meetup about prompt injection, the lethal trifecta and the challenges of securing systems that use MCP. It wasn’t recorded but I’ve created…
-
Simon Willison’s Weblog: OpenAI: Introducing study mode
Source URL: https://simonwillison.net/2025/Jul/29/openai-introducing-study-mode/#atom-everything Source: Simon Willison’s Weblog Title: OpenAI: Introducing study mode Feedly Summary: OpenAI: Introducing study mode New ChatGPT feature, which can be triggered by typing /study or by visiting chatgpt.com/studymode. OpenAI say: Under the hood, study mode is powered by custom system instructions we’ve written in collaboration with teachers, scientists, and pedagogy experts…