Tag: .NET
-
Simon Willison’s Weblog: ChatGPT agent’s user-agent
Source URL: https://simonwillison.net/2025/Aug/4/chatgpt-agents-user-agent/#atom-everything Source: Simon Willison’s Weblog Title: ChatGPT agent’s user-agent Feedly Summary: I was exploring how ChatGPT agent works today. I learned some interesting things about how it exposes its identity through HTTP headers, then made a huge blunder in thinking it was leaking its URLs to Bingbot and Yandex… but it turned out…
-
Simon Willison’s Weblog: ChatGPT agent triggers crawls from Bingbot and Yandex
Source URL: https://simonwillison.net/2025/Aug/4/chatgpt-agents-agent/#atom-everything Source: Simon Willison’s Weblog Title: ChatGPT agent triggers crawls from Bingbot and Yandex Feedly Summary: ChatGPT agent is the recently released (and confusingly named) ChatGPT feature that provides browser automation combined with terminal access as a feature of ChatGPT – replacing their previous Operator research preview which is scheduled for deprecation on…
-
Simon Willison’s Weblog: Usage charts for my LLM tool against OpenRouter
Source URL: https://simonwillison.net/2025/Aug/4/llm-openrouter-usage/#atom-everything Source: Simon Willison’s Weblog Title: Usage charts for my LLM tool against OpenRouter Feedly Summary: Usage charts for my LLM tool against OpenRouter OpenRouter proxies requests to a large number of different LLMs and provides high level statistics of which models are the most popular among their users. Tools that call OpenRouter…
-
Simon Willison’s Weblog: Qwen-Image: Crafting with Native Text Rendering
Source URL: https://simonwillison.net/2025/Aug/4/qwen-image/#atom-everything Source: Simon Willison’s Weblog Title: Qwen-Image: Crafting with Native Text Rendering Feedly Summary: Qwen-Image: Crafting with Native Text Rendering Not content with releasing six excellent open weights LLMs in July, Qwen are kicking off August with their first ever image generation model. Qwen-Image is a 20 billion parameter MMDiT (Multimodal Diffusion Transformer,…
-
Simon Willison’s Weblog: Quoting @himbodhisattva
Source URL: https://simonwillison.net/2025/Aug/4/himbodhisattva/#atom-everything Source: Simon Willison’s Weblog Title: Quoting @himbodhisattva Feedly Summary: for services that wrap GPT-3, is it possible to do the equivalent of sql injection? like, a prompt-injection attack? make it think it’s completed the task and then get access to the generation, and ask it to repeat the original instruction? — @himbodhisattva,…
-
Simon Willison’s Weblog: Quoting Nick Turley
Source URL: https://simonwillison.net/2025/Aug/4/nick-turley/ Source: Simon Willison’s Weblog Title: Quoting Nick Turley Feedly Summary: This week, ChatGPT is on track to reach 700M weekly active users — up from 500M at the end of March and 4× since last year. — Nick Turley, Head of ChatGPT, OpenAI Tags: openai, chatgpt, ai AI Summary and Description: Yes…
-
Simon Willison’s Weblog: The ChatGPT sharing dialog demonstrates how difficult it is to design privacy preferences
Source URL: https://simonwillison.net/2025/Aug/3/privacy-design/ Source: Simon Willison’s Weblog Title: The ChatGPT sharing dialog demonstrates how difficult it is to design privacy preferences Feedly Summary: ChatGPT just removed their “make this chat discoverable" sharing feature, after it turned out a material volume of users had inadvertantly made their private chats available via Google search. Dane Stuckey, CISO…
-
Simon Willison’s Weblog: XBai o4
Source URL: https://simonwillison.net/2025/Aug/3/xbai-o4/#atom-everything Source: Simon Willison’s Weblog Title: XBai o4 Feedly Summary: XBai o4 Yet another open source (Apache 2.0) LLM from a Chinese AI lab. This model card claims: XBai o4 excels in complex reasoning capabilities and has now completely surpassed OpenAI-o3-mini in Medium mode. This a 32.8 billion parameter model released by MetaStone…
-
Simon Willison’s Weblog: Faster inference
Source URL: https://simonwillison.net/2025/Aug/1/faster-inference/ Source: Simon Willison’s Weblog Title: Faster inference Feedly Summary: Two interesting examples of inference speed as a flagship feature of LLM services today. First, Cerebras announced two new monthly plans for their extremely high speed hosted model service: Cerebras Code Pro ($50/month, 1,000 messages a day) and Cerebras Code Max ($200/month, 5,000/day).…
-
Simon Willison’s Weblog: Deep Think in the Gemini app
Source URL: https://simonwillison.net/2025/Aug/1/deep-think-in-the-gemini-app/ Source: Simon Willison’s Weblog Title: Deep Think in the Gemini app Feedly Summary: Deep Think in the Gemini app Google released Gemini 2.5 Deep Think this morning, exclusively to their Ultra ($250/month) subscribers: It is a variation of the model that recently achieved the gold-medal standard at this year’s International Mathematical Olympiad…