Tag: prompts

  • Simon Willison’s Weblog: Create and edit images with Gemini 2.0 in preview

    Source URL: https://simonwillison.net/2025/May/7/gemini-images-preview/#atom-everything
    Feedly Summary: Gemini 2.0 Flash has had image generation capabilities for a while now, and they’re now available via the paid Gemini API – at 3.9 cents per generated…
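
    A minimal sketch of what calling the preview image model through the google-genai Python SDK might look like; the model ID, prompt, and response handling here are assumptions based on the announcement, not code from the post.

        from google import genai
        from google.genai import types

        # Assumes the google-genai SDK and a paid-tier API key; the model ID below
        # is the preview image model named in the announcement and may change.
        client = genai.Client(api_key="YOUR_GEMINI_API_KEY")

        response = client.models.generate_content(
            model="gemini-2.0-flash-preview-image-generation",
            contents="Generate a watercolor painting of a lighthouse at dusk",
            config=types.GenerateContentConfig(
                response_modalities=["TEXT", "IMAGE"],  # image output must be requested explicitly
            ),
        )

        # Each returned part is either text or inline image bytes.
        for part in response.candidates[0].content.parts:
            if part.inline_data is not None:
                with open("lighthouse.png", "wb") as f:
                    f.write(part.inline_data.data)
            elif part.text:
                print(part.text)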

  • Cloud Blog: Guide to build MCP servers using vibe coding with Gemini 2.5 Pro

    Source URL: https://cloud.google.com/blog/products/ai-machine-learning/build-mcp-servers-using-vibe-coding-with-gemini-2-5-pro/
    Feedly Summary: Have you ever had something on the tip of your tongue, but you weren’t exactly sure how to describe what’s in your mind? For developers, this is where “vibe coding” comes in. Vibe…
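
    As a rough sketch of the kind of end product the guide is about, here is a minimal MCP server built with the official Python SDK’s FastMCP helper; the word_count tool is a made-up example, not taken from the guide.

        from mcp.server.fastmcp import FastMCP

        # Minimal MCP server using the official Python SDK's FastMCP helper.
        mcp = FastMCP("demo-server")

        @mcp.tool()
        def word_count(text: str) -> int:
            """Count the whitespace-separated words in a piece of text."""
            return len(text.split())

        if __name__ == "__main__":
            # Serves the tool over stdio so an MCP client (e.g. an IDE agent) can call it.
            mcp.run()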

  • Simon Willison’s Weblog: Saying "hi" to Microsoft’s Phi-4-reasoning

    Source URL: https://simonwillison.net/2025/May/6/phi-4-reasoning/#atom-everything
    Feedly Summary: Microsoft released a new sub-family of models a few days ago: Phi-4 reasoning. They introduced them in this blog post celebrating a year since the release of Phi-3: Today, we are excited to introduce Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning – marking…
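
    A hedged sketch of trying the model locally with Hugging Face transformers; the repo ID microsoft/Phi-4-reasoning is assumed from the announced model names, and the post itself may use a different runtime.

        from transformers import AutoModelForCausalLM, AutoTokenizer

        # Assumes the weights are published under this Hugging Face repo ID;
        # Phi-4-reasoning is a 14B-class model, so expect substantial RAM/VRAM use.
        model_id = "microsoft/Phi-4-reasoning"
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

        messages = [{"role": "user", "content": "hi"}]
        inputs = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        outputs = model.generate(inputs, max_new_tokens=512)
        print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))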

  • Slashdot: UnitedHealth Now Has 1,000 AI Applications In Production

    Source URL: https://science.slashdot.org/story/25/05/05/2050232/unitedhealth-now-has-1000-ai-applications-in-production?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Feedly Summary: UnitedHealth Group is leveraging 1,000 AI applications across various segments, including insurance and healthcare. A significant portion of these initiatives involves generative AI, highlighting both its potential and the inherent challenges, particularly in…

  • Simon Willison’s Weblog: Feed a video to a vision LLM as a sequence of JPEG frames on the CLI (also LLM 0.25)

    Source URL: https://simonwillison.net/2025/May/5/llm-video-frames/#atom-everything
    Feedly Summary: The new llm-video-frames plugin can turn a video file into a sequence of JPEG frames and feed them directly into a long context vision LLM such…
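
    The post covers the llm-video-frames plugin for the CLI; the sketch below shows the same idea in Python, under the assumption that ffmpeg is installed, by extracting one JPEG per second and passing the frames as attachments through LLM’s Python API. It is not the plugin’s own code.

        import subprocess
        import tempfile
        from pathlib import Path

        import llm  # the Python API behind Simon Willison's LLM CLI

        # Extract one JPEG per second from the video with ffmpeg (assumed installed),
        # then pass the frames as image attachments to a vision-capable model.
        video = "video.mp4"
        frames_dir = Path(tempfile.mkdtemp())
        subprocess.run(
            ["ffmpeg", "-i", video, "-vf", "fps=1", str(frames_dir / "frame_%04d.jpg")],
            check=True,
        )

        model = llm.get_model("gpt-4o-mini")  # any long-context vision model works here
        response = model.prompt(
            "Describe what happens in this video, scene by scene.",
            attachments=[llm.Attachment(path=str(p)) for p in sorted(frames_dir.glob("*.jpg"))],
        )
        print(response.text())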

  • Cloud Blog: Announcing new Vertex AI Prediction Dedicated Endpoints

    Source URL: https://cloud.google.com/blog/products/ai-machine-learning/reliable-ai-with-vertex-ai-prediction-dedicated-endpoints/
    Feedly Summary: For AI developers building cutting-edge applications with large model sizes, a reliable foundation is non-negotiable. You need your AI to perform consistently, delivering results without hiccups, even under pressure. This means having dedicated resources that won’t get bogged down…
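
    A speculative sketch of deploying a model to a dedicated endpoint with the Vertex AI Python SDK; the dedicated_endpoint_enabled flag, project, region, model ID, and machine type are all assumptions, not details from the announcement.

        from google.cloud import aiplatform

        # Placeholder project/region/model values; check the current SDK docs for
        # the exact dedicated-endpoint option, which is assumed here.
        aiplatform.init(project="my-project", location="us-central1")

        endpoint = aiplatform.Endpoint.create(
            display_name="my-dedicated-endpoint",
            dedicated_endpoint_enabled=True,  # assumption: opt into a dedicated endpoint
        )

        model = aiplatform.Model("projects/my-project/locations/us-central1/models/1234567890")
        model.deploy(
            endpoint=endpoint,
            machine_type="g2-standard-8",
            min_replica_count=1,
            max_replica_count=2,
        )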

  • Simon Willison’s Weblog: Dummy’s Guide to Modern LLM Sampling

    Source URL: https://simonwillison.net/2025/May/4/llm-sampling/#atom-everything
    Feedly Summary: This is an extremely useful, detailed set of explanations by @AlpinDale covering the various different sampling strategies used by modern LLMs. LLMs return a set of next-token probabilities for every token in their…
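
    The guide’s starting point is that an LLM returns a probability for every token in its vocabulary and a sampler decides which one to emit. Below is a small NumPy sketch of two of the strategies it covers, temperature scaling and top-p (nucleus) sampling; this is illustrative code, not taken from the guide.

        import numpy as np

        def sample_next_token(logits: np.ndarray, temperature: float = 0.8, top_p: float = 0.9) -> int:
            """Pick a next-token id from raw logits using temperature and top-p sampling."""
            # Temperature rescales the logits: <1 sharpens the distribution, >1 flattens it.
            scaled = logits / temperature
            probs = np.exp(scaled - scaled.max())
            probs /= probs.sum()

            # Top-p keeps the smallest set of tokens whose cumulative probability reaches top_p.
            order = np.argsort(probs)[::-1]
            cumulative = np.cumsum(probs[order])
            cutoff = np.searchsorted(cumulative, top_p) + 1
            keep = order[:cutoff]

            kept_probs = probs[keep] / probs[keep].sum()
            return int(np.random.choice(keep, p=kept_probs))

        # Toy 5-token vocabulary: the model's raw scores for the next token.
        logits = np.array([2.0, 1.5, 0.3, -1.0, -2.5])
        print(sample_next_token(logits))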

  • Simon Willison’s Weblog: Qwen3-8B

    Source URL: https://simonwillison.net/2025/May/2/qwen3-8b/#atom-everything
    Feedly Summary: Having tried a few of the Qwen 3 models now, my favorite is a bit of a surprise to me: I’m really enjoying Qwen3-8B. I’ve been running prompts through the MLX 4bit quantized version, mlx-community/Qwen3-8B-4bit. I’m using llm-mlx like this: llm install llm-mlx llm…
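
    The post drives the model through the llm-mlx plugin on the command line; the sketch below uses the underlying mlx-lm library directly (Apple silicon only), which is an assumption about the equivalent Python-level call rather than the commands from the post.

        from mlx_lm import load, generate

        # Downloads the 4-bit quantized weights from Hugging Face on first use.
        model, tokenizer = load("mlx-community/Qwen3-8B-4bit")

        prompt = tokenizer.apply_chat_template(
            [{"role": "user", "content": "hi"}],
            add_generation_prompt=True,
            tokenize=False,
        )
        print(generate(model, tokenizer, prompt=prompt, max_tokens=512))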