Tag: vllm

  • Simon Willison’s Weblog: Defeating Nondeterminism in LLM Inference

    Source URL: https://simonwillison.net/2025/Sep/11/defeating-nondeterminism/#atom-everything
    Feedly Summary: A very common question I see about LLMs concerns why they can’t be made to deliver the same response to the same prompt by setting a fixed random number seed. Like many others I had…
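
    The underlying point (developed in the Thinking Machines article the post links) is that floating-point addition is not associative, so any change in the order a GPU kernel reduces values, for example because a different batch size selects a different kernel, can flip low-order bits in the logits and eventually the sampled token, even at temperature 0 with a fixed seed. A minimal, stdlib-only sketch of that arithmetic fact:

    ```python
    # Floating-point addition is not associative: the same numbers summed
    # in a different order give a different result. GPU kernels whose
    # reduction order depends on batch size hit exactly this.
    vals = [1e16, -1e16] + [0.1] * 10

    forward = 0.0
    for v in vals:
        forward += v          # 1e16 and -1e16 cancel first, then the 0.1s accumulate

    backward = 0.0
    for v in reversed(vals):
        backward += v         # the 0.1s are absorbed by -1e16 before it cancels

    print(forward)   # 0.9999999999999999
    print(backward)  # 0.0
    ```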

  • Cloud Blog: Fast and efficient AI inference with new NVIDIA Dynamo recipe on AI Hypercomputer

    Source URL: https://cloud.google.com/blog/products/compute/ai-inference-recipe-using-nvidia-dynamo-with-ai-hypercomputer/
    Feedly Summary: As generative AI becomes more widespread, it’s important for developers and ML engineers to be able to easily configure infrastructure that supports efficient AI inference, i.e., using a trained AI model to make…
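
    A headline technique in Dynamo is disaggregated serving: compute-bound prefill (prompt processing) and memory-bound decode (token-by-token generation) run on separate worker pools that batch and scale independently. A toy sketch of that request flow; all names here are hypothetical, not Dynamo’s API:

    ```python
    # Illustrative sketch of disaggregated serving: prefill and decode as
    # separate workers joined by a KV-cache handoff. Hypothetical names only.
    from dataclasses import dataclass

    @dataclass
    class Request:
        prompt: str
        max_new_tokens: int

    class PrefillWorker:
        """Runs one large forward pass over the prompt, hands off the KV cache."""
        def run(self, req: Request) -> dict:
            # A real worker would ship the KV cache to a decode worker here.
            return {"kv_cache": f"kv({req.prompt[:16]}...)", "req": req}

    class DecodeWorker:
        """Generates tokens one step at a time from a transferred KV cache."""
        def run(self, handoff: dict) -> str:
            req = handoff["req"]
            return " ".join(f"tok{i}" for i in range(req.max_new_tokens))

    def serve(req: Request, prefill: PrefillWorker, decode: DecodeWorker) -> str:
        return decode.run(prefill.run(req))

    print(serve(Request("Explain KV caching", 5), PrefillWorker(), DecodeWorker()))
    ```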

  • Cloud Blog: How Baseten achieves 225% better cost-performance for AI inference (and you can too)

    Source URL: https://cloud.google.com/blog/products/ai-machine-learning/how-baseten-achieves-better-cost-performance-for-ai-inference/
    Feedly Summary: Baseten is one of a growing number of AI infrastructure providers, helping other startups run their models and experiments at speed and scale. Given the importance of those two factors to its customers,…

  • The Cloudflare Blog: How we built the most efficient inference engine for Cloudflare’s network

    Source URL: https://blog.cloudflare.com/cloudflares-most-efficient-ai-inference-engine/
    Feedly Summary: Infire is an LLM inference engine that employs a range of techniques to maximize resource utilization, allowing us to serve AI models more efficiently with better performance for Cloudflare workloads.
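
    The excerpt doesn’t enumerate the techniques, but continuous (in-flight) batching is one of the standard ways engines in this space keep GPUs busy: sequences join and leave the running batch at every decode step instead of the whole batch draining before new work starts. A generic simulation of the scheduling idea, not Infire’s actual implementation:

    ```python
    # Generic sketch of continuous batching: free slots are refilled from
    # the queue each step, so utilization stays high. Illustrative only.
    import random
    from collections import deque

    queue = deque({"id": i, "remaining": random.randint(1, 4)} for i in range(6))
    batch, max_batch, step = [], 3, 0

    while queue or batch:
        # Admit queued requests into any free batch slots.
        while queue and len(batch) < max_batch:
            batch.append(queue.popleft())
        # One decode step: every active sequence emits one token.
        for seq in batch:
            seq["remaining"] -= 1
        done = [s["id"] for s in batch if s["remaining"] == 0]
        batch = [s for s in batch if s["remaining"] > 0]
        step += 1
        print(f"step {step}: finished {done or '-'}, active {[s['id'] for s in batch]}")
    ```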

  • Simon Willison’s Weblog: Open weight LLMs exhibit inconsistent performance across providers

    Source URL: https://simonwillison.net/2025/Aug/15/inconsistent-performance/
    Feedly Summary: Artificial Analysis published a new benchmark the other day, this time focusing on how an individual model – OpenAI’s gpt-oss-120b – performs across different hosted providers. The results showed some surprising differences. Here’s the one with the…
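
    A hands-on way to see such differences is to send the same prompt to several hosts of the same open-weight model; most expose OpenAI-compatible endpoints, so only the base URL and key change. A sketch with placeholder provider URLs (the exact model id also varies by host):

    ```python
    # Hedged sketch: query one open-weight model across hosted providers via
    # their OpenAI-compatible endpoints. URLs below are placeholders.
    import os
    from openai import OpenAI

    PROVIDERS = {
        "provider_a": "https://api.provider-a.example/v1",
        "provider_b": "https://api.provider-b.example/v1",
    }

    prompt = "What is 17 * 23? Answer with just the number."

    for name, base_url in PROVIDERS.items():
        client = OpenAI(base_url=base_url, api_key=os.environ[f"{name.upper()}_KEY"])
        resp = client.chat.completions.create(
            model="openai/gpt-oss-120b",  # id naming differs per provider
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        print(name, "->", resp.choices[0].message.content)
    ```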

  • Cloud Blog: Start and scale your apps faster with improved container image streaming in GKE

    Source URL: https://cloud.google.com/blog/products/containers-kubernetes/improving-gke-container-image-streaming-for-faster-app-startup/
    Feedly Summary: In today’s fast-paced cloud-native world, the speed at which your applications can start and scale is paramount. Faster pod startup times mean quicker responses to user demand, more efficient resource utilization, and a…
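
    For context, image streaming lets pods start while image data is still fetched lazily in the background. A minimal sketch of turning it on at cluster creation, assuming the documented --enable-image-streaming gcloud flag and wrapped in Python for consistency with the other sketches here; the cluster name and zone are placeholders:

    ```python
    # Hedged sketch: create a GKE cluster with image streaming enabled.
    # Requires the gcloud CLI; name and zone below are placeholders.
    import subprocess

    subprocess.run(
        [
            "gcloud", "container", "clusters", "create", "demo-cluster",
            "--zone", "us-central1-a",
            "--image-type", "COS_CONTAINERD",  # image streaming needs containerd node images
            "--enable-image-streaming",        # documented flag enabling the feature
        ],
        check=True,
    )
    ```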