Tag: latency requirements
-
Cloud Blog: Fast and efficient AI inference with new NVIDIA Dynamo recipe on AI Hypercomputer
Source URL: https://cloud.google.com/blog/products/compute/ai-inference-recipe-using-nvidia-dynamo-with-ai-hypercomputer/
Feedly Summary: As generative AI becomes more widespread, it’s important for developers and ML engineers to be able to easily configure infrastructure that supports efficient AI inference, i.e., using a trained AI model to make…
-
Cloud Blog: vLLM Performance Tuning: The Ultimate Guide to xPU Inference Configuration
Source URL: https://cloud.google.com/blog/topics/developers-practitioners/vllm-performance-tuning-the-ultimate-guide-to-xpu-inference-configuration/
Feedly Summary: Additional contributors include Hossein Sarshar, Ashish Narasimham, and Chenyang Li. Large Language Models (LLMs) are revolutionizing how we interact with technology, but serving these powerful models efficiently can be a challenge. vLLM has rapidly become…
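The knobs a tuning guide like this covers map onto vLLM's standard serving parameters. A minimal sketch using vLLM's offline Python API; the model name and every value below are illustrative assumptions, not recommendations from the post:

```python
# Sketch of common vLLM tuning knobs (values are assumed examples).
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # assumed example model
    tensor_parallel_size=1,        # shard weights across this many accelerators
    gpu_memory_utilization=0.90,   # fraction of accelerator memory vLLM may claim
    max_model_len=8192,            # cap context length to leave room for KV cache
    max_num_seqs=256,              # upper bound on concurrently batched requests
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain continuous batching in one sentence."], params)
print(outputs[0].outputs[0].text)
```

The same parameters exist as flags on the `vllm serve` command line, so an offline experiment like this transfers directly to a server deployment.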
-
Cloud Blog: Rightsizing LLM Serving on vLLM for GPUs and TPUs
Source URL: https://cloud.google.com/blog/topics/developers-practitioners/rightsizing-llm-serving-on-vllm-for-gpus-and-tpus/
Feedly Summary: Additional contributors include Hossein Sarshar and Ashish Narasimham. Large Language Models (LLMs) are revolutionizing how we interact with technology, but serving these powerful models efficiently can be a challenge. vLLM has rapidly become the primary choice for…
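Rightsizing ultimately reduces to whether model weights plus KV cache fit in accelerator memory at the target batch size and context length. A back-of-envelope sketch of that check; all shapes and sizes below are assumed example values, not figures from the post:

```python
# Back-of-envelope memory check for rightsizing an LLM deployment.
# Every number here is an illustrative assumption.

def fits(num_params_b: float, bytes_per_param: int,
         num_layers: int, num_kv_heads: int, head_dim: int,
         kv_bytes: int, max_model_len: int, max_num_seqs: int,
         hbm_gb: float, utilization: float = 0.9) -> bool:
    # Weights: parameters (in billions) times bytes per parameter gives GB.
    weights_gb = num_params_b * bytes_per_param
    # KV cache per token = 2 (K and V) * layers * kv_heads * head_dim * bytes.
    kv_per_token = 2 * num_layers * num_kv_heads * head_dim * kv_bytes
    kv_gb = kv_per_token * max_model_len * max_num_seqs / 1e9
    return weights_gb + kv_gb <= hbm_gb * utilization

# Example: an 8B bf16 model with grouped-query attention on a
# hypothetical 80 GB accelerator, 32 concurrent 8k-token sequences.
print(fits(num_params_b=8, bytes_per_param=2, num_layers=32,
           num_kv_heads=8, head_dim=128, kv_bytes=2,
           max_model_len=8192, max_num_seqs=32, hbm_gb=80))
```

If the check fails, the usual levers are a higher tensor-parallel degree, a shorter `max_model_len`, a smaller `max_num_seqs`, or quantized weights and KV cache.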