Tag: Kubernetes Engine

  • Cloud Blog: How to build a digital twin to boost resilience

    Source URL: https://cloud.google.com/blog/products/identity-security/how-to-build-a-digital-twin-to-boost-resilience/
    Feedly Summary: “There’s no red teaming on the factory floor,” isn’t an OSHA safety warning, but it should be — and for good reason. Adversarial testing in most, if not all, manufacturing production environments is prohibited because the safety…

  • Cloud Blog: Train AI for less: Improve ML Goodput with elastic training and optimized checkpointing

    Source URL: https://cloud.google.com/blog/products/ai-machine-learning/elastic-training-and-optimized-checkpointing-improve-ml-goodput/
    Feedly Summary: Want to save some money on large AI training? For a typical PyTorch LLM training workload that spans thousands of accelerators for several weeks, a 1% improvement in ML Goodput can translate to…

  • Cloud Blog: How Confidential Computing lays the foundation for trusted AI

    Source URL: https://cloud.google.com/blog/products/identity-security/how-confidential-computing-lays-the-foundation-for-trusted-ai/
    Feedly Summary: Confidential Computing has redefined how organizations can securely process their sensitive workloads in the cloud. The growth in our hardware ecosystem is fueling a new wave of adoption, enabling customers to use Confidential Computing to support cutting-edge…

  • Cloud Blog: Introducing the next generation of AI inference, powered by llm-d

    Source URL: https://cloud.google.com/blog/products/ai-machine-learning/enhancing-vllm-for-distributed-inference-with-llm-d/
    Feedly Summary: As the world transitions from prototyping AI solutions to deploying AI at scale, efficient AI inference is becoming the gating factor. Two years ago, the challenge was the ever-growing size of AI models. Cloud infrastructure providers…

  • Cloud Blog: AI Hypercomputer developer experience enhancements from Q1 25: build faster, scale bigger

    Source URL: https://cloud.google.com/blog/products/compute/ai-hypercomputer-enhancements-for-the-developer/
    Feedly Summary: Building cutting-edge AI models is exciting, whether you’re iterating in your notebook or orchestrating large clusters. However, scaling up training can present significant challenges, including navigating complex infrastructure, configuring software and dependencies across numerous…

  • Cloud Blog: From LLMs to image generation: Accelerate inference workloads with AI Hypercomputer

    Source URL: https://cloud.google.com/blog/products/compute/ai-hypercomputer-inference-updates-for-google-cloud-tpu-and-gpu/
    Feedly Summary: From retail to gaming, from code generation to customer care, an increasing number of organizations are running LLM-based applications, with 78% of organizations in development or production today. As the number of generative AI applications…

  • Cloud Blog: Cloud CISO Perspectives: 27 security announcements at Next ’25

    Source URL: https://cloud.google.com/blog/products/identity-security/cloud-ciso-perspectives-27-security-announcements-next-25/
    Feedly Summary: Welcome to the first Cloud CISO Perspectives for April 2025. Today, Google Cloud Security’s Peter Bailey reviews our top 27 security announcements from Next ’25. As with all Cloud CISO Perspectives, the contents of this newsletter are posted…