Tag: GKE
-
Cloud Blog: Back to AI school: New Google Cloud training to future-proof your AI skills
Source URL: https://cloud.google.com/blog/topics/training-certifications/new-google-cloud-training-to-future-proof-ai-skills/
Feedly Summary: Getting ahead — and staying ahead — of the demand for AI skills isn’t just key for those looking for a new role. Research shows proving your skills through credentials drives promotion, salary…
-
Cloud Blog: GKE network interface at 10: From core connectivity to the AI backbone
Source URL: https://cloud.google.com/blog/products/networking/gke-network-interface-from-kubenet-to-ebpfcilium-to-dranet/
Feedly Summary: It’s hard to believe it’s been over 10 years since Kubernetes first set sail, fundamentally changing how we build, deploy, and manage applications. Google Cloud was at the forefront of the Kubernetes revolution with…
-
Cloud Blog: Cloud CISO Perspectives: APAC security leaders speak out on AI and key topics
Source URL: https://cloud.google.com/blog/products/identity-security/cloud-ciso-perspectives-apac-security-leaders-speak-out-on-ai/
Feedly Summary: Welcome to the first Cloud CISO Perspectives for September 2025. Today, Daryl Pereira and Hui Meng Foo, from our Office of the CISO’s Asia-Pacific office, share insights on AI from security leaders who…
-
Cloud Blog: Scaling high-performance inference cost-effectively
Source URL: https://cloud.google.com/blog/products/ai-machine-learning/gke-inference-gateway-and-quickstart-are-ga/
Feedly Summary: At Google Cloud Next 2025, we announced new inference capabilities with GKE Inference Gateway, including support for vLLM on TPUs, Ironwood TPUs, and Anywhere Cache. Our inference solution is based on AI Hypercomputer, a system built on our experience running models like…