Tag: pytorch

  • Cloud Blog: Connect Spark data pipelines to Gemini and other AI models with Dataproc ML library

    Source URL: https://cloud.google.com/blog/products/data-analytics/gemini-and-vertex-ai-for-spark-with-dataproc-ml-library/
    Feedly Summary: Many data science teams rely on Apache Spark running on Dataproc managed clusters for powerful, large-scale data preparation. As these teams look to connect their data pipelines directly to machine learning models,…
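    The excerpt does not show the Dataproc ML library's actual API, so the sketch below only illustrates the general pattern the post is about: calling Gemini from inside a Spark pipeline. It uses a plain PySpark pandas UDF and the google-genai client; the project, model id, and column names are assumptions for illustration, not the library's interface.

    ```python
    # Hedged sketch: enrich a Spark DataFrame by calling Gemini from a pandas UDF.
    # This is NOT the Dataproc ML library API, just the generic pattern the
    # library is meant to streamline. Project, model id, and columns are assumed.
    import pandas as pd
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import pandas_udf
    from google import genai  # google-genai client, using Application Default Credentials

    spark = SparkSession.builder.appName("gemini-enrichment").getOrCreate()

    @pandas_udf("string")
    def summarize(texts: pd.Series) -> pd.Series:
        # One client per batch of rows on the executor.
        client = genai.Client(vertexai=True, project="my-project", location="us-central1")
        out = []
        for text in texts:
            resp = client.models.generate_content(
                model="gemini-2.0-flash",  # assumed model id
                contents=f"Summarize in one sentence: {text}",
            )
            out.append(resp.text)
        return pd.Series(out)

    df = spark.createDataFrame([("Spark pipelines can call ML models directly.",)], ["doc"])
    df.withColumn("summary", summarize("doc")).show(truncate=False)
    ```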

  • Cloud Blog: AI Innovators: How JAX on TPU is helping Escalante advance AI-driven protein design

    Source URL: https://cloud.google.com/blog/topics/customers/escalante-uses-jax-on-tpus-for-ai-driven-protein-design/
    Feedly Summary: As a Python library for accelerator-oriented array computation and program transformation, JAX is widely recognized for its power in training large-scale AI models. But its core design as a system for composable function…
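    The summary cuts off mid-sentence, but the property it is pointing at, JAX as a system of composable function transformations, is easy to show: `grad`, `vmap`, and `jit` are ordinary functions that stack on top of plain Python code. A minimal sketch:

    ```python
    # jit(vmap(grad(f))) compiles a batched gradient of a plain Python function,
    # illustrating how JAX transformations compose.
    import jax
    import jax.numpy as jnp

    def energy(x):
        # Toy scalar "energy" of a parameter vector.
        return jnp.sum(jnp.sin(x) ** 2)

    # Per-example gradients, vectorized over a batch, then JIT-compiled
    # for CPU/GPU/TPU.
    batched_grad = jax.jit(jax.vmap(jax.grad(energy)))

    batch = jnp.stack([jnp.linspace(0.0, 1.0, 4), jnp.linspace(1.0, 2.0, 4)])
    print(batched_grad(batch))  # shape (2, 4): one gradient per example
    ```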

  • Cloud Blog: Supercharge ML performance on xPUs with the new XProf profiler and Cloud Diagnostics XProf library

    Source URL: https://cloud.google.com/blog/topics/developers-practitioners/supercharge-ml-performance-on-xpus-with-the-new-xprof-profiler-and-cloud-diagnostics-xprof-library/
    Feedly Summary: Are you spending more time debugging ML model performance than you are building? You’re not alone. In today’s fast-paced AI landscape, optimizing models is a complex challenge, from navigating new model…
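    The excerpt does not include the Cloud Diagnostics XProf library's commands, but a trace that XProf can open is typically captured from the framework side. The sketch below uses JAX's built-in profiler (the `jax.profiler.trace` context manager); the output directory and workload are assumptions.

    ```python
    # Hedged sketch: capture a profiler trace from a JAX program. The resulting
    # trace directory can then be inspected with XProf / the TensorBoard profiler.
    import jax
    import jax.numpy as jnp

    @jax.jit
    def step(x):
        return jnp.tanh(x @ x.T).sum()

    x = jnp.ones((1024, 1024))
    step(x).block_until_ready()  # warm up and compile outside the trace window

    with jax.profiler.trace("/tmp/xprof-demo"):  # assumed log directory
        for _ in range(10):
            step(x).block_until_ready()
    # Point XProf (or TensorBoard's profiler plugin) at /tmp/xprof-demo to view it.
    ```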

  • Simon Willison’s Weblog: Defeating Nondeterminism in LLM Inference

    Source URL: https://simonwillison.net/2025/Sep/11/defeating-nondeterminism/#atom-everything
    Feedly Summary: Defeating Nondeterminism in LLM Inference A very common question I see about LLMs concerns why they can’t be made to deliver the same response to the same prompt by setting a fixed random number seed. Like many others I had…
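    A fixed seed only pins down the sampling step; it does not control the order in which floating-point reductions run inside batched kernels, and floating-point addition is not associative, so results can drift even at temperature 0. A tiny demonstration of that underlying fact:

    ```python
    # Summing the same float32 values in two different orders (as happens when
    # batch size or kernel tiling changes) typically gives results that differ
    # in the low bits — one root cause of nondeterministic logits despite a
    # fixed sampling seed.
    import numpy as np

    x = np.random.default_rng(0).standard_normal(1_000_000).astype(np.float32)

    total_once = np.sum(x)                                       # one reduction order
    total_chunked = sum(np.sum(c) for c in np.array_split(x, 7))  # a different order

    print(total_once, total_chunked, total_once == total_chunked)
    ```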

  • Cloud Blog: Announcing a new monitoring library to optimize TPU performance

    Source URL: https://cloud.google.com/blog/products/compute/new-monitoring-library-to-optimize-google-cloud-tpu-resources/
    Feedly Summary: For more than a decade, TPUs have powered Google’s most demanding AI training and serving workloads. And there is strong demand from customers for Cloud TPUs as well. When running advanced AI workloads, you need to be…