Tag: accelerator
-
The Register: Uncle Sam floats tracking tech to keep AI chips out of China
Source URL: https://www.theregister.com/2025/08/05/us_ai_chip_tracking/
Source: The Register
Title: Uncle Sam floats tracking tech to keep AI chips out of China
Feedly Summary: Plan would embed location verification in advanced semiconductors to combat black market exports. The Trump administration wants better ways to track the location of chips, as part of attempts to prevent advanced AI accelerator…
-
Cloud Blog: Understanding Calendar mode for Dynamic Workload Scheduler: Reserve ML GPUs and TPUs
Source URL: https://cloud.google.com/blog/products/compute/dynamic-workload-scheduler-calendar-mode-reserves-gpus-and-tpus/
Source: Cloud Blog
Title: Understanding Calendar mode for Dynamic Workload Scheduler: Reserve ML GPUs and TPUs
Feedly Summary: Organizations need ML compute resources that can accommodate bursty peaks and periodic troughs. That means the consumption models for AI infrastructure need to evolve to be more cost-efficient, provide term flexibility, and support rapid…
-
The Register: How AI chip upstart FuriosaAI won over LG with its power-sipping design
Source URL: https://www.theregister.com/2025/07/22/sk_furiosa_ai_lg/
Source: The Register
Title: How AI chip upstart FuriosaAI won over LG with its power-sipping design
Feedly Summary: Testing shows RNGD chips deliver up to 2.25x higher performance per watt than… five-year-old Nvidia silicon. South Korean AI chip startup FuriosaAI scored a major customer win this week after LG's AI Research division tapped…
-
Cloud Blog: Announcing a new monitoring library to optimize TPU performance
Source URL: https://cloud.google.com/blog/products/compute/new-monitoring-library-to-optimize-google-cloud-tpu-resources/
Source: Cloud Blog
Title: Announcing a new monitoring library to optimize TPU performance
Feedly Summary: For more than a decade, TPUs have powered Google's most demanding AI training and serving workloads. And there is strong demand from customers for Cloud TPUs as well. When running advanced AI workloads, you need to be…