Tag: release notes
-
Cloud Blog: Improving model performance with PyTorch/XLA 2.6
Source URL: https://cloud.google.com/blog/products/application-development/pytorch-xla-2-6-helps-improve-ai-model-performance/
Source: Cloud Blog
Title: Improving model performance with PyTorch/XLA 2.6
Feedly Summary: For developers who want to use the PyTorch deep learning framework with Cloud TPUs, the PyTorch/XLA Python package is key, offering developers a way to run their PyTorch models on Cloud TPUs with only a few minor code changes. It…
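A minimal sketch of what those "few minor code changes" typically look like, assuming the torch_xla package is installed on a Cloud TPU host; the model, batch, and hyperparameters below are placeholders for illustration, not taken from the post.

    import torch
    import torch_xla.core.xla_model as xm

    # Target the XLA device (a Cloud TPU core) instead of "cuda" or "cpu".
    device = xm.xla_device()

    # Placeholder model and batch; in practice these come from your own code.
    model = torch.nn.Linear(128, 10).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    inputs = torch.randn(32, 128, device=device)
    targets = torch.randint(0, 10, (32,), device=device)

    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    loss.backward()
    optimizer.step()

    # mark_step() cuts the lazily built graph so XLA compiles and runs it on the TPU.
    xm.mark_step()

Apart from choosing the XLA device and marking the step boundary, the training loop is ordinary PyTorch.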
-
Simon Willison’s Weblog: LLM 0.20
Source URL: https://simonwillison.net/2025/Jan/23/llm-020/#atom-everything
Source: Simon Willison’s Weblog
Title: LLM 0.20
Feedly Summary: LLM 0.20 New release of my LLM CLI tool and Python library. A bunch of accumulated fixes and features since the start of December, most notably: Support for OpenAI’s o1 model – a significant upgrade from o1-preview given its 200,000 input and 100,000…
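A rough sketch of driving the new o1 support from LLM's Python API, assuming LLM 0.20 is installed and an OpenAI API key is configured; the model ID string and prompt are assumptions for illustration.

    import llm

    # Look up the newly supported model by ID (assumed to be registered as "o1").
    model = llm.get_model("o1")

    # The large input window means long documents can fit in a single prompt.
    response = model.prompt("Summarize these release notes in three bullet points.")
    print(response.text())

The same model should be reachable from the CLI by passing its ID with the -m option.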
-
Simon Willison’s Weblog: LLM 0.19
Source URL: https://simonwillison.net/2024/Dec/1/llm-019/
Source: Simon Willison’s Weblog
Title: LLM 0.19
Feedly Summary: LLM 0.19 I just released version 0.19 of LLM, my Python library and CLI utility for working with Large Language Models. I released 0.18 a couple of weeks ago adding support for calling models from Python asyncio code. 0.19 improves on that, and…
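A short sketch of the asyncio support introduced in 0.18 that 0.19 builds on, assuming an OpenAI API key is configured; the model ID and prompt are placeholders.

    import asyncio
    import llm

    async def main():
        # get_async_model() returns the awaitable variant of a model.
        model = llm.get_async_model("gpt-4o-mini")
        response = await model.prompt("Name three uses for a command-line LLM tool.")
        print(await response.text())

    asyncio.run(main())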
-
Cloud Blog: Boost your Continuous Delivery pipeline with Generative AI
Source URL: https://cloud.google.com/blog/topics/developers-practitioners/boost-your-continuous-delivery-pipeline-with-generative-ai/
Source: Cloud Blog
Title: Boost your Continuous Delivery pipeline with Generative AI
Feedly Summary: In the domain of software development, AI-driven assistance is emerging as a transformative force to enhance developer experience and productivity, and ultimately to optimize overall software delivery performance. Many organizations have started to leverage AI-based assistants, such as Gemini Code…