Tag: Python library
-
Simon Willison’s Weblog: llm-mistral 0.14
Source URL: https://simonwillison.net/2025/May/29/llm-mistral-014/#atom-everything Source: Simon Willison’s Weblog Title: llm-mistral 0.14 Feedly Summary: llm-mistral 0.14 I added tool-support to my plugin for accessing the Mistral API from LLM today, plus support for Mistral’s new Codestral Embed embedding model. An interesting challenge here is that I’m not using an official client library for llm-mistral – I rolled…
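A minimal sketch of what the new Codestral Embed support might look like through LLM's Python embedding API; the `codestral-embed` model ID and the `llm install llm-mistral` setup step are assumptions based on the plugin's naming conventions, not details confirmed by the excerpt above.

```python
# A minimal sketch, assuming the llm-mistral plugin is installed
# (`llm install llm-mistral`) and a Mistral API key has been configured.
# The model ID "codestral-embed" is an assumption, not confirmed by the post.
import llm

embedding_model = llm.get_embedding_model("codestral-embed")
vector = embedding_model.embed("def hello():\n    print('hello world')")
print(len(vector))  # dimensionality of the returned embedding
```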
-
Simon Willison’s Weblog: Large Language Models can run tools in your terminal with LLM 0.26
Source URL: https://simonwillison.net/2025/May/27/llm-tools/ Source: Simon Willison’s Weblog Title: Large Language Models can run tools in your terminal with LLM 0.26 Feedly Summary: LLM 0.26 is out with the biggest new feature since I started the project: support for tools. You can now use the LLM CLI tool – and Python library – to grant LLMs…
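The release describes both a CLI flag and a Python API for tools; here is a minimal sketch of the Python side, passing a plain function as a tool (the model ID is just an example and requires the corresponding API key to be configured):

```python
import llm

def upper(text: str) -> str:
    """Convert the supplied text to uppercase."""
    return text.upper()

# Any function with type hints and a docstring can be offered as a tool;
# chain() keeps prompting until the model stops requesting tool calls.
model = llm.get_model("gpt-4.1-mini")
response = model.chain(
    "Convert 'hello world' to uppercase using your tool",
    tools=[upper],
)
print(response.text())
```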
-
Simon Willison’s Weblog: LLM 0.26a0 adds support for tools!
Source URL: https://simonwillison.net/2025/May/14/llm-adds-support-for-tools/#atom-everything Source: Simon Willison’s Weblog Title: LLM 0.26a0 adds support for tools! Feedly Summary: LLM 0.26a0 adds support for tools! It’s only an alpha so I’m not going to promote this extensively yet, but my LLM project just grew a feature I’ve been working towards for nearly two years now: tool support! I’m…
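Alongside passing tools directly at call time, this line of work also lets plugins register reusable tools; a rough sketch of such a plugin module, assuming the `register_tools` plugin hook described in the 0.26 release notes:

```python
# llm_tools_wordcount.py - a hypothetical plugin module, assuming LLM's
# register_tools plugin hook; it would be packaged with an "llm" entry
# point so LLM discovers it on install.
import llm

def word_count(text: str) -> int:
    """Count the number of words in the given text."""
    return len(text.split())

@llm.hookimpl
def register_tools(register):
    # Registered tools can then be selected by name from the CLI or Python API.
    register(word_count)
```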
-
Simon Willison’s Weblog: Qwen 3 offers a case study in how to effectively release a model
Source URL: https://simonwillison.net/2025/Apr/29/qwen-3/ Source: Simon Willison’s Weblog Title: Qwen 3 offers a case study in how to effectively release a model Feedly Summary: Alibaba’s Qwen team released the hotly anticipated Qwen 3 model family today. The Qwen models are already some of the best open weight models – Apache 2.0 licensed and with a variety…
-
Cloud Blog: Introducing BigQuery DataFrames 2.0 for the era of multimodal data science
Source URL: https://cloud.google.com/blog/products/data-analytics/a-closer-look-at-bigquery-dataframes-2-0/ Source: Cloud Blog Title: Introducing BigQuery DataFrames 2.0 for the era of multimodal data science Feedly Summary: For data scientists and ML engineers, building analysis and models in Python is almost second nature, and Python’s popularity in the data science community has only skyrocketed with the recent generative AI boom. We believe…
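A minimal sketch of the pandas-style bigframes API the post builds on, assuming a Google Cloud project with BigQuery enabled (the project ID below is a placeholder):

```python
# A minimal sketch, assuming `pip install bigframes` and a configured
# Google Cloud project; "my-gcp-project" is a hypothetical project ID.
import bigframes.pandas as bpd

bpd.options.bigquery.project = "my-gcp-project"

# read_gbq returns a DataFrame whose operations execute lazily in BigQuery.
df = bpd.read_gbq("bigquery-public-data.samples.shakespeare")
print(df.groupby("corpus")["word_count"].sum().head())
```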
-
Cloud Blog: MCP Toolbox for Databases: Simplify AI Agent Access to Enterprise Data
Source URL: https://cloud.google.com/blog/products/ai-machine-learning/mcp-toolbox-for-databases-now-supports-model-context-protocol/ Source: Cloud Blog Title: MCP Toolbox for Databases: Simplify AI Agent Access to Enterprise Data Feedly Summary: At Google Cloud Next 25, we announced incredible ways for enterprises to build multi-agent ecosystems with Vertex AI and Google Cloud Databases – including better ways for agents to communicate with each other using Agent2Agent…
-
Simon Willison’s Weblog: Long context support in LLM 0.24 using fragments and template plugins
Source URL: https://simonwillison.net/2025/Apr/7/long-context-llm/#atom-everything Source: Simon Willison’s Weblog Title: Long context support in LLM 0.24 using fragments and template plugins Feedly Summary: LLM 0.24 is now available with new features to help take advantage of the increasingly long input context supported by modern LLMs. (LLM is my command-line tool and Python library for interacting with LLMs,…
-
Simon Willison’s Weblog: smartfunc
Source URL: https://simonwillison.net/2025/Apr/3/smartfunc/ Source: Simon Willison’s Weblog Title: smartfunc Feedly Summary: smartfunc Vincent D. Warmerdam built this ingenious wrapper around my LLM Python library which lets you build LLM wrapper functions using a decorator and a docstring: from smartfunc import backend @backend("gpt-4o") def generate_summary(text: str): """Generate a summary of the following text: """ pass summary…
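The code in that summary was flattened onto one line by the feed; a reconstruction of the decorator pattern it shows, where the `{{ text }}` placeholder and the final call are assumptions about how smartfunc templates the docstring into a prompt:

```python
from smartfunc import backend

@backend("gpt-4o")
def generate_summary(text: str):
    """Generate a summary of the following text: {{ text }}"""
    pass

# Hypothetical usage: the decorator turns the docstring into a prompt and
# calls the backing model instead of the empty function body.
summary = generate_summary("LLM is a CLI tool and Python library for working with language models.")
print(summary)
```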
-
Cloud Blog: Speed up checkpoint loading time at scale using Orbax on JAX
Source URL: https://cloud.google.com/blog/products/compute/unlock-faster-workload-start-time-using-orbax-on-jax/ Source: Cloud Blog Title: Speed up checkpoint loading time at scale using Orbax on JAX Feedly Summary: Imagine training a new AI / ML model like Gemma 3 or Llama 3.3 across hundreds of powerful accelerators like TPUs or GPUs to achieve a scientific breakthrough. You might have a team of powerful…
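The post focuses on loading checkpoints quickly at scale; for reference, a minimal sketch of basic Orbax save/restore for a pytree of arrays (none of the scale-oriented loading optimizations are shown here):

```python
# A minimal sketch of plain Orbax checkpointing; the scale optimizations
# the post describes are not represented.
import numpy as np
import orbax.checkpoint as ocp

# A stand-in "model state": any pytree of arrays can be checkpointed.
state = {"params": {"w": np.ones((4, 4)), "b": np.zeros(4)}}

checkpointer = ocp.PyTreeCheckpointer()
checkpointer.save("/tmp/orbax_demo_ckpt", state)      # target directory must not already exist
restored = checkpointer.restore("/tmp/orbax_demo_ckpt")
print(restored["params"]["w"].shape)
```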
-
Hacker News: Show HN: Nuanced – Help AI understand code structure, not just text
Source URL: https://www.nuanced.dev/blog/initial-launch Source: Hacker News Title: Show HN: Nuanced – Help AI understand code structure, not just text Feedly Summary: Comments AI Summary and Description: Yes **Summary:** The text introduces Nuanced, an open-source Python library designed to enhance the capabilities of AI coding assistants by providing a structured representation of code dependencies through call…