Tag: embedding model
-
Cloud Blog: Introducing BigQuery ObjectRef: Supercharge your multimodal data and AI processing
Source URL: https://cloud.google.com/blog/products/data-analytics/new-objectref-data-type-brings-unstructured-data-into-bigquery/
Feedly Summary: Traditional data warehouses simply can’t keep up with today’s analytics workloads. That’s because today, most data that’s generated is both unstructured and multimodal (documents, audio files, images, and videos). With the complexity of cleaning and transforming…
-
Cloud Blog: New G4 VMs with NVIDIA RTX PRO 6000 Blackwell power AI, graphics, gaming and beyond
Source URL: https://cloud.google.com/blog/products/compute/introducing-g4-vm-with-nvidia-rtx-pro-6000/
Feedly Summary: Today, we’re excited to announce the preview of our new G4 VMs based on NVIDIA RTX PRO 6000 Blackwell Server edition — the first cloud provider to do so. This follows…
-
Simon Willison’s Weblog: Qwen3 Embedding
Source URL: https://simonwillison.net/2025/Jun/8/qwen3-embedding/#atom-everything
Feedly Summary: New family of embedding models from Qwen, in three sizes: 0.6B, 4B, 8B – and two categories: Text Embedding and Text Reranking. The full collection can be browsed on Hugging Face. The smallest available model is the 0.6B Q8 one, which…
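As a rough illustration of how the Text Embedding variants can be used, here is a minimal sketch with the sentence-transformers library, assuming the smallest checkpoint is published on Hugging Face as Qwen/Qwen3-Embedding-0.6B (the model ID is an assumption, not taken from the post):

```python
# Minimal sketch: semantic similarity with the smallest Qwen3 embedding model.
# Assumes the Hugging Face model ID "Qwen/Qwen3-Embedding-0.6B"; adjust if it differs.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

query = "How do I read a CSV file in Python?"
documents = [
    "pandas.read_csv loads a CSV file into a DataFrame.",
    "The capital of France is Paris.",
]

# Encode everything into dense vectors and score documents against the query.
query_emb = model.encode([query], normalize_embeddings=True)
doc_embs = model.encode(documents, normalize_embeddings=True)
print(util.cos_sim(query_emb, doc_embs))  # higher score = more relevant
```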
-
Cloud Blog: Google is a Leader in the 2025 Gartner® Magic Quadrant™ for Data Science and Machine Learning Platforms report
Source URL: https://cloud.google.com/blog/products/ai-machine-learning/gartner-2025-magic-quadrant-for-data-science-and-ml-platforms/
Feedly Summary: Today, we are excited to announce that Gartner® has named Google as a Leader in the 2025 Magic Quadrant™ for Data Science and Machine Learning Platforms (DSML) report.…
-
Simon Willison’s Weblog: llm-mistral 0.14
Source URL: https://simonwillison.net/2025/May/29/llm-mistral-014/#atom-everything
Feedly Summary: I added tool support to my plugin for accessing the Mistral API from LLM today, plus support for Mistral’s new Codestral Embed embedding model. An interesting challenge here is that I’m not using an official client library for llm-mistral – I rolled…
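For context, llm-mistral plugs into LLM’s standard embedding API, so the new model is usable from Python as well as the CLI. A minimal sketch, assuming the plugin registers Codestral Embed under the model ID "codestral-embed" (the ID is an assumption; `llm embed-models` lists the real ones) and that a Mistral API key is already configured:

```python
# Minimal sketch of using an embedding model registered by the llm-mistral plugin
# through LLM's Python API. Prerequisites (run once on the command line):
#   llm install llm-mistral
#   llm keys set mistral
# The model ID "codestral-embed" is an assumption; `llm embed-models` shows the
# IDs the installed plugins actually register.
import llm

model = llm.get_embedding_model("codestral-embed")
vector = model.embed("def parse_csv(path): ...")  # returns a list of floats
print(len(vector), vector[:5])
```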
-
Simon Willison’s Weblog: Codestral Embed
Source URL: https://simonwillison.net/2025/May/28/codestral-embed/#atom-everything
Feedly Summary: Brand new embedding model from Mistral, specifically trained for code. Mistral claim that: Codestral Embed significantly outperforms leading code embedders in the market today: Voyage Code 3, Cohere Embed v4.0 and OpenAI’s large embedding model. The model is designed to work…
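For a sense of what calling the model directly looks like, here is a minimal sketch with Mistral’s Python SDK, assuming the API exposes it under the name "codestral-embed" (the model name is an assumption drawn from the product name, not confirmed by the post):

```python
# Minimal sketch: embedding code snippets with Mistral's Python SDK (mistralai >= 1.0).
# The model name "codestral-embed" is an assumption; check Mistral's model listing.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

snippets = [
    "def add(a, b):\n    return a + b",
    "SELECT id, name FROM users WHERE active = TRUE;",
]

response = client.embeddings.create(model="codestral-embed", inputs=snippets)
for item in response.data:
    print(len(item.embedding), item.embedding[:3])
```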
-
Simon Willison’s Weblog: Build AI agents with the Mistral Agents API
Source URL: https://simonwillison.net/2025/May/27/mistral-agents-api/
Feedly Summary: Big upgrade to Mistral’s API this morning: they’ve announced a new “Agents API”. Mistral have been using the term “agents” for a while now. Here’s how they describe them:…
-
Cloud Blog: Google Cloud and Spring AI 1.0
Source URL: https://cloud.google.com/blog/topics/developers-practitioners/google-cloud-and-spring-ai-10/
Feedly Summary: A big thank you to Fran Hinkelmann and Aaron Wanjala for their contributions and support in making this blog post happen. After a period of intense development, Spring AI 1.0 has officially landed, bringing a robust and comprehensive solution for AI…
-
Simon Willison’s Weblog: Long context support in LLM 0.24 using fragments and template plugins
Source URL: https://simonwillison.net/2025/Apr/7/long-context-llm/#atom-everything
Feedly Summary: LLM 0.24 is now available with new features to help take advantage of the increasingly long input context supported by modern LLMs. (LLM is my command-line tool and Python library for interacting with LLMs,…
-
Simon Willison’s Weblog: Nomic Embed Code: A State-of-the-Art Code Retriever
Source URL: https://simonwillison.net/2025/Mar/27/nomic-embed-code/
Feedly Summary: Nomic have released a new embedding model that specializes in code, based on their CoRNStack “large-scale high-quality training dataset specifically curated for code retrieval”. The nomic-embed-code model is pretty large –…
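Retrieval is the typical use for a model like this. A rough sketch via sentence-transformers, assuming the Hugging Face ID nomic-ai/nomic-embed-code and enough GPU memory for a model of this size (the ID and loading details are assumptions; the model card may also call for a task-specific query prefix):

```python
# Rough sketch: natural-language-to-code retrieval with nomic-embed-code.
# Assumptions: the Hugging Face ID "nomic-ai/nomic-embed-code" and that the model
# loads via sentence-transformers; it is a large model, so expect heavy GPU use.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("nomic-ai/nomic-embed-code", trust_remote_code=True)

query = "function that checks whether a number is prime"
code_snippets = [
    "def is_prime(n):\n    return n > 1 and all(n % i for i in range(2, int(n**0.5) + 1))",
    "def reverse_string(s):\n    return s[::-1]",
]

query_emb = model.encode([query], normalize_embeddings=True)
code_embs = model.encode(code_snippets, normalize_embeddings=True)
scores = (query_emb @ code_embs.T)[0]  # cosine similarity on normalized vectors
print(sorted(zip(scores, code_snippets), reverse=True)[0])
```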