Tag: token count

  • Simon Willison’s Weblog: Google Gemini URL Context

    Source URL: https://simonwillison.net/2025/Aug/18/google-gemini-url-context/
    Source: Simon Willison’s Weblog
    Title: Google Gemini URL Context
    Feedly Summary: Google Gemini URL Context New feature in the Gemini API: you can now enable a url_context tool which the models can use to request the contents of URLs as part of replying to a prompt. I released llm-gemini 0.25 with a…
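
    A hedged sketch, not taken from the post: one way to exercise the new url_context tool is a raw REST call to the Gemini API. The model name, prompt, and GEMINI_API_KEY environment variable are assumptions for illustration; check the current docs for the exact request shape.

    ```python
    # Sketch: enabling the url_context tool in a Gemini generateContent request.
    # Assumes a GEMINI_API_KEY environment variable and the gemini-2.5-flash model.
    import os
    import requests

    API_KEY = os.environ["GEMINI_API_KEY"]
    URL = "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent"

    payload = {
        "contents": [
            {"parts": [{"text": "Summarize https://simonwillison.net/2025/Aug/18/google-gemini-url-context/"}]}
        ],
        # Declaring the tool lets the model fetch URL contents while answering.
        "tools": [{"url_context": {}}],
    }

    resp = requests.post(URL, json=payload, headers={"x-goog-api-key": API_KEY}, timeout=60)
    resp.raise_for_status()
    print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])
    ```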

  • Tomasz Tunguz: The Surprising Input-to-Output Ratio of AI Models

    Source URL: https://www.tomtunguz.com/input-output-ratio/
    Source: Tomasz Tunguz
    Title: The Surprising Input-to-Output Ratio of AI Models
    Feedly Summary: When you query an AI model, it gathers relevant information to generate an answer. For a while, I’ve wondered: how much information does the model need to answer a question? I thought the output would be larger, however…
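
    As a hedged illustration of the quantity being measured (not Tunguz’s methodology), the ratio is just aggregate input tokens divided by aggregate output tokens across a set of calls; the per-request counts below are invented.

    ```python
    # Sketch: computing an input-to-output token ratio from per-request usage counts.
    # The (input_tokens, output_tokens) pairs are made up for illustration.
    usage = [
        (12_400, 310),   # e.g. a retrieval-heavy prompt with a short answer
        (8_900, 250),
        (15_200, 420),
    ]

    total_in = sum(i for i, _ in usage)
    total_out = sum(o for _, o in usage)
    print(f"input:output ratio ≈ {total_in / total_out:.0f}:1")
    ```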

  • Simon Willison’s Weblog: Trying out the new Gemini 2.5 model family

    Source URL: https://simonwillison.net/2025/Jun/17/gemini-2-5/
    Source: Simon Willison’s Weblog
    Title: Trying out the new Gemini 2.5 model family
    Feedly Summary: After many months of previews, Gemini 2.5 Pro and Flash have reached general availability with new, memorable model IDs: gemini-2.5-pro and gemini-2.5-flash. They are joined by a new preview model with an unmemorable name: gemini-2.5-flash-lite-preview-06-17 is a…
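
    A hedged way to confirm which of these model IDs your key can see is the Gemini API’s ListModels endpoint; the key handling and prefix filtering below are assumptions for illustration (and the endpoint paginates, which this sketch ignores).

    ```python
    # Sketch: listing available models whose IDs start with "gemini-2.5".
    # Assumes a GEMINI_API_KEY environment variable.
    import os
    import requests

    resp = requests.get(
        "https://generativelanguage.googleapis.com/v1beta/models",
        headers={"x-goog-api-key": os.environ["GEMINI_API_KEY"]},
        timeout=30,
    )
    resp.raise_for_status()

    for model in resp.json().get("models", []):
        name = model["name"].removeprefix("models/")
        if name.startswith("gemini-2.5"):
            print(name)
    ```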

  • Simon Willison’s Weblog: llm-mistral 0.14

    Source URL: https://simonwillison.net/2025/May/29/llm-mistral-014/#atom-everything
    Source: Simon Willison’s Weblog
    Title: llm-mistral 0.14
    Feedly Summary: llm-mistral 0.14 I added tool-support to my plugin for accessing the Mistral API from LLM today, plus support for Mistral’s new Codestral Embed embedding model. An interesting challenge here is that I’m not using an official client library for llm-mistral – I rolled…
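
    In the same spirit of talking to the Mistral API without an official client, here is a hedged sketch of a direct embeddings call. The MISTRAL_API_KEY variable is an assumption, and "codestral-embed" is the model name as announced; verify both against the current Mistral docs rather than treating this as the plugin’s actual code.

    ```python
    # Sketch: calling Mistral's embeddings endpoint directly with requests, no official SDK.
    import os
    import requests

    resp = requests.post(
        "https://api.mistral.ai/v1/embeddings",
        headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
        json={"model": "codestral-embed", "input": ["def fibonacci(n): ..."]},
        timeout=30,
    )
    resp.raise_for_status()
    embedding = resp.json()["data"][0]["embedding"]
    print(len(embedding), "dimensions")
    ```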

  • Cloud Blog: Announcing Anthropic’s Claude Opus 4 and Claude Sonnet 4 on Vertex AI

    Source URL: https://cloud.google.com/blog/products/ai-machine-learning/anthropics-claude-opus-4-and-claude-sonnet-4-on-vertex-ai/
    Source: Cloud Blog
    Title: Announcing Anthropic’s Claude Opus 4 and Claude Sonnet 4 on Vertex AI
    Feedly Summary: Today, we’re expanding the choice of third-party models available in Vertex AI Model Garden with the addition of Anthropic’s newest generation of the Claude model family: Claude Opus 4 and Claude Sonnet 4. Both…
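
    A hedged sketch of calling one of these models through the anthropic SDK’s Vertex client; the project ID, region, and dated model ID are placeholders to confirm against Model Garden, not values from the announcement.

    ```python
    # Sketch: calling Claude Sonnet 4 on Vertex AI via the anthropic SDK's Vertex client.
    # Project ID, region, and the dated model ID are placeholders.
    from anthropic import AnthropicVertex

    client = AnthropicVertex(project_id="my-gcp-project", region="us-east5")

    message = client.messages.create(
        model="claude-sonnet-4@20250514",  # assumed Vertex model ID format
        max_tokens=512,
        messages=[{"role": "user", "content": "In one sentence, what are Claude Opus 4 and Claude Sonnet 4?"}],
    )
    print(message.content[0].text)
    ```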

  • Simon Willison’s Weblog: Gemini 2.5: Our most intelligent models are getting even better

    Source URL: https://simonwillison.net/2025/May/20/gemini-25/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Gemini 2.5: Our most intelligent models are getting even better
    Feedly Summary: Gemini 2.5: Our most intelligent models are getting even better A bunch of new Gemini 2.5 announcements at Google I/O today. 2.5 Flash and 2.5 Pro are both getting audio output (previously previewed in Gemini…

  • Simon Willison’s Weblog: Maybe Meta’s Llama claims to be open source because of the EU AI act

    Source URL: https://simonwillison.net/2025/Apr/19/llama-eu-ai-act/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Maybe Meta’s Llama claims to be open source because of the EU AI act
    Feedly Summary: I encountered a theory a while ago that one of the reasons Meta insist on using the term “open source” for their Llama models despite the Llama license not actually conforming…

  • Simon Willison’s Weblog: Gemini 2.5 Pro Preview pricing

    Source URL: https://simonwillison.net/2025/Apr/4/gemini-25-pro-pricing/
    Source: Simon Willison’s Weblog
    Title: Gemini 2.5 Pro Preview pricing
    Feedly Summary: Gemini 2.5 Pro Preview pricing Google’s Gemini 2.5 Pro is currently the top model on LM Arena and, from my own testing, a superb model for OCR, audio transcription and long-context coding. You can now pay for it! The new…
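
    Since the post is about per-token pricing, here is a hedged cost estimator. The rates are placeholders rather than the actual Gemini 2.5 Pro prices, and it ignores any tiering by prompt length; substitute the published per-million-token figures from the pricing page.

    ```python
    # Sketch: estimating the USD cost of one request from token counts and
    # per-million-token rates. The rates below are placeholders, not real prices.
    INPUT_RATE_PER_M = 1.00    # placeholder: USD per million input tokens
    OUTPUT_RATE_PER_M = 10.00  # placeholder: USD per million output tokens

    def estimate_cost(input_tokens: int, output_tokens: int) -> float:
        """Return the estimated cost in USD for a single request."""
        return (
            input_tokens / 1_000_000 * INPUT_RATE_PER_M
            + output_tokens / 1_000_000 * OUTPUT_RATE_PER_M
        )

    # Example: a long-context OCR prompt with a modest response.
    print(f"${estimate_cost(150_000, 2_000):.4f}")
    ```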