Simon Willison’s Weblog: Google Gemini URL Context

Source URL: https://simonwillison.net/2025/Aug/18/google-gemini-url-context/
Source: Simon Willison’s Weblog
Title: Google Gemini URL Context

Feedly Summary: Google Gemini URL Context
New feature in the Gemini API: you can now enable a url_context tool which the models can use to request the contents of URLs as part of replying to a prompt.
I released llm-gemini 0.25 with a new -o url_context 1 option adding support for this feature. You can try it out like this:
llm install -U llm-gemini
llm keys set gemini # If you need to set an API key
llm -m gemini-2.5-flash -o url_context 1 \
'Latest headline on simonwillison.net'
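Under the hood, enabling the tool corresponds to adding a `url_context` entry to the `tools` array of a Gemini `generateContent` request. A minimal sketch of that raw REST payload, based on Google's URL context documentation (field names and endpoint should be verified against the current docs):

```python
import json

MODEL = "gemini-2.5-flash"
# Endpoint shape per the Gemini REST API docs at the time of writing.
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent"
)

payload = {
    "contents": [
        {"parts": [{"text": "Latest headline on simonwillison.net"}]}
    ],
    # An empty url_context object enables the tool; the model then decides
    # which URLs (if any) to fetch while answering the prompt.
    "tools": [{"url_context": {}}],
}

print(json.dumps(payload, indent=2))
```

Sending this with an `x-goog-api-key` header is what `llm -o url_context 1` arranges for you.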

Tokens from the fetched content are charged as input tokens. Use llm logs -c --usage to see that token count:
# 2025-08-18T23:52:46 conversation: 01k2zsk86pyp8p5v7py38pg3ge id: 01k2zsk17k1d03veax49532zs2

Model: **gemini/gemini-2.5-flash**

## Prompt

Latest headline on simonwillison.net

## Response

The latest headline on simonwillison.net as of August 17, 2025, is “TIL: Running a gpt-oss eval suite against LM Studio on a Mac.”

## Token usage

9,613 input, 87 output, {"candidatesTokenCount": 57, "promptTokensDetails": [{"modality": "TEXT", "tokenCount": 10}], "toolUsePromptTokenCount": 9603, "toolUsePromptTokensDetails": [{"modality": "TEXT", "tokenCount": 9603}], "thoughtsTokenCount": 30}
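The usage details add up: the 9,613 input tokens are the 10 prompt tokens plus the 9,603 tokens of fetched page content, and the 87 output tokens are the 57 visible response tokens plus 30 thinking tokens. A quick sanity check in Python over that same usage JSON:

```python
import json

# The extra usage JSON reported by `llm logs -c --usage`, verbatim.
usage = json.loads(
    '{"candidatesTokenCount": 57, '
    '"promptTokensDetails": [{"modality": "TEXT", "tokenCount": 10}], '
    '"toolUsePromptTokenCount": 9603, '
    '"toolUsePromptTokensDetails": [{"modality": "TEXT", "tokenCount": 9603}], '
    '"thoughtsTokenCount": 30}'
)

# Input = prompt tokens + tokens of URL content fetched by the tool.
input_tokens = (
    usage["promptTokensDetails"][0]["tokenCount"]
    + usage["toolUsePromptTokenCount"]
)
# Output = visible response tokens + hidden "thinking" tokens.
output_tokens = usage["candidatesTokenCount"] + usage["thoughtsTokenCount"]

print(input_tokens, output_tokens)  # 9613 87
```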

Via @OfficialLoganK
Tags: google, ai, generative-ai, llms, llm, gemini

AI Summary and Description: Yes

Summary: The text describes a new feature in the Google Gemini API that lets models fetch the contents of URLs and use them when generating responses. The feature matters for AI development and security because it raises questions about how data fetched from the web is handled and incorporated into model output.

Detailed Description:
The provided text details a significant enhancement to the Google Gemini API, particularly through the introduction of a `url_context` tool. This development is relevant in the fields of AI and infrastructure security due to the implications it has on data handling and token management during model interactions.

– **New Feature**: The Gemini API now includes a `url_context` tool enabling models to pull contents from URLs when generating responses to prompts.
– **Integration**: Developers can integrate this feature by installing `llm-gemini` version 0.25 and using the specified command to initiate URL context fetching.
– **Token Management**: Tokens from the fetched content are billed as input tokens, so tracking token usage is critical for controlling costs and using API resources efficiently.
– **Example Use Case**: The prompt/response transcript demonstrates the feature in practice, fetching the latest headline from a specified website.
– **Model Reference**: The mention of the `gemini/gemini-2.5-flash` model indicates specific capabilities and the context in which this feature is deployed.
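Since fetched-content tokens are billed as input, per-call cost tracking follows directly from the usage numbers. A minimal sketch; the per-million-token prices below are placeholders for illustration, not Gemini's actual rates:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  in_price_per_m: float, out_price_per_m: float) -> float:
    """Rough per-call cost: token counts times price per million tokens."""
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

# Hypothetical prices in $ per million tokens -- substitute the real rates
# from Google's pricing page before using this for budgeting.
cost = estimate_cost(9_613, 87, in_price_per_m=0.30, out_price_per_m=2.50)
print(f"${cost:.6f}")
```

Note how the 9,603 tokens of fetched URL content dominate the bill here: one page fetch cost far more than the prompt itself.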

Implications for professionals in security and compliance:
– **Data Security**: Organizations leveraging this feature need to ensure that data fetched from URLs does not compromise user privacy or data integrity.
– **Token Usage Monitoring**: Continuous monitoring of token usage is essential for budgeting and resource allocation in AI operations.
– **Use of External Content**: Understanding the security posture when integrating external content into AI processes can help mitigate risks related to data accuracy and malicious content.

This new feature represents a notable advancement in enhancing interactive AI capabilities while also posing new challenges in maintaining security and compliance in AI systems.