Cloud Blog: A guide to Google ADK and MCP integration with an external server

Source URL: https://cloud.google.com/blog/topics/developers-practitioners/use-google-adk-and-mcp-with-an-external-server/
Source: Cloud Blog
Title: A guide to Google ADK and MCP integration with an external server

Feedly Summary: For AI-powered agents to perform useful, real-world tasks, they need to reliably access tools and up-to-the-minute information that lives outside the base model. Anthropic’s Model Context Protocol (MCP) is designed to address this, providing a standardized way for agents to retrieve that crucial, external context needed to inform their responses and actions.
This is vital for developers who need to build and deploy sophisticated agents that can leverage enterprise data or public tools. But integrating agents built with Google’s Agent Development Kit (ADK) to communicate effectively with an MCP server, especially one hosted externally, might present some integration challenges.
Today, we’ll guide you through developing ADK agents that connect to external MCP servers, initially using Server-Sent Events (SSE). As an example, we’ll build an ADK agent that leverages MCP to access Wikipedia articles, a common use case for retrieving specialized external data. We will also introduce Streamable HTTP, the next-generation transport protocol designed to succeed SSE for MCP communications.
A quick refresher
Before we start, let’s make sure we all understand the following terms:

SSE enables servers to push data to clients over a persistent HTTP connection. In a typical setup for MCP, this involved using two distinct endpoints: one for the client to send requests to the server (usually via HTTP POST) and a separate endpoint where the client would establish an SSE connection (HTTP GET) to receive streaming responses and server-initiated messages.

MCP is an open standard designed to standardize how Large Language Models (LLMs) interact with external data sources, APIs, and resources as agent tools. MCP aims to replace the current landscape of fragmented, custom integrations with a universal, standardized framework.

Streamable HTTP utilizes a single HTTP endpoint for both sending requests from the client to the server, and receiving responses and notifications from the server to the client.

Step 1: Create an MCP server 
You need the following Python packages installed in your virtual environment before proceeding. We will be using the uv tool in this blog.

```
# Pinned dependencies (for example, in your pyproject.toml or requirements file)
"beautifulsoup4==4.12.3",
"google-adk==0.3.0",
"html2text==2024.2.26",
"mcp[cli]==1.5.0",
"requests==2.32.3"
```

Here’s an explanation of the Python code in server.py:

It creates an instance of an MCP server using FastMCP

It defines a tool called extract_wikipedia_article decorated with @mcp.tool

It configures an SSE transport mechanism (SseServerTransport) so the MCP server can stream messages to clients in real time.

It creates a web application instance using the Starlette framework and defines two routes: /sse, where the client establishes the SSE connection, and /messages/, where the client posts its requests.

You can read more about the SSE transport protocol in the MCP specification.

```python
# File server.py

import requests
from requests.exceptions import RequestException
from bs4 import BeautifulSoup
from html2text import html2text

import uvicorn
from starlette.applications import Starlette
from starlette.requests import Request
from starlette.routing import Route, Mount

from mcp.server.fastmcp import FastMCP
from mcp.shared.exceptions import McpError
from mcp.types import ErrorData, INTERNAL_ERROR, INVALID_PARAMS
from mcp.server.sse import SseServerTransport

# Create an MCP server instance with an identifier ("wiki")
mcp = FastMCP("wiki")

@mcp.tool()
def extract_wikipedia_article(url: str) -> str:
    """
    Retrieves and processes a Wikipedia article from the given URL, extracting
    the main content and converting it to Markdown format.

    Usage:
        extract_wikipedia_article("https://en.wikipedia.org/wiki/Gemini_(chatbot)")
    """
    try:
        if not url.startswith("http"):
            raise ValueError("URL must begin with http or https protocol.")

        response = requests.get(url, timeout=8)
        if response.status_code != 200:
            raise McpError(
                ErrorData(
                    code=INTERNAL_ERROR,
                    message=f"Unable to access the article. Server returned status: {response.status_code}"
                )
            )
        soup = BeautifulSoup(response.text, "html.parser")
        content_div = soup.find("div", {"id": "mw-content-text"})
        if not content_div:
            raise McpError(
                ErrorData(
                    code=INVALID_PARAMS,
                    message="The main article content section was not found at the specified Wikipedia URL."
                )
            )
        markdown_text = html2text(str(content_div))
        return markdown_text

    except Exception as e:
        raise McpError(ErrorData(code=INTERNAL_ERROR, message=f"An unexpected error occurred: {str(e)}")) from e

# SSE transport: clients POST requests to /messages/ and receive streamed responses over /sse
sse = SseServerTransport("/messages/")

async def handle_sse(request: Request) -> None:
    _server = mcp._mcp_server
    async with sse.connect_sse(
        request.scope,
        request.receive,
        request._send,
    ) as (reader, writer):
        await _server.run(reader, writer, _server.create_initialization_options())

app = Starlette(
    debug=True,
    routes=[
        Route("/sse", endpoint=handle_sse),
        Mount("/messages/", app=sse.handle_post_message),
    ],
)

if __name__ == "__main__":
    uvicorn.run(app, host="localhost", port=8001)
```

To start the server, run the command uv run server.py.
Bonus tip: to debug the server using MCP Inspector, execute the command uv run mcp dev server.py.
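
Before wiring the server into ADK, you can also sanity-check it with a small standalone MCP client. The following is a minimal sketch using the MCP Python SDK's SSE client; the file name test_client.py is just an illustrative choice, not part of the original setup.

```python
# File test_client.py (hypothetical quick test for the SSE server above)
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main():
    # Connect to the SSE endpoint exposed by server.py
    async with sse_client("http://localhost:8001/sse") as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            # List the tools the server advertises
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Call the Wikipedia extraction tool directly
            result = await session.call_tool(
                "extract_wikipedia_article",
                {"url": "https://en.wikipedia.org/wiki/Gemini_(chatbot)"},
            )
            print(result.content[0].text[:500])


if __name__ == "__main__":
    asyncio.run(main())
```

If the tool listing and the call both succeed, the server is ready to be attached to an ADK agent.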

Step 2: Attach the MCP server while creating ADK agents
The following explains the Python code in the file agent.py:

It uses MCPToolset.from_server with SseServerParams to establish an SSE connection to a URI endpoint. For this demo we will use http://localhost:8001/sse, but in production this would be a remote server.

It creates an ADK agent (LlmAgent) and calls get_tools_async to fetch the tools from the MCP server.

```python
# File agent.py

import asyncio
import json
from typing import Any

from dotenv import load_dotenv
from google.adk.agents.llm_agent import LlmAgent
from google.adk.artifacts.in_memory_artifact_service import (
    InMemoryArtifactService,  # Optional
)
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.adk.tools.mcp_tool.mcp_toolset import (
    MCPToolset,
    SseServerParams,
)
from google.genai import types
from rich import print

load_dotenv()


async def get_tools_async():
    """Gets tools from the Wikipedia MCP server."""
    tools, exit_stack = await MCPToolset.from_server(
        connection_params=SseServerParams(
            url="http://localhost:8001/sse",
        )
    )
    print("MCP Toolset created successfully.")
    return tools, exit_stack


async def get_agent_async():
    """Creates an ADK Agent equipped with tools from the MCP Server."""
    tools, exit_stack = await get_tools_async()
    print(f"Fetched {len(tools)} tools from MCP server.")
    root_agent = LlmAgent(
        model="gemini-2.0-flash",
        name="assistant",
        instruction="""Help user extract and summarize the article from wikipedia link.
        Use the following tools to extract wikipedia article:
        - extract_wikipedia_article

        Once you retrieve the article, always summarize it in a few sentences for the user.
        """,
        tools=tools,
    )
    return root_agent, exit_stack


root_agent = get_agent_async()
```

Step 3: Test your agent
We will use the ADK developer tool to test the agent.
Create the following directory structure:

```
.  # <-- Your current directory
├── adk-agent
│   ├── __init__.py
│   └── agent.py
├── .env
```

The contents of __init__.py and .env are as follows:

```
# .env
GOOGLE_GENAI_USE_VERTEXAI="True"
GOOGLE_CLOUD_PROJECT=<YOUR_PROJECT_ID>
GOOGLE_CLOUD_LOCATION="us-central1"
```

```python
# __init__.py
from . import agent
```

Start the UI with the following command:

```
uv run adk web
```

This opens the ADK developer tool interface in your browser, where you can select your agent and chat with it to test the Wikipedia extraction flow.
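
Alternatively, you can drive the agent from a script instead of the web UI, using the Runner and InMemorySessionService classes that agent.py already imports. The following is a rough sketch only; constructor and session-creation details vary between ADK releases, so treat the exact arguments as assumptions to verify against your installed version.

```python
# Hypothetical programmatic runner (sketch only; verify against your ADK version)
import asyncio

from google.adk.artifacts.in_memory_artifact_service import InMemoryArtifactService
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types

# Assumes this script lives next to agent.py inside the agent directory
from agent import get_agent_async


async def main():
    session_service = InMemorySessionService()
    artifact_service = InMemoryArtifactService()
    # Note: in newer ADK releases create_session is async and must be awaited
    session = session_service.create_session(
        app_name="wiki_app", user_id="user_1", state={}
    )

    agent, exit_stack = await get_agent_async()
    runner = Runner(
        app_name="wiki_app",
        agent=agent,
        artifact_service=artifact_service,
        session_service=session_service,
    )

    query = "Summarize https://en.wikipedia.org/wiki/Gemini_(chatbot)"
    content = types.Content(role="user", parts=[types.Part(text=query)])

    # Stream events from the agent and print the final response
    async for event in runner.run_async(
        session_id=session.id, user_id=session.user_id, new_message=content
    ):
        if event.is_final_response():
            print(event.content.parts[0].text)

    # Close the connection to the MCP server when done
    await exit_stack.aclose()


if __name__ == "__main__":
    asyncio.run(main())
```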

Streamable HTTP
It is worth noting that in March 2025, the MCP specification introduced a new transport protocol called Streamable HTTP. The Streamable HTTP transport allows a server to function as an independent process managing multiple client connections via HTTP POST and GET requests. Servers can optionally implement Server-Sent Events (SSE) for streaming multiple messages, enabling support for basic MCP servers as well as more advanced servers with streaming and server-initiated communication.
The following code demonstrates how to implement a Streamable HTTP server, where the tool extract_wikipedia_article will return a dummy string to simplify the code.

```python
# File server.py

import contextlib
import logging
from collections.abc import AsyncIterator

import anyio
import mcp.types as types
from mcp.server.lowlevel import Server
from mcp.server.streamable_http_manager import StreamableHTTPSessionManager
from starlette.applications import Starlette
from starlette.routing import Mount
from starlette.types import Receive, Scope, Send
import uvicorn

logger = logging.getLogger(__name__)


app = Server("mcp-streamable-http-stateless-demo")


@app.call_tool()
async def call_tool(
    name: str, arguments: dict
) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
    # Check if the tool is extract-wikipedia-article
    if name == "extract-wikipedia-article":
        # Return dummy content for the Wikipedia article
        return [
            types.TextContent(
                type="text",
                text="This is the article …",
            )
        ]

    # For other tools, keep the existing notification logic
    ctx = app.request_context
    interval = arguments.get("interval", 1.0)
    count = arguments.get("count", 5)
    caller = arguments.get("caller", "unknown")

    # Send the specified number of notifications with the given interval
    for i in range(count):
        await ctx.session.send_log_message(
            level="info",
            data=f"Notification {i + 1}/{count} from caller: {caller}",
            logger="notification_stream",
            related_request_id=ctx.request_id,
        )
        if i < count - 1:  # Don't wait after the last notification
            await anyio.sleep(interval)

    return [
        types.TextContent(
            type="text",
            text=(
                f"Sent {count} notifications with {interval}s interval"
                f" for caller: {caller}"
            ),
        )
    ]


@app.list_tools()
async def list_tools() -> list[types.Tool]:
    return [
        types.Tool(
            name="extract-wikipedia-article",
            description="Extracts the main content of a Wikipedia article",
            inputSchema={
                "type": "object",
                "required": ["url"],
                "properties": {
                    "url": {
                        "type": "string",
                        "description": "URL of the Wikipedia article to extract",
                    },
                },
            },
        )
    ]


session_manager = StreamableHTTPSessionManager(
    app=app,
    event_store=None,
    stateless=True,
)


async def handle_streamable_http(scope: Scope, receive: Receive, send: Send) -> None:
    await session_manager.handle_request(scope, receive, send)


@contextlib.asynccontextmanager
async def lifespan(app: Starlette) -> AsyncIterator[None]:
    """Context manager for session manager."""
    async with session_manager.run():
        logger.info("Application started with StreamableHTTP session manager!")
        try:
            yield
        finally:
            logger.info("Application shutting down…")


# Use a distinct name so the MCP Server instance above is not shadowed
starlette_app = Starlette(
    debug=True,
    routes=[
        Mount("/mcp", app=handle_streamable_http),
    ],
    lifespan=lifespan,
)

if __name__ == "__main__":
    uvicorn.run(starlette_app, host="localhost", port=3000)
```

You can start the Streamable HTTP MCP server by running the following:

```
# Start the server
uv run server.py
```

To debug with MCP Inspector, select Streamable HTTP and fill in the MCP Server URL http://localhost:3000/mcp.
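
You can also exercise the server from code rather than the Inspector. The MCP Python SDK ships a Streamable HTTP client; the following is a minimal sketch assuming a recent mcp release that includes mcp.client.streamable_http (newer than the 1.5.0 pin used earlier in this post).

```python
# Hypothetical quick test for the Streamable HTTP server (requires a recent mcp release)
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def main():
    # The client sends requests and receives streamed responses over the single /mcp endpoint
    async with streamablehttp_client("http://localhost:3000/mcp") as (
        read_stream,
        write_stream,
        _get_session_id,
    ):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            result = await session.call_tool(
                "extract-wikipedia-article",
                {"url": "https://en.wikipedia.org/wiki/Gemini_(chatbot)"},
            )
            print(result.content[0].text)


if __name__ == "__main__":
    asyncio.run(main())
```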

Authentication
For production deployments of MCP servers, robust authentication is a critical security consideration. As this field is under active development at the time of writing, we recommend referring to the MCP specification on Authentication for more information.
For an enterprise-grade API governance system that, similar to MCP, can expose APIs as agent tools, Google Cloud offers the following:

Apigee centralizes and manages any API, with full control, versioning, and governance.

API Hub organizes metadata and documentation for any API.

Application Integration supports many existing API connections, with user access control.

ADK supports these Google Cloud managed tools with about the same amount of code as the MCP setup shown above.

Get started 
To get started today, read the ADK documentation. With ADK, you can create your own agent with access to the MCP servers available in the open community.

AI Summary and Description: Yes

Summary: The text discusses the implementation and integration of the Model Context Protocol (MCP) for AI-powered agents, highlighting its standardized approach to accessing external data sources and improving communication with these agents. This is particularly relevant for developers in AI, cloud, and infrastructure security domains, as it outlines technologies that can enhance the efficacy and security of large language models (LLMs) and their applications.

Detailed Description:
The text provides an in-depth guide on utilizing the Model Context Protocol (MCP) to build sophisticated AI agents that can efficiently retrieve and utilize external information. Below are the major points covered:

– **Model Context Protocol (MCP)**: A framework designed to standardize the interaction of large language models (LLMs) with external data sources and APIs.
– It replaces fragmented integrations with uniform methods, making it easier to deploy sophisticated agents.

– **Integration with Google’s Agent Development Kit (ADK)**: The guide addresses the integration challenges when connecting ADK agents to MCP servers, particularly those hosted externally.

– **Transport Mechanisms**:
– The text introduces **Server-Sent Events (SSE)** as the initial communication method used for establishing a persistent HTTP connection, allowing streaming responses from the server.
– The newer transport protocol, **Streamable HTTP**, is also introduced, promising enhanced capabilities for client-server communication by merging request and response handling into a single endpoint.

– **Implementation Steps**:
– **Creating an MCP Server**: Developers are guided on setting up an MCP server using Python, including required libraries and the foundational code for server operations.
– **Developing ADK Agents**: Instructions to link agents to MCP servers using SSE connection for tool retrieval are detailed.
– **Testing and Running the Agent**: The text outlines how to structure project directories and run testing commands to ensure proper agent functionality.

– **Security Considerations**: It emphasizes the need for robust authentication in production deployments to secure the MCP servers, referring to additional documentation for enterprise-grade API governance.

– **Practical Implications**:
– The standardized MCP approach can vastly simplify the complexities involved in enabling LLMs to interact with varied external data sources.
– With focus on integration and transport mechanisms, the content serves as a technical foundation for developers looking to harness the potential of generative AI tools securely.

Overall, this text is a comprehensive blueprint for developers aiming to leverage AI capabilities securely and efficiently in their applications by adhering to standardized communication protocols and security measures.