Source URL: https://simonwillison.net/2025/Mar/26/function-calling-with-gemma/#atom-everything
Source: Simon Willison’s Weblog
Title: Function calling with Gemma
Feedly Summary: Function calling with Gemma
Google’s Gemma 3 model (the 27B variant is particularly capable, I’ve been trying it out via Ollama) supports function calling exclusively through prompt engineering. The official documentation describes two recommended prompts – both pass the tool definitions in as JSON schema, but they differ in the format the model should use to request tool executions.
The first prompt uses Python-style function calling syntax:
You have access to functions. If you decide to invoke any of the function(s),
you MUST put it in the format of [func_name1(params_name1=params_value1, params_name2=params_value2…), func_name2(params)]
You SHOULD NOT include any other text in the response if you call a function
(Always love seeing CAPITALS for emphasis in prompts, makes me wonder if they proved to themselves that capitalization makes a difference in this case.)
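The Python-style format can be parsed with Python's own `ast` module rather than regexes. A minimal sketch (the `parse_python_style_calls` helper and the example function names are mine, not from the Gemma docs; real model output may need leading/trailing text stripped first):

```python
import ast

def parse_python_style_calls(output: str):
    """Parse output like [get_weather(city="London"), get_time(zone="UTC")]
    into a list of (function_name, kwargs) tuples."""
    tree = ast.parse(output.strip(), mode="eval")
    if not isinstance(tree.body, ast.List):
        raise ValueError("expected a list of function calls")
    calls = []
    for node in tree.body.elts:
        if not isinstance(node, ast.Call) or not isinstance(node.func, ast.Name):
            raise ValueError("expected simple name(kwarg=value, ...) calls")
        # literal_eval only accepts literals, so arbitrary code in
        # argument values is rejected rather than executed
        kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in node.keywords}
        calls.append((node.func.id, kwargs))
    return calls
```

Using `ast.literal_eval` for the argument values means the model can only pass literal data, which is safer than `eval` on untrusted model output.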
The second variant uses JSON instead:
You have access to functions. If you decide to invoke any of the function(s),
you MUST put it in the format of {"name": function name, "parameters": dictionary of argument name and its value}
You SHOULD NOT include any other text in the response if you call a function
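The JSON variant is even simpler to handle: parse the object and dispatch to a registry of tool functions. A hypothetical sketch (the `TOOLS` registry and `get_weather` tool are my own illustrative examples):

```python
import json

# Hypothetical tool registry: maps tool names to callables.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def dispatch_json_call(output: str) -> str:
    """Parse model output like
    {"name": "get_weather", "parameters": {"city": "London"}}
    and invoke the matching registered tool."""
    call = json.loads(output)
    func = TOOLS[call["name"]]
    return func(**call.get("parameters", {}))
```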
This is a neat illustration of the fact that all of these fancy tool-using LLMs are still using effectively the same pattern as was described in the ReAct paper back in November 2022. Here’s my implementation of that pattern from March 2023.
Via Hacker News
Tags: prompt-engineering, google, generative-ai, llm-tool-use, gemma, ai, llms
AI Summary and Description: Yes
Summary: The text discusses the function calling capabilities of Google’s Gemma 3 model, emphasizing its reliance on prompt engineering and showcasing two distinct formats for invoking functions—one using Python syntax and the other in JSON format. This is significant for AI security professionals as it illustrates the evolution of prompt engineering techniques in generative AI, particularly in LLMs (large language models).
Detailed Description: The content focuses on the function calling mechanism of Google’s Gemma 3 model, specifically the 27B variant. The insights provided are crucial for understanding how generative AI can interact with functions through carefully crafted prompts.
Key points include:
– **Function Calling Syntax:**
– Two primary syntaxes for invoking functions in the Gemma 3 model are presented:
1. **Python-style Function Calling:**
– Format: `[func_name1(params_name1=params_value1, params_name2=params_value2…), func_name2(params)]`
– Clear instruction to avoid unnecessary text when initiating a function call, which can reduce misinterpretation during execution.
2. **JSON Format:**
– Format: `{"name": function name, "parameters": dictionary of argument name and its value}`
– Again emphasizes the restriction of including only the JSON object during function invocation.
– **Historical Context:**
– The author draws a connection between current practices and the patterns outlined in the ReAct paper from November 2022, underscoring a continuity in the approach to tool utilization in LLMs.
– **Practical Implication for Professionals:**
– Understanding these formats is essential for developers and security professionals working with AI to create secure, efficient prompts that leverage the full capabilities of models like Gemma.
– As usage of generative AI rises, ensuring proper function invocation syntax becomes critical for maintaining both functionality and security.
This discussion on prompt engineering elucidates best practices and deepens the understanding of how LLMs operate, providing foundational knowledge crucial for AI security, compliance, and infrastructure professionals.