Tag: function calling

  • AWS News Blog: AWS announces Pixtral Large 25.02 model in Amazon Bedrock serverless

    Source URL: https://aws.amazon.com/blogs/aws/aws-announces-pixtral-large-25-02-model-in-amazon-bedrock-serverless/
    Source: AWS News Blog
    Title: AWS announces Pixtral Large 25.02 model in Amazon Bedrock serverless
    Feedly Summary: Mistral AI’s multimodal model, Pixtral Large 25.02, is now available in Amazon Bedrock as a fully managed, serverless offering with cross-Region inference support, multilingual capabilities, and a 128K context window that can process images alongside…

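    Since this entry is filed under the function calling tag, here is a minimal sketch of tool use with a Bedrock model through the Converse API in boto3. The model ID, the Region, and the get_weather tool are illustrative assumptions, not values taken from the announcement.

      # Sketch: tool use (function calling) via the Amazon Bedrock Converse API.
      # The model ID below is an assumed identifier for Pixtral Large 25.02; check
      # the Bedrock console or docs for the exact (cross-Region) profile ID.
      import boto3

      client = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed Region

      tool_config = {
          "tools": [{
              "toolSpec": {
                  "name": "get_weather",  # hypothetical tool for illustration
                  "description": "Return the current weather for a city.",
                  "inputSchema": {"json": {
                      "type": "object",
                      "properties": {"city": {"type": "string"}},
                      "required": ["city"],
                  }},
              }
          }]
      }

      response = client.converse(
          modelId="us.mistral.pixtral-large-2502-v1:0",  # assumed ID, verify before use
          messages=[{"role": "user", "content": [{"text": "What's the weather in Paris?"}]}],
          toolConfig=tool_config,
      )

      # When the model decides to call the tool, the request arrives as a toolUse block.
      if response["stopReason"] == "tool_use":
          for block in response["output"]["message"]["content"]:
              if "toolUse" in block:
                  print(block["toolUse"]["name"], block["toolUse"]["input"])
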
  • Simon Willison’s Weblog: Function calling with Gemma

    Source URL: https://simonwillison.net/2025/Mar/26/function-calling-with-gemma/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Function calling with Gemma
    Feedly Summary: Google’s Gemma 3 model (the 27B variant is particularly capable, I’ve been trying it out via Ollama) supports function calling exclusively through prompt engineering. The official documentation describes two recommended prompts – both of them suggest that…

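    As the post describes, Gemma exposes no native tool-calling API, so the function schema goes into the prompt and the caller parses the reply. The sketch below runs against a local Ollama server; the prompt wording is an assumption rather than the documentation's recommended template.

      # Sketch: prompt-engineered function calling with Gemma 3 via a local Ollama server.
      import json
      import requests

      SYSTEM_PROMPT = (
          "You have access to one function: get_weather(city: str), which returns "
          "the current weather for the given city. If the request needs it, reply "
          'with ONLY a JSON object such as {"name": "get_weather", "arguments": '
          '{"city": "..."}} and nothing else. Otherwise answer normally.'
      )

      resp = requests.post(
          "http://localhost:11434/api/chat",  # Ollama's default local endpoint
          json={
              "model": "gemma3:27b",
              "stream": False,
              "messages": [
                  {"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": "What's the weather like in Oslo?"},
              ],
          },
          timeout=120,
      )
      reply = resp.json()["message"]["content"].strip()

      # The "function call" is only text, so try to parse it as JSON.
      try:
          call = json.loads(reply)
          print("model wants to call:", call["name"], call["arguments"])
      except json.JSONDecodeError:
          print("plain answer:", reply)
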
  • Hacker News: Gemma3 Function Calling

    Source URL: https://ai.google.dev/gemma/docs/capabilities/function-calling
    Source: Hacker News
    Title: Gemma3 Function Calling
    AI Summary and Description: Yes
    Summary: The provided text discusses function calling with a generative AI model named Gemma, including its structure, usage, and recommendations for code execution. This information is critical for professionals working with AI systems, particularly in understanding how…

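    The Gemma docs linked above also cover what to do with the parsed call; a hedged sketch of that dispatch step follows. The registry and the stub function are hypothetical; the point is validating the requested name before executing anything.

      # Sketch: route a parsed function call (e.g. from the Ollama example above)
      # to local Python code. The registry contents are hypothetical stubs.

      def get_weather(city: str) -> str:
          return f"Sunny and 18°C in {city}"  # stub result for illustration

      REGISTRY = {"get_weather": get_weather}

      def dispatch(call: dict) -> str:
          name = call.get("name")
          args = call.get("arguments", {})
          if name not in REGISTRY:
              raise ValueError(f"model requested unknown function: {name!r}")
          return REGISTRY[name](**args)

      print(dispatch({"name": "get_weather", "arguments": {"city": "Oslo"}}))
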
  • Simon Willison’s Weblog: OpenAI platform: o1-pro

    Source URL: https://simonwillison.net/2025/Mar/19/o1-pro/
    Source: Simon Willison’s Weblog
    Title: OpenAI platform: o1-pro
    Feedly Summary: OpenAI have a new most-expensive model: o1-pro can now be accessed through their API at a hefty $150/million tokens for input and $600/million tokens for output. That’s 10x the price of their o1 and o1-preview models and a full…

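    A quick back-of-the-envelope check of those prices, with made-up token counts purely for illustration:

      # Cost of one hypothetical o1-pro call at the quoted prices:
      # $150 per million input tokens, $600 per million output tokens.
      INPUT_PER_M = 150.0
      OUTPUT_PER_M = 600.0

      input_tokens = 10_000   # hypothetical prompt size
      output_tokens = 5_000   # hypothetical response size

      cost = (input_tokens / 1_000_000) * INPUT_PER_M \
           + (output_tokens / 1_000_000) * OUTPUT_PER_M
      print(f"${cost:.2f}")   # -> $4.50 for this single call
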
  • Simon Willison’s Weblog: Notes on Google’s Gemma 3

    Source URL: https://simonwillison.net/2025/Mar/12/gemma-3/
    Source: Simon Willison’s Weblog
    Title: Notes on Google’s Gemma 3
    Feedly Summary: Google’s Gemma team released an impressive new model today (under their not-open-source Gemma license). Gemma 3 comes in four sizes – 1B, 4B, 12B, and 27B – and while 1B is text-only the larger three models are all multi-modal for…

  • Hacker News: Mistral Small 3

    Source URL: https://mistral.ai/news/mistral-small-3/
    Source: Hacker News
    Title: Mistral Small 3
    AI Summary and Description: Yes
    Summary: The text introduces Mistral Small 3, a new 24B-parameter model optimized for latency, designed for generative AI tasks. It highlights the model’s competitive performance compared to larger models, its suitability for local deployment, and its potential…