Tag: Gemma 3

  • Docker: LoRA Explained: Faster, More Efficient Fine-Tuning with Docker

    Source URL: https://www.docker.com/blog/lora-explained/
    Feedly Summary: Fine-tuning a language model doesn’t have to be daunting. In our previous post on fine-tuning models with Docker Offload and Unsloth, we walked through how to train small, local models efficiently using Docker’s familiar workflows. This time, we’re narrowing…
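The core idea behind the LoRA post above can be sketched in a few lines: a frozen pretrained weight matrix W is adapted through a low-rank product B·A, so only the small A and B matrices are trained. This is an illustrative NumPy sketch of that idea, not the Unsloth or Docker implementation; the dimensions and rank are made up for the example.

```python
import numpy as np

# Minimal LoRA sketch (illustrative only, not the Unsloth/Docker code).
# A frozen weight matrix W (d_out x d_in) is adapted by a low-rank update
# B @ A, where A is (r x d_in) and B is (d_out x r), with r << d_in, d_out.
rng = np.random.default_rng(0)
d_in, d_out, r = 512, 512, 8

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weights
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))                   # trainable, zero init: no change at start

def lora_forward(x):
    # Effective weights are W + B @ A; only A and B would receive gradients.
    return (W + B @ A) @ x

full_params = W.size              # what full fine-tuning would update
lora_params = A.size + B.size     # what LoRA updates instead
print(f"full fine-tune params: {full_params:,}")
print(f"LoRA params:           {lora_params:,}")
```

With these example shapes, LoRA trains 8,192 parameters instead of 262,144 (about 3%), which is where the "faster, more efficient" claim comes from.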

  • Docker: Fine-Tuning Local Models with Docker Offload and Unsloth

    Source URL: https://www.docker.com/blog/fine-tuning-models-with-offload-and-unsloth/
    Feedly Summary: I’ve been experimenting with local models for a while now, and the progress in making them accessible has been exciting. Initial experiences are often fantastic: many models, like Gemma 3 270M, are lightweight enough to run on common hardware.…

  • Cloud Blog: Agent Factory Recap: Deep Dive into Gemini CLI with Taylor Mullen

    Source URL: https://cloud.google.com/blog/topics/developers-practitioners/agent-factory-recap-deep-dive-into-gemini-cli-with-taylor-mullen/
    Feedly Summary: In the latest episode of the Agent Factory podcast, Amit Miraj and I took a deep dive into the Gemini CLI. We were joined by the creator of the Gemini CLI, Taylor Mullen, who shared…

  • Simon Willison’s Weblog: Introducing EmbeddingGemma

    Source URL: https://simonwillison.net/2025/Sep/4/embedding-gemma/#atom-everything
    Feedly Summary: Brand new open weights (under the slightly janky Gemma license) 308M parameter embedding model from Google: Based on the Gemma 3 architecture, EmbeddingGemma is trained on 100+ languages and is small enough to run in less than 200MB of RAM with…
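An embedding model like EmbeddingGemma maps text to fixed-length vectors that are then compared by similarity, typically cosine similarity. The sketch below shows only that comparison step, with made-up vectors standing in for real EmbeddingGemma outputs.

```python
import numpy as np

# Sketch of how embedding vectors are compared, e.g. for semantic search.
# The vectors here are invented; a real pipeline would obtain them from an
# embedding model such as EmbeddingGemma.
def cosine_similarity(a, b):
    # Cosine of the angle between the two vectors: 1.0 = same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([0.9, 0.1, 0.0])
doc_a = np.array([0.8, 0.2, 0.1])   # points in a similar direction to the query
doc_b = np.array([0.0, 0.1, 0.9])   # mostly orthogonal to the query

print(cosine_similarity(query, doc_a))  # high: doc_a is "semantically close"
print(cosine_similarity(query, doc_b))  # low: doc_b is unrelated
```

Ranking documents by this score against a query vector is the basic retrieval loop that small on-device embedding models are built to serve.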

  • The Register: Little LLM on the RAM: Google’s Gemma 270M hits the scene

    Source URL: https://www.theregister.com/2025/08/15/little_llm_on_the_ram/
    Feedly Summary: A tiny model trained on trillions of tokens, ready for specialized tasks. Google has unveiled a pint-sized new addition to its “open” large language model lineup: Gemma 3 270M.…

  • Slashdot: Google Releases Pint-Size Gemma Open AI Model

    Source URL: https://tech.slashdot.org/story/25/08/14/2150230/google-releases-pint-size-gemma-open-ai-model?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Feedly Summary: Google has introduced the Gemma 3 270M, a compact AI model optimized for local deployment, which offers significant advantages in terms of privacy and efficiency. While it may not match the performance of larger…

  • Simon Willison’s Weblog: Introducing Gemma 3 270M: The compact model for hyper-efficient AI

    Source URL: https://simonwillison.net/2025/Aug/14/gemma-3-270m/#atom-everything
    Feedly Summary: New from Google: Gemma 3 270M, a compact, 270-million parameter model designed from the ground up for task-specific fine-tuning with strong instruction-following and text structuring…
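The 270M parameter count is what makes these items call the model "compact" and suitable for common hardware. A rough back-of-the-envelope sketch of parameter memory at common precisions (my arithmetic, not figures from any of the posts; it ignores activations, KV cache, and runtime overhead):

```python
# Back-of-the-envelope parameter memory for a 270M-parameter model.
# Illustrative arithmetic only: real memory use also includes activations,
# KV cache, and runtime overhead.
params = 270_000_000

def param_mb(bytes_per_param):
    # Memory in MiB occupied by the weights alone at a given precision.
    return params * bytes_per_param / (1024 ** 2)

for name, bpp in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{name}: ~{param_mb(bpp):.0f} MB")
```

At 4-bit quantization the weights alone land well under 200 MB, which is consistent with the "runs on common hardware" framing in the coverage above.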

  • Cloud Blog: Announcements for AI Hypercomputer: The latest infrastructure news for ML practitioners

    Source URL: https://cloud.google.com/blog/products/ai-machine-learning/q2-2025-ai-hypercomputer-updates/
    Feedly Summary: Curious about the latest in AI infrastructure from Google Cloud? Every three months we share a roundup of the latest AI Hypercomputer news, resources, events, learning opportunities, and more. Read on to learn new ways…