Tag: hugging
- Simon Willison’s Weblog: Introducing Gemma 3n: The developer guide
  Source URL: https://simonwillison.net/2025/Jun/26/gemma-3n/
  Source: Simon Willison’s Weblog
  Title: Introducing Gemma 3n: The developer guide
  Feedly Summary: Introducing Gemma 3n: The developer guide. Extremely consequential new open weights model release from Google today: Multimodal by design: Gemma 3n natively supports image, audio, video, and text inputs and text outputs. Optimized for on-device: Engineered with a focus…
  (A hedged usage sketch follows below.)
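Since this entry highlights Gemma 3n's multimodal inputs, here is a minimal sketch of prompting it with an image plus a text question through Hugging Face transformers. The checkpoint id ("google/gemma-3n-E4B-it"), the "image-text-to-text" pipeline task, and the image URL are assumptions, not details from the post; the model card is the authoritative reference.

```python
# Hedged sketch: prompting Gemma 3n with an image plus a text question via
# Hugging Face transformers. The checkpoint id, the "image-text-to-text"
# pipeline task, and the image URL are assumptions, not details from the post.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="google/gemma-3n-E4B-it",  # assumed Hub id; check the model card
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder image
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"])  # full chat transcript including the model's reply
```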
- Simon Willison’s Weblog: model.yaml
  Source URL: https://simonwillison.net/2025/Jun/21/model-yaml/#atom-everything
  Source: Simon Willison’s Weblog
  Title: model.yaml
  Feedly Summary: model.yaml. From their GitHub repo it looks like this effort quietly launched a couple of months ago, driven by the LM Studio team. Their goal is to specify an “open standard for defining cross-platform, composable AI models”. A model can be defined using a…
- Simon Willison’s Weblog: Mistral-Small 3.2
  Source URL: https://simonwillison.net/2025/Jun/20/mistral-small-32/
  Source: Simon Willison’s Weblog
  Title: Mistral-Small 3.2
  Feedly Summary: Mistral-Small 3.2. Released on Hugging Face a couple of hours ago, so far there aren’t any quantizations to run it on a Mac but I’m sure those will emerge pretty quickly. This is a minor bump to Mistral Small 3.1, one of my…
- Docker: How to Build, Run, and Package AI Models Locally with Docker Model Runner
  Source URL: https://www.docker.com/blog/how-to-build-run-and-package-ai-models-locally-with-docker-model-runner/
  Source: Docker
  Title: How to Build, Run, and Package AI Models Locally with Docker Model Runner
  Feedly Summary: Introduction. As a Senior DevOps Engineer and Docker Captain, I’ve helped build AI systems for everything from retail personalization to medical imaging. One truth stands out: AI capabilities are core to modern infrastructure. This…
  (A hedged client sketch follows below.)
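Docker Model Runner exposes locally served models behind an OpenAI-compatible API, so a client sketch like the one below can talk to it. The base URL, port, and model reference here are assumptions rather than values from the post; consult the linked article and Docker's documentation for the settings that match your setup.

```python
# Hedged sketch: chatting with a locally served model through Docker Model Runner's
# OpenAI-compatible API. The base URL/port and the model reference are assumptions;
# consult the post and Docker's docs for the values that match your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:12434/engines/v1",  # assumed host-side endpoint
    api_key="not-needed-for-local-runner",         # local endpoint ignores the key
)

response = client.chat.completions.create(
    model="ai/smollm2",  # assumed model reference, pulled beforehand with `docker model pull`
    messages=[{"role": "user", "content": "In one sentence, what does Docker Model Runner do?"}],
)
print(response.choices[0].message.content)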
- Simon Willison’s Weblog: Magistral — the first reasoning model by Mistral AI
  Source URL: https://simonwillison.net/2025/Jun/10/magistral/
  Source: Simon Willison’s Weblog
  Title: Magistral — the first reasoning model by Mistral AI
  Feedly Summary: Magistral — the first reasoning model by Mistral AI. Mistral’s first reasoning model is out today, in two sizes. There’s a 24B Apache 2 licensed open-weights model called Magistral Small (actually Magistral-Small-2506), and a larger API-only…
  (A hedged usage sketch follows below.)
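Because the 24B Magistral Small weights are open, a straightforward way to try them is via Hugging Face transformers. The Hub path below is inferred from the "Magistral-Small-2506" name in the entry and is an assumption, as are the hardware and quantization requirements for a 24B model; check the model card before running.

```python
# Hedged sketch: prompting Magistral Small with Hugging Face transformers. The Hub
# path is inferred from the entry's "Magistral-Small-2506" name and is an assumption,
# as is the hardware/quantization needed to fit a 24B model; see the model card.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="mistralai/Magistral-Small-2506",  # assumed Hub path
    device_map="auto",
)

messages = [{"role": "user", "content": "Reason step by step: what is 17 * 24?"}]
result = pipe(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])  # the model's final chat turn
```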
- Simon Willison’s Weblog: Qwen3 Embedding
  Source URL: https://simonwillison.net/2025/Jun/8/qwen3-embedding/#atom-everything
  Source: Simon Willison’s Weblog
  Title: Qwen3 Embedding
  Feedly Summary: Qwen3 Embedding. New family of embedding models from Qwen, in three sizes: 0.6B, 4B, 8B – and two categories: Text Embedding and Text Reranking. The full collection can be browsed on Hugging Face. The smallest available model is the 0.6B Q8 one, which…
  (A hedged embedding sketch follows below.)
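A typical use of these models is ranking documents against a query by embedding similarity. Below is a minimal sketch using sentence-transformers; the Hub id is an assumption based on the "0.6B" size mentioned in the entry, so browse the Hugging Face collection for exact names.

```python
# Hedged sketch: embedding and ranking a few snippets with the smallest Qwen3
# embedding model via sentence-transformers. The Hub id is an assumption based on
# the "0.6B" size mentioned in the entry; browse the collection for exact names.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")  # assumed Hub id

documents = [
    "Gemma 3n is a multimodal open-weights model from Google.",
    "Docker Model Runner packages and serves models locally.",
]
query = "Which entry is about running models on your own machine?"

doc_embeddings = model.encode(documents)
query_embedding = model.encode(query)
scores = util.cos_sim(query_embedding, doc_embeddings)  # cosine similarity per document
print(scores)
```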
- Simon Willison’s Weblog: Comma v0.1 1T and 2T – 7B LLMs trained on openly licensed text
  Source URL: https://simonwillison.net/2025/Jun/7/comma/#atom-everything
  Source: Simon Willison’s Weblog
  Title: Comma v0.1 1T and 2T – 7B LLMs trained on openly licensed text
  Feedly Summary: It’s been a long time coming, but we finally have some promising LLMs to try out which are trained entirely on openly licensed text! EleutherAI released the Pile four and a half…
  (A hedged completion sketch follows below.)
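Comma v0.1 is a base model rather than a chat model, so the natural way to try it is plain text completion. The Hub id below is an assumption based on the "1T" run named in the entry; check the post for the actual repository.

```python
# Hedged sketch: plain text completion with Comma v0.1 (a base model, so no chat
# template). The Hub id is an assumption based on the "1T" run named in the entry;
# check the post for the actual repository.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="common-pile/comma-v0.1-1t",  # assumed Hub id
    device_map="auto",
)

completion = pipe("Openly licensed training data matters because", max_new_tokens=60)
print(completion[0]["generated_text"])
```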
- Simon Willison’s Weblog: The last six months in LLMs, illustrated by pelicans on bicycles
  Source URL: https://simonwillison.net/2025/Jun/6/six-months-in-llms/#atom-everything
  Source: Simon Willison’s Weblog
  Title: The last six months in LLMs, illustrated by pelicans on bicycles
  Feedly Summary: I presented an invited keynote at the AI Engineer World’s Fair in San Francisco this week. This is my third time speaking at the event – here are my talks from October 2023 and…
- Cloud Blog: Accelerate your gen AI: Deploy Llama4 & DeepSeek on AI Hypercomputer with new recipes
  Source URL: https://cloud.google.com/blog/products/ai-machine-learning/deploying-llama4-and-deepseek-on-ai-hypercomputer/
  Source: Cloud Blog
  Title: Accelerate your gen AI: Deploy Llama4 & DeepSeek on AI Hypercomputer with new recipes
  Feedly Summary: The pace of innovation in open-source AI is breathtaking, with models like Meta’s Llama4 and DeepSeek AI’s DeepSeek. However, deploying and optimizing large, powerful models can be complex and resource-intensive. Developers and…
- Cloud Blog: Building a Production Multimodal Fine-Tuning Pipeline
  Source URL: https://cloud.google.com/blog/topics/developers-practitioners/building-a-production-multimodal-fine-tuning-pipeline/
  Source: Cloud Blog
  Title: Building a Production Multimodal Fine-Tuning Pipeline
  Feedly Summary: Looking to fine-tune multimodal AI models for your specific domain but facing infrastructure and implementation challenges? This guide demonstrates how to overcome the multimodal implementation gap using Google Cloud and Axolotl, with a complete hands-on example fine-tuning Gemma 3 on…
  (A hedged launch sketch follows below.)
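The guide walks through an Axolotl-based fine-tuning run for Gemma 3; the sketch below only shows how such a run is typically launched. The config filename is hypothetical, and the real dataset, adapter, and Google Cloud machine settings live in the linked post.

```python
# Hedged sketch: launching an Axolotl fine-tuning run. The config filename is
# hypothetical; the guide's real Gemma 3 config (dataset, adapters, GCP machine
# settings) is in the linked post. `axolotl.cli.train` invoked via `accelerate
# launch` is Axolotl's standard entry point.
import subprocess

subprocess.run(
    ["accelerate", "launch", "-m", "axolotl.cli.train", "gemma3-multimodal.yaml"],
    check=True,  # raise if the training process exits with an error
)
```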