Tag: local
-
Docker: LoRA Explained: Faster, More Efficient Fine-Tuning with Docker
Source URL: https://www.docker.com/blog/lora-explained/
Source: Docker
Title: LoRA Explained: Faster, More Efficient Fine-Tuning with Docker
Feedly Summary: Fine-tuning a language model doesn’t have to be daunting. In our previous post on fine-tuning models with Docker Offload and Unsloth, we walked through how to train small, local models efficiently using Docker’s familiar workflows. This time, we’re narrowing…
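
The Feedly summary cuts off at the source. As a quick refresher on the technique the post covers, here is a minimal LoRA sketch in PyTorch: the pretrained weight matrix stays frozen while two small low-rank factors A and B are trained, and their scaled product is added to the layer output. The class name, rank, and scaling values below are illustrative choices, not code from the Docker post.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA wrapper: y = base(x) + (alpha / r) * x @ A^T @ B^T.

    The frozen pretrained weights stay untouched; only the low-rank
    factors A (r x in_features) and B (out_features x r) are trained.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pretrained layer
            p.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the trainable low-rank update B @ A.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap one projection layer and check that only the LoRA
# factors contribute trainable parameters.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # 12288 (2 * 8 * 768) vs 589824 for full W
```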
-
Cloud Blog: Introducing Gemini Enterprise
Source URL: https://cloud.google.com/blog/products/ai-machine-learning/introducing-gemini-enterprise/
Source: Cloud Blog
Title: Introducing Gemini Enterprise
Feedly Summary: (Editor’s note: This is a shortened version of remarks delivered by Thomas Kurian announcing Gemini Enterprise at an event today) AI is presenting a once-in-a-generation opportunity to transform how you work, how you run your business, and what you build for your customers. But…
-
Docker: Fine-Tuning Local Models with Docker Offload and Unsloth
Source URL: https://www.docker.com/blog/fine-tuning-models-with-offload-and-unsloth/
Source: Docker
Title: Fine-Tuning Local Models with Docker Offload and Unsloth
Feedly Summary: I’ve been experimenting with local models for a while now, and the progress in making them accessible has been exciting. Initial experiences are often fantastic; many models, like Gemma 3 270M, are lightweight enough to run on common hardware.…
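
This summary is also truncated at the source. As a rough illustration of the workflow it describes (LoRA fine-tuning a small local model such as Gemma 3 270M with Unsloth), the sketch below assumes Unsloth's FastLanguageModel API; the model id, toy dataset, and hyperparameters are assumptions for illustration, and the Docker Offload setup that would wrap this script is omitted.

```python
# Sketch of a small LoRA fine-tune with Unsloth; the model id, prompts,
# and hyperparameters are illustrative assumptions, not taken from the post.
import torch
from unsloth import FastLanguageModel

# Load a small base model in 4-bit so it fits on common hardware.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-270m-it",  # assumed repo id
    max_seq_length=1024,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these low-rank weights receive gradients.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# A toy instruction/response set, just to exercise the training loop.
examples = [
    "### Question: What is Docker?\n### Answer: A container platform.",
    "### Question: What is LoRA?\n### Answer: Low-rank adapter fine-tuning.",
]

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-4
)

model.train()
for step, text in enumerate(examples):
    batch = tokenizer(text, return_tensors="pt").to(model.device)
    # Causal LM objective: labels are the input ids themselves.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"step {step}: loss {loss.item():.3f}")
```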