Tag: Huggingface
-
Simon Willison’s Weblog: SmolVLM – small yet mighty Vision Language Model
Source URL: https://simonwillison.net/2024/Nov/28/smolvlm/#atom-everything
Source: Simon Willison’s Weblog
Title: SmolVLM – small yet mighty Vision Language Model
Feedly Summary: I’ve been having fun playing with this new vision model from the Hugging Face team behind SmolLM. They describe it as: […] a 2B VLM, SOTA for its memory…
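For a sense of what running SmolVLM looks like in practice, here is a minimal sketch using the standard transformers API. The model ID HuggingFaceTB/SmolVLM-Instruct is the published checkpoint on the Hugging Face hub; the image path and prompt are illustrative assumptions, not code from the linked post:

```python
# Minimal sketch: image + question -> text answer with SmolVLM.
# Assumes transformers, torch, and pillow are installed.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

processor = AutoProcessor.from_pretrained("HuggingFaceTB/SmolVLM-Instruct")
model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceTB/SmolVLM-Instruct",
    torch_dtype=torch.bfloat16,
)

# Build a chat-style prompt that interleaves an image with a question.
image = Image.open("photo.jpg")  # hypothetical local image
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

generated = model.generate(**inputs, max_new_tokens=200)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```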
-
Hacker News: Full LLM training and evaluation toolkit
Source URL: https://github.com/huggingface/smollm
Source: Hacker News
Title: Full LLM training and evaluation toolkit
Feedly Summary: Comments
AI Summary and Description: Yes
Summary: The text introduces SmolLM2, a family of compact language models with varying parameter counts designed for lightweight, on-device applications, and details how they can be utilized in different scenarios. Such advancements in AI…
-
Simon Willison’s Weblog: SmolLM2
Source URL: https://simonwillison.net/2024/Nov/2/smollm2/#atom-everything
Source: Simon Willison’s Weblog
Title: SmolLM2
Feedly Summary: New from Loubna Ben Allal and her research team at Hugging Face: SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough…
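As a rough sketch of what on-device use of these checkpoints looks like, the snippet below loads the instruct-tuned 1.7B variant with the standard transformers API. The model ID is taken from the Hugging Face hub (135M and 360M variants follow the same naming scheme); the prompt is an illustrative assumption:

```python
# Minimal sketch: chat-style generation with a SmolLM2 checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"  # 135M and 360M also exist
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16)

# Format the conversation with the model's chat template, then generate.
messages = [{"role": "user", "content": "What is the capital of France?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```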
-
Hacker News: 1-Click Models Powered by Hugging Face
Source URL: https://www.digitalocean.com/blog/one-click-models-on-do-powered-by-huggingface
Source: Hacker News
Title: 1-Click Models Powered by Hugging Face
Feedly Summary: Comments
AI Summary and Description: Yes
Summary: DigitalOcean has launched a new 1-Click Model deployment service powered by Hugging Face, termed HUGS on DO. This feature allows users to quickly deploy popular generative AI models on DigitalOcean GPU Droplets, aiming…
-
Simon Willison’s Weblog: lm.rs: run inference on Language Models locally on the CPU with Rust
Source URL: https://simonwillison.net/2024/Oct/11/lmrs/
Source: Simon Willison’s Weblog
Title: lm.rs: run inference on Language Models locally on the CPU with Rust
Feedly Summary: Impressive new LLM inference implementation in Rust by Samuel Vitorino. I tried it just now on an M2 Mac with 64GB…