Tag: edge-llms

  • Simon Willison’s Weblog: Open WebUI

    Source URL: https://simonwillison.net/2024/Dec/27/open-webui/#atom-everything Source: Simon Willison’s Weblog Title: Open WebUI Feedly Summary: Open WebUI I tried out this open source (MIT licensed, JavaScript and Python) localhost UI for accessing LLMs today for the first time. It’s very nicely done. I ran it with uvx like this: uvx --python 3.11 open-webui serve On first launch it…
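
    As a minimal sketch, the launch above can be scripted from Python, assuming uv (which provides uvx) is already installed:

        # Launch Open WebUI the same way as the uvx one-liner above.
        import subprocess

        subprocess.run(
            ["uvx", "--python", "3.11", "open-webui", "serve"],
            check=True,  # raise CalledProcessError if the server exits abnormally
        )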

  • Simon Willison’s Weblog: I can now run a GPT-4 class model on my laptop

    Source URL: https://simonwillison.net/2024/Dec/9/llama-33-70b/ Source: Simon Willison’s Weblog Title: I can now run a GPT-4 class model on my laptop Feedly Summary: Meta’s new Llama 3.3 70B is a genuinely GPT-4 class Large Language Model that runs on my laptop. Just 20 months ago I was amazed to see something that felt GPT-3 class run on…
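
    A hedged sketch of what running it locally can look like, assuming Ollama is installed and ollama pull llama3.3 has already fetched a quantized build of the 70B weights (the model tag and client usage are assumptions, not taken from the post):

        # Chat with Llama 3.3 70B through a local Ollama server.
        import ollama

        response = ollama.chat(
            model="llama3.3",
            messages=[{"role": "user", "content": "What makes a model GPT-4 class?"}],
        )
        print(response["message"]["content"])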

  • Simon Willison’s Weblog: Meta AI release Llama 3.3

    Source URL: https://simonwillison.net/2024/Dec/6/llama-33/#atom-everything Source: Simon Willison’s Weblog Title: Meta AI release Llama 3.3 Feedly Summary: Meta AI release Llama 3.3 This new Llama-3.3-70B-Instruct model from Meta AI makes some bold claims: “This model delivers similar performance to Llama 3.1 405B with cost effective inference that’s feasible to run locally on common developer workstations.” I have…

  • Simon Willison’s Weblog: SmolVLM – small yet mighty Vision Language Model

    Source URL: https://simonwillison.net/2024/Nov/28/smolvlm/#atom-everything Source: Simon Willison’s Weblog Title: SmolVLM – small yet mighty Vision Language Model Feedly Summary: SmolVLM – small yet mighty Vision Language Model I’ve been having fun playing with this new vision model from the Hugging Face team behind SmolLM. They describe it as: […] a 2B VLM, SOTA for its memory…
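
    A minimal sketch of querying SmolVLM, following the standard transformers vision-chat pattern; the model id HuggingFaceTB/SmolVLM-Instruct and the message format are assumptions based on the Hugging Face model card, not on this post:

        # Describe a local image with SmolVLM via transformers.
        from PIL import Image
        from transformers import AutoProcessor, AutoModelForVision2Seq

        model_id = "HuggingFaceTB/SmolVLM-Instruct"
        processor = AutoProcessor.from_pretrained(model_id)
        model = AutoModelForVision2Seq.from_pretrained(model_id)

        image = Image.open("photo.jpg")  # placeholder path to any local image
        messages = [{"role": "user", "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image."},
        ]}]
        prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
        inputs = processor(text=prompt, images=[image], return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=128)
        print(processor.batch_decode(out, skip_special_tokens=True)[0])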

  • Simon Willison’s Weblog: Quantization matters

    Source URL: https://simonwillison.net/2024/Nov/23/quantization-matters/#atom-everything Source: Simon Willison’s Weblog Title: Quantization matters Feedly Summary: Quantization matters What impact does quantization have on the performance of an LLM? I’ve been wondering about this for quite a while; now here are numbers from Paul Gauthier. He ran differently quantized versions of Qwen 2.5 32B Instruct through his Aider code editing…
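
    A rough sketch of why the quantization level matters for local use: back-of-envelope weight sizes for a 32B-parameter model at common quantization levels (the bits-per-weight figures are approximate, and real GGUF files add overhead for the KV cache and some full-precision tensors):

        # Approximate weight memory for a 32B-parameter model.
        params = 32e9
        for name, bits_per_weight in [("fp16", 16), ("q8_0", 8.5), ("q4_K_M", 4.8)]:
            gb = params * bits_per_weight / 8 / 1e9
            print(f"{name:7s} ~{gb:.0f} GB")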

  • Simon Willison’s Weblog: Ollama: Llama 3.2 Vision

    Source URL: https://simonwillison.net/2024/Nov/13/ollama-llama-vision/#atom-everything Source: Simon Willison’s Weblog Title: Ollama: Llama 3.2 Vision Feedly Summary: Ollama: Llama 3.2 Vision Ollama released version 0.4 last week with support for Meta’s first Llama vision model, Llama 3.2. If you have Ollama installed you can fetch the 11B model (7.9 GB) like this: ollama pull llama3.2-vision Or the larger…
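
    Once the model is pulled, a minimal sketch of using it from the ollama Python client; the images list in the message follows Ollama’s documented multimodal chat API, and the file path is a placeholder:

        # Ask Llama 3.2 Vision to describe a local image.
        import ollama

        response = ollama.chat(
            model="llama3.2-vision",
            messages=[{
                "role": "user",
                "content": "What is in this image?",
                "images": ["photo.jpg"],  # placeholder: path to a local image
            }],
        )
        print(response["message"]["content"])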

  • Simon Willison’s Weblog: Qwen2.5-Coder-32B is an LLM that can code well that runs on my Mac

    Source URL: https://simonwillison.net/2024/Nov/12/qwen25-coder/ Source: Simon Willison’s Weblog Title: Qwen2.5-Coder-32B is an LLM that can code well that runs on my Mac Feedly Summary: There’s a whole lot of buzz around the new Qwen2.5-Coder series of open source (Apache 2.0 licensed) LLM releases from Alibaba’s Qwen research team. On first impression it looks like the buzz…

  • Simon Willison’s Weblog: Everything I’ve learned so far about running local LLMs

    Source URL: https://simonwillison.net/2024/Nov/10/running-llms/#atom-everything Source: Simon Willison’s Weblog Title: Everything I’ve learned so far about running local LLMs Feedly Summary: Everything I’ve learned so far about running local LLMs Chris Wellons shares detailed notes on his experience running local LLMs on Windows – though most of these tips apply to other operating systems as well. This…

  • Simon Willison’s Weblog: Nous Hermes 3

    Source URL: https://simonwillison.net/2024/Nov/4/nous-hermes-3/#atom-everything Source: Simon Willison’s Weblog Title: Nous Hermes 3 Feedly Summary: Nous Hermes 3 The Nous Hermes family of fine-tuned models have a solid reputation. Their most recent release came out in August, based on Meta’s Llama 3.1: Our training data aggressively encourages the model to follow the system and instruction prompts exactly…
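
    Since the pitch here is strict system-prompt adherence, a hedged sketch of exercising exactly that, assuming a local Ollama install where the model is available (e.g. under the hermes3 tag):

        # Test how strictly Hermes 3 follows a system prompt.
        import ollama

        response = ollama.chat(
            model="hermes3",
            messages=[
                {"role": "system", "content": "Reply in exactly one short sentence."},
                {"role": "user", "content": "Explain what a fine-tuned model is."},
            ],
        )
        print(response["message"]["content"])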