Tag: local inference

  • Hacker News: I can now run a GPT-4 class model on my laptop

    Source URL: https://simonwillison.net/2024/Dec/9/llama-33-70b/
    Source: Hacker News
    Title: I can now run a GPT-4 class model on my laptop
    Feedly Summary: Comments
    AI Summary and Description: Yes
    Summary: The text discusses the advances in consumer-grade hardware capable of running powerful Large Language Models (LLMs), specifically highlighting Meta’s Llama 3.3 model’s performance on a MacBook Pro M2.…
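
    A minimal sketch of what running a model of this class locally can look like on Apple silicon, using the mlx-lm library. The quantized repo id, prompt, and generation parameters below are assumptions rather than details from the post, and the 4-bit 70B weights need on the order of 40 GB of free unified memory.

      # Sketch: local inference with mlx-lm (pip install mlx-lm) on an Apple-silicon Mac.
      # The repo id follows the mlx-community naming convention and is an assumption.
      from mlx_lm import load, generate

      model, tokenizer = load("mlx-community/Llama-3.3-70B-Instruct-4bit")

      prompt = "Explain in two sentences why quantization makes 70B models fit on a laptop."
      # generate() decodes token by token with MLX and returns the completion text.
      text = generate(model, tokenizer, prompt=prompt, max_tokens=200)
      print(text)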

  • Simon Willison’s Weblog: mistral.rs

    Source URL: https://simonwillison.net/2024/Oct/19/mistralrs/#atom-everything
    Source: Simon Willison’s Weblog
    Title: mistral.rs
    Feedly Summary: mistral.rs Here’s an LLM inference library written in Rust. It’s not just for that one family of models – like how llama.cpp has grown beyond Llama, mistral.rs has grown beyond Mistral. This is the first time I’ve been able to run the Llama 3.2…
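
    mistral.rs ships its own CLI and bindings; as a rough illustration of the load-then-generate pattern these local inference libraries share, here is a hedged sketch using llama-cpp-python, the Python wrapper around the llama.cpp project the post compares mistral.rs to. The GGUF path and parameters are placeholders, not anything taken from the post.

      # Sketch of the generic local-inference pattern: load quantized weights, then generate.
      # Uses llama-cpp-python (pip install llama-cpp-python); the model path is a placeholder.
      from llama_cpp import Llama

      llm = Llama(model_path="./models/llama-3.2-3b-instruct-q4_k_m.gguf", n_ctx=4096)

      out = llm("Q: What does an LLM inference library do? A:", max_tokens=128)
      print(out["choices"][0]["text"])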

  • Hacker News: Microsoft BitNet: inference framework for 1-bit LLMs

    Source URL: https://github.com/microsoft/BitNet
    Source: Hacker News
    Title: Microsoft BitNet: inference framework for 1-bit LLMs
    Feedly Summary: Comments
    AI Summary and Description: Yes
    Summary: The text describes “bitnet.cpp,” a specialized inference framework for 1-bit large language models (LLMs), specifically highlighting its performance enhancements, optimized kernel support, and installation instructions. This framework is poised to significantly influence…
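
    To make the “1-bit” idea concrete, here is an illustrative numpy sketch of the ternary (≈1.58-bit) absmean weight quantization that BitNet-style models use. It shows the quantization scheme itself, not bitnet.cpp’s optimized kernels, and the tensor sizes are arbitrary.

      # Illustration of BitNet-style ternary quantization: each weight is rounded to
      # {-1, 0, +1} and a single floating-point scale per tensor is kept. Specialized
      # kernels exploit this to replace multiplications with additions.
      import numpy as np

      def absmean_ternary_quantize(w, eps=1e-6):
          scale = np.abs(w).mean() + eps           # per-tensor absmean scale
          q = np.clip(np.round(w / scale), -1, 1)  # ternary weights in {-1, 0, +1}
          return q.astype(np.int8), scale

      def ternary_matmul(x, q, scale):
          # The scale is applied once at the end, after an add/subtract-only matmul.
          return (x @ q.astype(x.dtype)) * scale

      w = np.random.randn(512, 512).astype(np.float32)
      q, s = absmean_ternary_quantize(w)
      x = np.random.randn(4, 512).astype(np.float32)
      print(np.abs(ternary_matmul(x, q, s) - x @ w).mean())  # rough quantization error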

  • Simon Willison’s Weblog: lm.rs: run inference on Language Models locally on the CPU with Rust

    Source URL: https://simonwillison.net/2024/Oct/11/lmrs/
    Source: Simon Willison’s Weblog
    Title: lm.rs: run inference on Language Models locally on the CPU with Rust
    Feedly Summary: lm.rs: run inference on Language Models locally on the CPU with Rust Impressive new LLM inference implementation in Rust by Samuel Vitorino. I tried it just now on an M2 Mac with 64GB…