Tag: hugging

  • Simon Willison’s Weblog: Load Llama-3.2 WebGPU in your browser from a local folder

    Source URL: https://simonwillison.net/2025/Sep/8/webgpu-local-folder/#atom-everything
    Feedly Summary: Inspired by a comment on Hacker News I decided to see if it was possible to modify the transformers.js-examples/tree/main/llama-3.2-webgpu Llama 3.2 chat demo (online here,…

  • Slashdot: Switzerland Releases Open-Source AI Model Built For Privacy

    Source URL: https://news.slashdot.org/story/25/09/03/2125252/switzerland-releases-open-source-ai-model-built-for-privacy?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Feedly Summary: Switzerland’s launch of Apertus, a fully open-source multilingual LLM, emphasizes transparency and privacy in AI development. By providing open access to the model’s components and adhering to stringent Swiss data protection laws, Apertus…

  • Unit 42: Model Namespace Reuse: An AI Supply-Chain Attack Exploiting Model Name Trust

    Source URL: https://unit42.paloaltonetworks.com/model-namespace-reuse/
    Feedly Summary: Model namespace reuse is a potential security risk in the AI supply chain. Attackers can misuse platforms like Hugging Face for remote code execution.
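A mitigation commonly recommended for this class of attack is to pin a model to an immutable commit hash rather than a mutable `org/name` reference, so a re-registered namespace cannot silently serve different weights. A minimal sketch of that idea in Python — the helper functions and the example repo name are illustrative, though the `revision` parameter itself is part of the Hugging Face `from_pretrained`/`snapshot_download` APIs:

```python
import re

# A full 40-character git commit SHA is immutable; branch names and
# tags ("main", "v1.0") can be moved to point at different weights.
_FULL_SHA = re.compile(r"^[0-9a-f]{40}$")

def is_pinned_revision(revision: str) -> bool:
    """Return True only when `revision` is a full commit hash (illustrative helper)."""
    return bool(_FULL_SHA.match(revision))

def checked_model_ref(repo_id: str, revision: str) -> tuple[str, str]:
    """Refuse mutable references before they reach a model loader.

    The returned pair is what you would pass on to e.g.
    AutoModel.from_pretrained(repo_id, revision=revision).
    """
    if not is_pinned_revision(revision):
        raise ValueError(
            f"refusing mutable revision {revision!r} for {repo_id!r}; "
            "pin a full commit hash instead"
        )
    return repo_id, revision
```

This only guards the reference you load by; it does not verify the downloaded artifacts themselves.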

  • Simon Willison’s Weblog: llama.cpp guide: running gpt-oss with llama.cpp

    Source URL: https://simonwillison.net/2025/Aug/19/gpt-oss-with-llama-cpp/
    Feedly Summary: Really useful official guide to running the OpenAI gpt-oss models using llama-server from llama.cpp – which provides an OpenAI-compatible localhost API and a neat web interface for interacting with the models. TLDR version…
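Because the llama-server API mentioned above is OpenAI-compatible, any standard chat-completions client can talk to it. A minimal stdlib-only sketch, assuming llama-server is running locally on its default port 8080 (the endpoint path and payload shape follow the OpenAI chat-completions convention):

```python
import json
from urllib import request

# Assumed address: llama-server's default port with the
# OpenAI-compatible chat-completions path.
LLAMA_SERVER_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(prompt: str) -> str:
    """POST the prompt to a locally running llama-server and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = request.Request(
        LLAMA_SERVER_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Requires llama-server to be running; see the guide linked above.
    print(chat("Say hello in one sentence."))
```

The same payload works with the official `openai` Python client by pointing its `base_url` at the local server.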

  • Cloud Blog: An efficient path to production AI: Kakao’s journey with JAX and Cloud TPUs

    Source URL: https://cloud.google.com/blog/products/infrastructure-modernization/kakaos-journey-with-jax-and-cloud-tpus/
    Feedly Summary: When your messaging platform serves 49 million people – 93% of South Korea’s population – every technical decision carries enormous weight. The engineering team at Kakao faced exactly this challenge when their existing…

  • Simon Willison’s Weblog: Introducing Gemma 3 270M: The compact model for hyper-efficient AI

    Source URL: https://simonwillison.net/2025/Aug/14/gemma-3-270m/#atom-everything
    Feedly Summary: New from Google: Gemma 3 270M, a compact, 270-million parameter model designed from the ground up for task-specific fine-tuning with strong instruction-following and text structuring…
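To put the 270-million-parameter figure in perspective, a back-of-envelope calculation of raw weight storage at different precisions shows why a model this size fits comfortably on consumer hardware. This is only the parameter storage; it deliberately ignores activations, KV cache, and runtime overhead:

```python
# Rough weight-memory estimate for a 270M-parameter model at common precisions.
PARAMS = 270_000_000

BYTES_PER_PARAM = {
    "fp32": 4,    # full precision
    "bf16": 2,    # typical training/inference precision
    "int8": 1,    # 8-bit quantization
    "int4": 0.5,  # 4-bit quantization
}

def weight_memory_mb(params: int, dtype: str) -> float:
    """Approximate parameter storage in megabytes (1 MB = 1e6 bytes)."""
    return params * BYTES_PER_PARAM[dtype] / 1e6

for dtype in BYTES_PER_PARAM:
    print(f"{dtype}: ~{weight_memory_mb(PARAMS, dtype):.0f} MB")
# fp32: ~1080 MB, bf16: ~540 MB, int8: ~270 MB, int4: ~135 MB
```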