Tag: Hugging Face

  • Docker: Tooling ≠ Glue: Why changing AI workflows still feels like duct tape

    Source URL: https://www.docker.com/blog/why-changing-ai-workflows-still-feels-like-duct-tape/ Source: Docker Title: Tooling ≠ Glue: Why changing AI workflows still feels like duct tape Feedly Summary: There’s a weird contradiction in modern AI development. We have better tools than ever. We’re building smarter systems with cleaner abstractions. And yet, every time you try to swap out a component in your stack,…

  • Simon Willison’s Weblog: Qwen3-4B Instruct and Thinking

    Source URL: https://simonwillison.net/2025/Aug/6/qwen3-4b-instruct-and-thinking/ Source: Simon Willison’s Weblog Title: Qwen3-4B Instruct and Thinking Feedly Summary: Yet another interesting model from Qwen—these are tiny compared to their other recent releases (just 4B parameters, 7.5GB on Hugging Face and even smaller when quantized) but with a 262,144 context length, which Qwen suggest is essential…
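
    At 4B parameters this is a model you can realistically run yourself. A minimal sketch of trying the instruct variant with transformers, assuming the Hub id Qwen/Qwen3-4B-Instruct-2507 (check the post for the exact repos) and enough memory for the roughly 7.5GB of weights the summary cites:

    ```python
    # Sketch only: model id and memory figures come from the post, not verified here.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen3-4B-Instruct-2507"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    messages = [{"role": "user", "content": "In one sentence: why do small models matter?"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=100)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
    ```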

  • Cisco Security Blog: Cisco’s Foundation AI Advances AI Supply Chain Security With Hugging Face

    Source URL: https://feedpress.me/link/23535/17111768/ciscos-foundation-ai-advances-ai-supply-chain-security-with-hugging-face Source: Cisco Security Blog Title: Cisco’s Foundation AI Advances AI Supply Chain Security With Hugging Face Feedly Summary: Cisco’s Foundation AI is partnering with Hugging Face, bringing together the world’s leading AI model hub with Cisco’s security expertise. Cisco’s Foundation AI collaboration with Hugging Face exemplifies…
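
    The summary is truncated before any technical detail, but model supply-chain checks are concrete enough to sketch. This is not Cisco’s scanner, just an illustration of one basic integrity check with huggingface_hub: confirming a downloaded weight file matches the SHA-256 the Hub records for it. The repo id and filename below are hypothetical.

    ```python
    # Illustrative integrity check, not Cisco's tooling: verify a downloaded
    # LFS file against the SHA-256 recorded in the Hub's file metadata.
    import hashlib
    from huggingface_hub import HfApi, hf_hub_download

    repo_id = "some-org/some-model"    # hypothetical repository
    filename = "model.safetensors"     # hypothetical weight file

    info = HfApi().model_info(repo_id, files_metadata=True)
    expected = next(
        s.lfs.sha256 for s in info.siblings if s.rfilename == filename
    )

    path = hf_hub_download(repo_id, filename)
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)

    print("verified" if h.hexdigest() == expected else "hash mismatch: do not load")
    ```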

  • Simon Willison’s Weblog: My 2.5 year old laptop can write Space Invaders in JavaScript now

    Source URL: https://simonwillison.net/2025/Jul/29/space-invaders/ Source: Simon Willison’s Weblog Title: My 2.5 year old laptop can write Space Invaders in JavaScript now Feedly Summary: I wrote about the new GLM-4.5 model family yesterday – new open weight (MIT licensed) models from Z.ai in China which their benchmarks claim score highly in coding even against models such as…
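
    The post has Simon’s exact recipe; one plausible shape for that local setup is a quantized GLM-4.5 variant driven through mlx-lm on Apple silicon. The repo id below is an assumption, not taken from the post:

    ```python
    # Hedged sketch of running an open-weight coding model on a laptop via mlx-lm.
    from mlx_lm import load, generate

    model, tokenizer = load("mlx-community/GLM-4.5-Air-4bit")  # assumed quant repo
    messages = [{"role": "user", "content":
                 "Write a complete Space Invaders game as a single HTML file."}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, tokenize=False
    )
    print(generate(model, tokenizer, prompt=prompt, max_tokens=4096))
    ```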

  • Simon Willison’s Weblog: TimeScope: How Long Can Your Video Large Multimodal Model Go?

    Source URL: https://simonwillison.net/2025/Jul/23/timescope/#atom-everything Source: Simon Willison’s Weblog Title: TimeScope: How Long Can Your Video Large Multimodal Model Go? Feedly Summary: New open source benchmark for evaluating vision LLMs on how well they handle long videos: TimeScope probes the limits of long-video capabilities by inserting several…
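
    The quote cuts off, but the core move it describes is inserting short “needle” clips into long videos and testing whether the model can recall them. A rough sketch of that construction with moviepy; the paths and insertion point are illustrative, not TimeScope’s actual harness:

    ```python
    # Build a "needle in a video haystack" probe: splice a short clip that the
    # model will later be questioned about into a long filler video.
    from moviepy.editor import VideoFileClip, concatenate_videoclips

    haystack = VideoFileClip("long_filler_video.mp4")  # hypothetical path
    needle = VideoFileClip("short_needle_clip.mp4")    # hypothetical path

    t = haystack.duration * 0.6  # drop the needle 60% of the way through
    probe = concatenate_videoclips(
        [haystack.subclip(0, t), needle, haystack.subclip(t)]
    )
    probe.write_videofile("probe.mp4")
    # Evaluation then asks a question only the needle answers, sweeping the
    # haystack length to find where the model's recall degrades.
    ```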

  • Simon Willison’s Weblog: Qwen3-Coder: Agentic Coding in the World

    Source URL: https://simonwillison.net/2025/Jul/22/qwen3-coder/ Source: Simon Willison’s Weblog Title: Qwen3-Coder: Agentic Coding in the World Feedly Summary: It turns out that as I was typing up my notes on Qwen3-235B-A22B-Instruct-2507 the Qwen team were unleashing something much bigger: Today, we’re announcing Qwen3-Coder, our most agentic code model to date. Qwen3-Coder…
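
    A model this large is impractical to run on typical hardware, so in practice you would drive it through a hosted OpenAI-compatible endpoint. A sketch of that pattern; the base_url and model id are placeholders for whichever provider actually hosts it for you:

    ```python
    # Placeholder endpoint and model id: substitute your provider's values.
    from openai import OpenAI

    client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_KEY")
    resp = client.chat.completions.create(
        model="qwen3-coder",  # placeholder id
        messages=[{
            "role": "user",
            "content": "Refactor this function to be iterative instead of recursive: ...",
        }],
    )
    print(resp.choices[0].message.content)
    ```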

  • Simon Willison’s Weblog: Qwen/Qwen3-235B-A22B-Instruct-2507

    Source URL: https://simonwillison.net/2025/Jul/22/qwen3-235b-a22b-instruct-2507/#atom-everything Source: Simon Willison’s Weblog Title: Qwen/Qwen3-235B-A22B-Instruct-2507 Feedly Summary: Significant new model release from Qwen, published yesterday without much fanfare. This is a follow-up to their April release of the full Qwen 3 model family, which included a Qwen3-235B-A22B model which could handle both reasoning and non-reasoning prompts (via a /no_think toggle).…
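
    The /no_think toggle mentioned here is worth unpacking: the April Qwen3 models expose their hybrid reasoning mode through an enable_thinking flag in the chat template, alongside /think and /no_think soft switches inside prompts. A sketch using the small Qwen3-4B tokenizer for practicality (the 2507 Instruct release drops hybrid mode and is non-thinking only):

    ```python
    # Sketch of the hybrid-mode switch in the April Qwen3 family.
    # enable_thinking=False makes the chat template pre-fill an empty
    # <think></think> block, so the model answers directly instead of
    # emitting a reasoning trace first.
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B")
    messages = [{"role": "user", "content": "What is 27 * 43?"}]

    thinking = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
    )
    direct = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
    )
    print(direct)  # ends with an empty <think></think> block before the answer
    ```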