Tag: Apache 2
- Title: Qwen2.5-1M: Deploy Your Own Qwen with Context Length up to 1M Tokens
  Source: Simon Willison’s Weblog
  Source URL: https://simonwillison.net/2025/Jan/26/qwen25-1m/
  Feedly Summary: Very significant new release from Alibaba’s Qwen team. Their openly licensed (sometimes Apache 2, sometimes Qwen license, I’ve had trouble keeping…
- Title: SP1: A performant, 100% open-source, contributor-friendly zkVM
  Source: Hacker News
  Source URL: https://blog.succinct.xyz/introducing-sp1/
  AI Summary: The text introduces the Succinct Processor 1 (SP1), a next-generation zero-knowledge virtual machine (zkVM) that enhances transaction execution speed and efficiency, specifically for Rust and LLVM-compiled languages. SP1 is designed to be…
- Title: Alibaba releases an ‘open’ challenger to OpenAI’s O1 reasoning model
  Source: Hacker News
  Source URL: https://techcrunch.com/2024/11/27/alibaba-releases-an-open-challenger-to-openais-o1-reasoning-model/
  AI Summary: The arrival of the QwQ-32B-Preview model from Alibaba’s Qwen team introduces a significant competitor to OpenAI’s offerings in the AI reasoning space. With its innovative self-fact-checking capabilities and ability…
- Title: SmolVLM – small yet mighty Vision Language Model
  Source: Simon Willison’s Weblog
  Source URL: https://simonwillison.net/2024/Nov/28/smolvlm/#atom-everything
  Feedly Summary: I’ve been having fun playing with this new vision model from the Hugging Face team behind SmolLM. They describe it as: […] a 2B VLM, SOTA for its memory… (a hedged loading sketch for SmolVLM follows this list)
- Title: Qwen2.5-Coder-32B is an LLM that can code well that runs on my Mac
  Source: Simon Willison’s Weblog
  Source URL: https://simonwillison.net/2024/Nov/12/qwen25-coder/
  Feedly Summary: There’s a whole lot of buzz around the new Qwen2.5-Coder series of open source (Apache 2.0 licensed) LLM releases from Alibaba’s Qwen research team. On first impression it looks like the buzz…
- Title: Hyperlight: Virtual machine-based security for functions at scale
  Source: Hacker News
  Source URL: https://opensource.microsoft.com/blog/2024/11/07/introducing-hyperlight-virtual-machine-based-security-for-functions-at-scale/
  AI Summary: The text discusses the launch of Hyperlight, a new open-source Rust library by Microsoft’s Azure Core Upstream team. Hyperlight enables the execution of small, embedded functions in a secure and efficient…
- Title: SmolLM2
  Source: Hacker News
  Source URL: https://simonwillison.net/2024/Nov/2/smollm2/
  AI Summary: The text introduces SmolLM2, a new family of compact language models from Hugging Face, designed for lightweight on-device operations. The models, which range from 135M to 1.7B parameters, were trained on 11 trillion tokens across diverse datasets, showcasing…
- Title: SmolLM2
  Source: Simon Willison’s Weblog
  Source URL: https://simonwillison.net/2024/Nov/2/smollm2/#atom-everything
  Feedly Summary: New from Loubna Ben Allal and her research team at Hugging Face: SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough… (see the SmolLM2 loading sketch below)
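To make the SmolLM2 entry’s “lightweight on-device” claim concrete, here is a minimal sketch of loading the smallest instruct variant with Hugging Face transformers. The checkpoint id HuggingFaceTB/SmolLM2-135M-Instruct and the chat-template flow are assumptions based on the usual Hugging Face release pattern, not details taken from the posts above.

```python
# Minimal sketch: run a small SmolLM2 chat model locally with transformers.
# Assumption: the checkpoint id "HuggingFaceTB/SmolLM2-135M-Instruct" follows
# the standard naming for this release; adjust it if the actual id differs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM2-135M-Instruct"  # 135M / 360M / 1.7B sizes exist
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a chat prompt using the model's own chat template.
messages = [{"role": "user", "content": "What are compact on-device LLMs good for?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate a short reply; a model this small runs comfortably on CPU.
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```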
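For the SmolVLM entry, a rough sketch of describing an image with transformers follows. The checkpoint id HuggingFaceTB/SmolVLM-Instruct, the AutoModelForVision2Seq class, and the Idefics-style message format are assumptions based on how comparable Hugging Face vision-language models are loaded; the post itself may run the model differently.

```python
# Rough sketch: caption an image with a small vision-language model.
# Assumptions: checkpoint id "HuggingFaceTB/SmolVLM-Instruct" and the
# Idefics-style chat message format; both may differ from the actual release.
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceTB/SmolVLM-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id)

image = Image.open("photo.jpg")  # placeholder path
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image in one sentence."},
    ]}
]

# Render the chat template, bundle text + image, and generate a caption.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```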