Tag: Qwen
-
Hacker News: Qwen2.5-7B-Instruct-1M and Qwen2.5-14B-Instruct-1M
Source URL: https://simonwillison.net/2025/Jan/26/qwen25-1m/ Source: Hacker News Title: Qwen2.5-7B-Instruct-1M and Qwen2.5-14B-Instruct-1M Feedly Summary: Comments AI Summary and Description: Yes Summary: The Qwen 2.5 model release from Alibaba introduces a significant advancement in Large Language Model (LLM) capabilities with its ability to process up to 1 million tokens. This increase in input capacity is made possible through…
-
Simon Willison’s Weblog: Qwen2.5-1M: Deploy Your Own Qwen with Context Length up to 1M Tokens
Source URL: https://simonwillison.net/2025/Jan/26/qwen25-1m/ Source: Simon Willison’s Weblog Title: Qwen2.5-1M: Deploy Your Own Qwen with Context Length up to 1M Tokens Feedly Summary: Qwen2.5-1M: Deploy Your Own Qwen with Context Length up to 1M Tokens Very significant new release from Alibaba’s Qwen team. Their openly licensed (sometimes Apache 2, sometimes Qwen license, I’ve had trouble keeping…
-
Hacker News: Official DeepSeek R1 Now on Ollama
Source URL: https://ollama.com/library/deepseek-r1 Source: Hacker News Title: Official DeepSeek R1 Now on Ollama Feedly Summary: Comments AI Summary and Description: Yes Summary: The text provides an overview of DeepSeek’s first-generation reasoning models that exhibit performance comparable to OpenAI’s offerings across math, code, and reasoning tasks. This information is highly relevant for practitioners in AI and…
-
Simon Willison’s Weblog: DeepSeek-R1 and exploring DeepSeek-R1-Distill-Llama-8B
Source URL: https://simonwillison.net/2025/Jan/20/deepseek-r1/ Source: Simon Willison’s Weblog Title: DeepSeek-R1 and exploring DeepSeek-R1-Distill-Llama-8B Feedly Summary: DeepSeek are the Chinese AI lab who dropped the best currently available open weights LLM on Christmas day, DeepSeek v3. That model was trained in part using their unreleased R1 “reasoning” model. Today they’ve released R1 itself, along with a whole…
-
Hacker News: DeepSeek-R1
Source URL: https://github.com/deepseek-ai/DeepSeek-R1 Source: Hacker News Title: DeepSeek-R1 Feedly Summary: Comments AI Summary and Description: Yes Summary: The text presents advancements in AI reasoning models, specifically DeepSeek-R1-Zero and DeepSeek-R1, emphasizing the unique approach of training solely through large-scale reinforcement learning (RL) without initial supervised fine-tuning. These models demonstrate significant reasoning capabilities and highlight breakthroughs in…
-
Simon Willison’s Weblog: Codestral 25.01
Source URL: https://simonwillison.net/2025/Jan/13/codestral-2501/ Source: Simon Willison’s Weblog Title: Codestral 25.01 Feedly Summary: Codestral 25.01 Brand new code-focused model from Mistral. Unlike the first Codestral, this one isn’t (yet) available as open weights. The model has a 256k token context – a new record for Mistral. The new model scored an impressive joint first place with…
-
Hacker News: Killed by LLM
Source URL: https://r0bk.github.io/killedbyllm/ Source: Hacker News Title: Killed by LLM Feedly Summary: Comments AI Summary and Description: Yes Summary: The provided text discusses a methodology for documenting benchmarks related to Large Language Models (LLMs), highlighting the inconsistencies among various performance scores. This is particularly relevant for professionals in AI security and LLM security, as it…