Tag: tokens

  • Krebs on Security: A Tumultuous Week for Federal Cybersecurity Efforts

    Source URL: https://krebsonsecurity.com/2025/01/a-tumultuous-week-for-federal-cybersecurity-efforts/
    Source: Krebs on Security
    Title: A Tumultuous Week for Federal Cybersecurity Efforts
    Feedly Summary: President Trump last week issued a flurry of executive orders that upended a number of government initiatives focused on improving the nation’s cybersecurity posture. The president fired all advisors from the Department of Homeland Security’s Cyber Safety Review…

  • Simon Willison’s Weblog: The impact of competition and DeepSeek on Nvidia

    Source URL: https://simonwillison.net/2025/Jan/27/deepseek-nvidia/
    Source: Simon Willison’s Weblog
    Title: The impact of competition and DeepSeek on Nvidia
    Feedly Summary: Long, excellent piece by Jeffrey Emanuel capturing the current state of the AI/LLM industry. The original title is “The Short Case for Nvidia Stock” – I’m using the Hacker…

  • Simon Willison’s Weblog: Quoting Paul Gauthier

    Source URL: https://simonwillison.net/2025/Jan/26/paul-gauthier/
    Source: Simon Willison’s Weblog
    Title: Quoting Paul Gauthier
    Feedly Summary: In my experience with AI coding, very large context windows aren’t useful in practice. Every model seems to get confused when you feed them more than ~25-30k tokens. The models stop obeying their system prompts, can’t correctly find/transcribe pieces of code in…
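
    A minimal sketch of how one might act on that ~25-30k observation: count tokens up front and only include context that fits a budget. The budget value, the chunk-by-file strategy, and the choice of tiktoken’s cl100k_base encoding are illustrative assumptions, not anything prescribed by the quoted post.

    ```python
    # Enforce a token budget before sending context to a model, keeping whole
    # chunks (e.g. source files) rather than truncating mid-file.
    import tiktoken

    TOKEN_BUDGET = 25_000  # assumed ceiling, per the quote's 25-30k figure

    enc = tiktoken.get_encoding("cl100k_base")  # one tokenizer among many

    def fit_to_budget(chunks: list[str], budget: int = TOKEN_BUDGET) -> list[str]:
        """Greedily keep whole chunks until the token budget would be exceeded."""
        kept, used = [], 0
        for chunk in chunks:
            n = len(enc.encode(chunk))
            if used + n > budget:
                break  # drop the remainder instead of splitting a chunk
            kept.append(chunk)
            used += n
        return kept
    ```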

  • Simon Willison’s Weblog: Anomalous Tokens in DeepSeek-V3 and r1

    Source URL: https://simonwillison.net/2025/Jan/26/anomalous-tokens-in-deepseek-v3-and-r1/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Anomalous Tokens in DeepSeek-V3 and r1
    Feedly Summary: Glitch tokens (previously) are tokens or strings that trigger strange behavior in LLMs, hinting at oddities in their tokenizers or model weights. Here’s a fun exploration of them across DeepSeek v3 and R1.…
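
    A rough sketch of one common heuristic for surfacing candidate glitch tokens: decode each token id to a string, re-encode it, and flag ids that fail the round trip. This only probes tokenizer-level oddities (not weight-level ones), false positives are expected, and the Hugging Face repo id below is an assumption; point it at any tokenizer you want to probe.

    ```python
    # Round-trip probe: a token id that does not survive decode -> encode is a
    # candidate anomalous token worth testing against the model directly.
    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3")  # assumed repo id

    candidates = []
    for token_id in range(tok.vocab_size):
        text = tok.decode([token_id])
        if tok.encode(text, add_special_tokens=False) != [token_id]:
            candidates.append((token_id, text))

    print(f"{len(candidates)} token ids failed the round-trip check")
    ```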

  • Hacker News: Qwen2.5-1M: Deploy Your Own Qwen with Context Length Up to 1M Tokens

    Source URL: https://qwenlm.github.io/blog/qwen2.5-1m/
    Source: Hacker News
    Title: Qwen2.5-1M: Deploy Your Own Qwen with Context Length Up to 1M Tokens
    Feedly Summary: The text reports on the new release of the open-source Qwen2.5-1M models, capable of processing up to one million tokens, significantly improving inference speed and model performance…

  • Hacker News: Qwen2.5-7B-Instruct-1M and Qwen2.5-14B-Instruct-1M

    Source URL: https://simonwillison.net/2025/Jan/26/qwen25-1m/
    Source: Hacker News
    Title: Qwen2.5-7B-Instruct-1M and Qwen2.5-14B-Instruct-1M
    Feedly Summary: The Qwen 2.5 model release from Alibaba introduces a significant advancement in Large Language Model (LLM) capabilities with its ability to process up to 1 million tokens. This increase in input capacity is made possible through…

  • Simon Willison’s Weblog: Qwen2.5-1M: Deploy Your Own Qwen with Context Length up to 1M Tokens

    Source URL: https://simonwillison.net/2025/Jan/26/qwen25-1m/
    Source: Simon Willison’s Weblog
    Title: Qwen2.5-1M: Deploy Your Own Qwen with Context Length up to 1M Tokens
    Feedly Summary: Very significant new release from Alibaba’s Qwen team. Their openly licensed (sometimes Apache 2, sometimes Qwen license, I’ve had trouble keeping…
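
    A minimal sketch of loading one of these 1M-context checkpoints with plain transformers. The repo id follows the release naming above but is still an assumption, and actually exercising contexts anywhere near 1M tokens needs far more memory plus the Qwen team’s recommended serving stack; this only shows a short generation against the instruct model.

    ```python
    # Load the 7B instruct variant and run a short chat-style generation.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen2.5-7B-Instruct-1M"  # assumed Hugging Face repo id
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    messages = [{"role": "user", "content": "Summarize the idea of a 1M-token context window."}]
    inputs = tok.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    out = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
    ```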