Tag: over
-
Hacker News: Show HN: DeepSeek My User Agent
Source URL: https://www.jasonthorsness.com/20
Source: Hacker News
Title: Show HN: DeepSeek My User Agent
Feedly Summary: Comments
AI Summary and Description: Yes
Summary: The text discusses “DeepSeek R1,” a newly launched model and service that brings chain-of-thought reasoning to users. It offers live interaction and API access, with pricing competitive with existing models…
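As a rough illustration of the API access mentioned above, below is a minimal sketch of calling a reasoning model such as DeepSeek R1 through an OpenAI-compatible chat endpoint. The base URL, model name, environment variable, and reasoning_content field are assumptions for illustration, not details taken from the linked post.

# Minimal sketch: querying a chain-of-thought ("reasoning") model through an
# OpenAI-compatible chat-completions API. The base URL, model name, env var,
# and the reasoning_content field are illustrative assumptions.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",     # assumed OpenAI-compatible endpoint
    api_key=os.environ["DEEPSEEK_API_KEY"],  # hypothetical environment variable
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed identifier for the R1 model
    messages=[{"role": "user", "content": "Describe the user agent string I sent you."}],
)

message = response.choices[0].message
# Some reasoning APIs expose the chain of thought as a separate field;
# fall back to None if this server does not return one.
print(getattr(message, "reasoning_content", None))
print(message.content)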
-
Slashdot: Bad Week for Unoccupied Waymo Cars: One Hit in Fatal Collision, One Vandalized by Mob
Source URL: https://tech.slashdot.org/story/25/01/26/2150209/bad-week-for-unoccupied-waymo-cars-one-hit-in-fatal-collision-one-vandalized-by-mob?utm_source=rss1.0mainlinkanon&utm_medium=feed
Source: Slashdot
Title: Bad Week for Unoccupied Waymo Cars: One Hit in Fatal Collision, One Vandalized by Mob
Feedly Summary:
AI Summary and Description: Yes
Summary: The text discusses a significant incident in which an unoccupied Waymo self-driving car was struck in a fatal collision, marking a historic event in the realm…
-
Simon Willison’s Weblog: Quoting Paul Gauthier
Source URL: https://simonwillison.net/2025/Jan/26/paul-gauthier/
Source: Simon Willison’s Weblog
Title: Quoting Paul Gauthier
Feedly Summary: In my experience with AI coding, very large context windows aren’t useful in practice. Every model seems to get confused when you feed them more than ~25-30k tokens. The models stop obeying their system prompts, can’t correctly find/transcribe pieces of code in…
-
Hacker News: Qwen2.5-1M: Deploy Your Own Qwen with Context Length Up to 1M Tokens
Source URL: https://qwenlm.github.io/blog/qwen2.5-1m/
Source: Hacker News
Title: Qwen2.5-1M: Deploy Your Own Qwen with Context Length Up to 1M Tokens
Feedly Summary: Comments
AI Summary and Description: Yes
Summary: The text reports on the release of the open-source Qwen2.5-1M models, which can process contexts of up to one million tokens while significantly improving inference speed and model performance…
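As a rough illustration of the “deploy your own” workflow in the item above, here is a sketch of sending a very long document to a self-hosted Qwen2.5-1M model through a locally served OpenAI-compatible endpoint. The serve command, port, context limit, and input file are assumptions; the linked post documents the actual supported deployment path (including a customized vLLM build for the full 1M-token window).

# Minimal sketch: querying a self-hosted Qwen2.5-1M model over an
# OpenAI-compatible endpoint. The serve command below, the port, the context
# limit, and the input file are assumptions for illustration.
#
#   vllm serve Qwen/Qwen2.5-7B-Instruct-1M --max-model-len 1010000
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

with open("large_codebase_dump.txt", encoding="utf-8") as f:  # hypothetical long input
    document = f.read()

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct-1M",
    messages=[
        {"role": "user", "content": f"{document}\n\nSummarize the main components described above."},
    ],
)
print(response.choices[0].message.content)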
-
Hacker News: Qwen2.5-7B-Instruct-1M and Qwen2.5-14B-Instruct-1M
Source URL: https://simonwillison.net/2025/Jan/26/qwen25-1m/
Source: Hacker News
Title: Qwen2.5-7B-Instruct-1M and Qwen2.5-14B-Instruct-1M
Feedly Summary: Comments
AI Summary and Description: Yes
Summary: The Qwen 2.5 model release from Alibaba introduces a significant advancement in Large Language Model (LLM) capabilities with its ability to process up to 1 million tokens. This increase in input capacity is made possible through…
-
Hacker News: Explainer: What’s R1 and Everything Else?
Source URL: https://timkellogg.me/blog/2025/01/25/r1
Source: Hacker News
Title: Explainer: What’s R1 and Everything Else?
Feedly Summary: Comments
AI Summary and Description: Yes
Summary: The text provides an informative overview of recent developments in AI, particularly focusing on Reasoning Models and their significance in the ongoing evolution of AI technologies. It discusses the releases of models such…