Tag: llms

  • Slashdot: OpenAI’s First Study On ChatGPT Usage

    Source URL: https://slashdot.org/story/25/09/15/2151235/openais-first-study-on-chatgpt-usage?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Source: Slashdot
    Title: OpenAI’s First Study On ChatGPT Usage
    Feedly Summary: AI Summary and Description: Yes
    Summary: The text provides insights from a groundbreaking National Bureau of Economic Research working paper that analyzes usage data for ChatGPT, revealing significant demographic trends and behavioral patterns among users. This data is particularly relevant for…

  • Slashdot: Google Releases VaultGemma, Its First Privacy-Preserving LLM

    Source URL: https://yro.slashdot.org/story/25/09/16/000202/google-releases-vaultgemma-its-first-privacy-preserving-llm?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Source: Slashdot
    Title: Google Releases VaultGemma, Its First Privacy-Preserving LLM
    Feedly Summary: AI Summary and Description: Yes
    Summary: The text discusses recent advancements in LLMs, particularly the integration of differential privacy to mitigate the risk of memorization of sensitive training data. It highlights the balance between privacy and model performance, introducing…
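    The excerpt stops before the details, but the standard way to get differential privacy during training (as in DP-SGD) is to clip each example’s gradient and add calibrated Gaussian noise before the weight update, which is what limits memorization of any single training example. A minimal NumPy sketch of that one step follows; the clip norm and noise multiplier are illustrative choices, not VaultGemma’s actual training configuration.

      import numpy as np

      def dp_noisy_mean_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
          # Clip each per-example gradient so no single example dominates.
          clipped = []
          for g in per_example_grads:
              norm = np.linalg.norm(g)
              clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
          # Sum, add Gaussian noise scaled to the clip norm, then average.
          summed = np.sum(clipped, axis=0)
          noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
          return (summed + noise) / len(per_example_grads)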

  • Simon Willison’s Weblog: GPT‑5-Codex and upgrades to Codex

    Source URL: https://simonwillison.net/2025/Sep/15/gpt-5-codex/#atom-everything
    Source: Simon Willison’s Weblog
    Title: GPT‑5-Codex and upgrades to Codex
    Feedly Summary: GPT‑5-Codex and upgrades to Codex
    OpenAI half-released a new model today: GPT‑5-Codex, a fine-tuned GPT-5 variant explicitly designed for their various AI-assisted programming tools. I say half-released because it’s not yet available via their API, but they “plan to make…

  • Tomasz Tunguz: How AI Tools Differ from Human Tools

    Source URL: https://www.tomtunguz.com/tools-evolution/
    Source: Tomasz Tunguz
    Title: How AI Tools Differ from Human Tools
    Feedly Summary: Now that we’ve compressed nearly all human knowledge into large language models, the next frontier is tool calling. Chaining together different AI tools enables automation. The shift from thinking to doing represents the real breakthrough in AI utility. I’ve…
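    The summary is truncated, but the tool-calling pattern it points at is easy to sketch: the model replies with a structured request to run a tool, the host program runs it, and the result goes back into the conversation until the model produces a final answer. The call_model function and tool names below are hypothetical placeholders standing in for a real LLM API, not any particular vendor’s interface.

      import json

      # Hypothetical registry of tools the model is allowed to invoke.
      TOOLS = {
          "get_weather": lambda city: {"city": city, "temp_c": 18},
          "search_docs": lambda query: {"hits": [f"result for {query!r}"]},
      }

      def call_model(messages):
          # Placeholder for a real LLM call; assumed to return either
          # {"tool": name, "arguments": {...}} or {"answer": "..."}.
          raise NotImplementedError

      def run_agent(user_message, max_steps=5):
          messages = [{"role": "user", "content": user_message}]
          for _ in range(max_steps):
              reply = call_model(messages)
              if "answer" in reply:              # model chose to finish
                  return reply["answer"]
              result = TOOLS[reply["tool"]](**reply["arguments"])
              messages.append({"role": "tool", "content": json.dumps(result)})
          return "step limit reached"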

  • Simon Willison’s Weblog: Models can prompt now

    Source URL: https://simonwillison.net/2025/Sep/14/models-can-prompt/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Models can prompt now
    Feedly Summary: Here’s an interesting example of models incrementally improving over time: I am finding that today’s leading models are competent at writing prompts for themselves and each other. A year ago I was quite skeptical of the pattern where models are used…
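    A minimal sketch of the pattern Willison describes, in which one model writes the prompt that a second model then runs; call_model is a stand-in for whatever completion API is in use, and the model names are invented for illustration.

      def call_model(model, prompt):
          # Stand-in for an actual LLM API call (e.g. a chat-completions endpoint).
          raise NotImplementedError

      def delegate(task_description):
          # Ask a "planner" model to write a prompt for another model.
          meta_prompt = (
              "Write a clear, self-contained prompt that instructs another LLM "
              f"to complete this task: {task_description}"
          )
          generated_prompt = call_model("planner-model", meta_prompt)
          # Hand the generated prompt to a "worker" model and return its output.
          return call_model("worker-model", generated_prompt)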

  • Simon Willison’s Weblog: gpt-5 and gpt-5-mini rate limit updates

    Source URL: https://simonwillison.net/2025/Sep/12/gpt-5-rate-limits/#atom-everything
    Source: Simon Willison’s Weblog
    Title: gpt-5 and gpt-5-mini rate limit updates
    Feedly Summary: gpt-5 and gpt-5-mini rate limit updates
    OpenAI have increased the rate limits for their two main GPT-5 models. These look significant:
    gpt-5:
      Tier 1: 30K → 500K TPM (1.5M batch)
      Tier 2: 450K → 1M (3M batch)
      Tier 3: …
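    Since the limits quoted above are tokens-per-minute (TPM) budgets, a simple client-side throttle can help stay under them; this is a generic pacing sketch, not anything from OpenAI’s SDK, and the 500,000 default simply mirrors the new Tier 1 gpt-5 figure.

      import time
      from collections import deque

      class TPMThrottle:
          """Block until a request of `tokens` fits under the per-minute budget."""

          def __init__(self, tokens_per_minute=500_000):
              self.budget = tokens_per_minute
              self.window = deque()  # (timestamp, tokens) pairs from the last 60s

          def acquire(self, tokens):
              while True:
                  now = time.monotonic()
                  # Drop entries older than one minute.
                  while self.window and now - self.window[0][0] > 60:
                      self.window.popleft()
                  used = sum(t for _, t in self.window)
                  if used + tokens <= self.budget:
                      self.window.append((now, tokens))
                      return
                  time.sleep(1)  # wait for the window to free up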

  • Simon Willison’s Weblog: Comparing the memory implementations of Claude and ChatGPT

    Source URL: https://simonwillison.net/2025/Sep/12/claude-memory/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Comparing the memory implementations of Claude and ChatGPT
    Feedly Summary: Claude Memory: A Different Philosophy
    Shlok Khemani has been doing excellent work reverse-engineering LLM systems and documenting his discoveries. Last week he wrote about ChatGPT memory. This week it’s Claude. Claude’s memory system has two fundamental characteristics.…
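    The excerpt does not say how either product implements memory, so the sketch below shows only the generic shape of an LLM memory layer: store short facts as they come up and pull the relevant ones back into the prompt later. Real systems typically rely on embeddings, summaries, or tool calls rather than this naive keyword match.

      class SimpleMemory:
          # Naive keyword-overlap memory store, purely for illustration.

          def __init__(self):
              self.facts = []

          def remember(self, fact):
              self.facts.append(fact)

          def recall(self, query, limit=3):
              words = set(query.lower().split())
              scored = [(len(words & set(f.lower().split())), f) for f in self.facts]
              return [f for score, f in sorted(scored, reverse=True)[:limit] if score]

      memory = SimpleMemory()
      memory.remember("The user prefers concise answers.")
      memory.remember("The user's project is written in Python.")
      print(memory.recall("What language is the project written in?"))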

  • Simon Willison’s Weblog: Defeating Nondeterminism in LLM Inference

    Source URL: https://simonwillison.net/2025/Sep/11/defeating-nondeterminism/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Defeating Nondeterminism in LLM Inference
    Feedly Summary: Defeating Nondeterminism in LLM Inference
    A very common question I see about LLMs concerns why they can’t be made to deliver the same response to the same prompt by setting a fixed random number seed. Like many others I had…
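    Part of the answer rests on the fact that floating-point addition is not associative, so changing the order or grouping of a reduction (as different batch sizes and parallel kernels can) may change the result even when every random seed is fixed. A tiny demonstration of that underlying property:

      import random

      # Summing the same floats in a different order can give a different
      # answer, which is one reason a fixed seed alone does not make LLM
      # inference bit-for-bit deterministic.
      values = [random.uniform(-1e6, 1e6) for _ in range(10_000)]
      forward = sum(values)
      backward = sum(reversed(values))
      print(forward == backward)        # often False
      print(abs(forward - backward))    # small but nonzero difference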