Tag: user interactions

  • Wired: Anthropic Will Use Claude Chats for Training Data. Here’s How to Opt Out

    Source URL: https://www.wired.com/story/anthropic-using-claude-chats-for-training-how-to-opt-out/ Source: Wired Title: Anthropic Will Use Claude Chats for Training Data. Here’s How to Opt Out Feedly Summary: Anthropic is starting to train its models on new Claude chats. If you’re using the bot and don’t want your chats used as training data, here’s how to opt out. AI Summary and Description:…

  • Simon Willison’s Weblog: Quoting Nick Turley

    Source URL: https://simonwillison.net/2025/Sep/28/nick-turley/#atom-everything Source: Simon Willison’s Weblog Title: Quoting Nick Turley Feedly Summary: We’ve seen the strong reactions to 4o responses and want to explain what is happening. We’ve started testing a new safety routing system in ChatGPT. As we previously mentioned, when conversations touch on sensitive and emotional topics the system may switch mid-chat…

  • Tomasz Tunguz: Modernizing Agent Tools with Google ADK Patterns: 60% Token Reduction & Enterprise Safety

    Source URL: https://www.tomtunguz.com/modernizing-agent-tools-with-google-adk-patterns/ Source: Tomasz Tunguz Title: Modernizing Agent Tools with Google ADK Patterns: 60% Token Reduction & Enterprise Safety Feedly Summary: I recently discovered Google’s Agent Development Kit (ADK) and its architectural patterns for building LLM-powered applications. While ADK is a Python framework, its core design principles proved transformative when applied to my existing…

  • Cloud Blog: Achieve agentic productivity with Vertex AI Agent Builder

    Source URL: https://cloud.google.com/blog/products/ai-machine-learning/get-started-with-vertex-ai-agent-builder/ Source: Cloud Blog Title: Achieve agentic productivity with Vertex AI Agent Builder Feedly Summary: Enterprises need to move from experimenting with AI agents to achieving real productivity, but many struggle to scale their agents from prototypes to secure, production-ready systems.  The question is no longer if agents deliver value, but how to…

  • Simon Willison’s Weblog: Anthropic: A postmortem of three recent issues

    Source URL: https://simonwillison.net/2025/Sep/17/anthropic-postmortem/ Source: Simon Willison’s Weblog Title: Anthropic: A postmortem of three recent issues Feedly Summary: Anthropic: A postmortem of three recent issues Anthropic had a very bad month in terms of model reliability: Between August and early September, three infrastructure bugs intermittently degraded Claude’s response quality. We’ve now resolved these issues and want…

  • Simon Willison’s Weblog: GPT‑5-Codex and upgrades to Codex

    Source URL: https://simonwillison.net/2025/Sep/15/gpt-5-codex/#atom-everything Source: Simon Willison’s Weblog Title: GPT‑5-Codex and upgrades to Codex Feedly Summary: GPT‑5-Codex and upgrades to Codex OpenAI half-released a new model today: GPT‑5-Codex, a fine-tuned GPT-5 variant explicitly designed for their various AI-assisted programming tools. I say half-released because it’s not yet available via their API, but they “plan to make…

  • AWS Open Source Blog: Strands Agents and the Model-Driven Approach

    Source URL: https://aws.amazon.com/blogs/opensource/strands-agents-and-the-model-driven-approach/ Source: AWS Open Source Blog Title: Strands Agents and the Model-Driven Approach Feedly Summary: Until recently, building AI agents meant wrestling with complex orchestration frameworks. Developers wrote elaborate state machines, predefined workflows, and extensive error-handling code to guide language models through multi-step tasks. We needed to build elaborate decision trees to handle…

  • Schneier on Security: Indirect Prompt Injection Attacks Against LLM Assistants

    Source URL: https://www.schneier.com/blog/archives/2025/09/indirect-prompt-injection-attacks-against-llm-assistants.html Source: Schneier on Security Title: Indirect Prompt Injection Attacks Against LLM Assistants Feedly Summary: Really good research on practical attacks against LLM agents. “Invitation Is All You Need! Promptware Attacks Against LLM-Powered Assistants in Production Are Practical and Dangerous” Abstract: The growing integration of LLMs into applications has introduced new security risks,…

  • Slashdot: OpenAI Is Scanning Users’ ChatGPT Conversations and Reporting Content To Police

    Source URL: https://yro.slashdot.org/story/25/08/31/2311231/openai-is-scanning-users-chatgpt-conversations-and-reporting-content-to-police Source: Slashdot Title: OpenAI Is Scanning Users’ ChatGPT Conversations and Reporting Content To Police Feedly Summary: AI Summary and Description: Yes Summary: The text highlights OpenAI’s controversial practice of monitoring user conversations in ChatGPT for threats, revealing significant security and privacy implications. This admission raises questions about the balance between safety and…