Tag: authors

  • Slashdot: Anthropic Settles Major AI Copyright Suit Brought by Authors

    Source URL: https://yro.slashdot.org/story/25/08/26/1848219/anthropic-settles-major-ai-copyright-suit-brought-by-authors?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Source: Slashdot
    Title: Anthropic Settles Major AI Copyright Suit Brought by Authors
    Feedly Summary: AI Summary and Description: Yes
    Summary: The text discusses a settlement reached between Anthropic and a group of authors in a copyright class action lawsuit, underscoring the legal complexities surrounding AI development, particularly in relation to copyright issues.…

  • Unit 42: Keys to the Kingdom: Erlang/OTP SSH Vulnerability Analysis and Exploits Observed in the Wild

    Source URL: https://unit42.paloaltonetworks.com/erlang-otp-cve-2025-32433/
    Source: Unit 42
    Title: Keys to the Kingdom: Erlang/OTP SSH Vulnerability Analysis and Exploits Observed in the Wild
    Feedly Summary: CVE-2025-32433 allows for remote code execution in sshd for certain versions of the Erlang programming language’s OTP. We reproduced this CVE and share our findings. The post Keys to the Kingdom: Erlang/OTP SSH…
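
    The item above concerns CVE-2025-32433, a remote code execution flaw in the SSH daemon shipped with certain Erlang/OTP versions. Purely as a triage aid, and not something taken from the Unit 42 write-up, the sketch below grabs an SSH server's identification banner and flags hosts that appear to be running Erlang/OTP's sshd; the target host and the "Erlang" banner substring are assumptions, and any hit should be checked against the official advisory for patched OTP versions.

    ```python
    # Minimal banner-grab triage sketch (assumption: Erlang/OTP's SSH daemon
    # advertises itself in its SSH identification string, e.g. "SSH-2.0-Erlang/...").
    # It only flags candidates for patch review; it does not test the vulnerability.
    import socket

    def grab_ssh_banner(host: str, port: int = 22, timeout: float = 5.0) -> str:
        """Return the server's SSH identification line (RFC 4253 banner exchange)."""
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return sock.recv(256).decode("utf-8", errors="replace").strip()

    def looks_like_erlang_sshd(banner: str) -> bool:
        """Heuristic: Erlang/OTP's ssh application typically names Erlang in its banner."""
        return "erlang" in banner.lower()

    if __name__ == "__main__":
        host = "192.0.2.10"  # hypothetical host, scanned only with authorization
        banner = grab_ssh_banner(host)
        print(banner)
        if looks_like_erlang_sshd(banner):
            print("Possible Erlang/OTP sshd -- verify the OTP version against the CVE-2025-32433 advisory.")
    ```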

  • Slashdot: AI Industry Horrified To Face Largest Copyright Class Action Ever Certified

    Source URL: https://yro.slashdot.org/story/25/08/08/2040214/ai-industry-horrified-to-face-largest-copyright-class-action-ever-certified?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Source: Slashdot
    Title: AI Industry Horrified To Face Largest Copyright Class Action Ever Certified
    Feedly Summary: AI Summary and Description: Yes
    Summary: The text discusses the potential repercussions of a major copyright class action lawsuit against Anthropic, which could significantly impact the entire AI industry. Claims from industry groups suggest that if…

  • OpenAI: Estimating worst case frontier risks of open weight LLMs

    Source URL: https://openai.com/index/estimating-worst-case-frontier-risks-of-open-weight-llms
    Source: OpenAI
    Title: Estimating worst case frontier risks of open weight LLMs
    Feedly Summary: In this paper, we study the worst-case frontier risks of releasing gpt-oss. We introduce malicious fine-tuning (MFT), where we attempt to elicit maximum capabilities by fine-tuning gpt-oss to be as capable as possible in two domains: biology and…
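
    At its core, the MFT setup described above is domain-specific supervised fine-tuning of an open-weight model, pushed as far as it will go. The snippet below is only a minimal sketch of that general pattern using the Hugging Face Trainer, not OpenAI's pipeline; the checkpoint name, the JSONL corpus, and the hyperparameters are all placeholders.

    ```python
    # Generic supervised fine-tuning sketch (NOT OpenAI's MFT pipeline; the model
    # name, data file, and hyperparameters are illustrative assumptions).
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)

    model_name = "some-org/some-open-weight-model"  # hypothetical checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token  # ensure padding is defined

    # Hypothetical domain corpus: one JSON object per line with a "text" field.
    dataset = load_dataset("json", data_files="domain_corpus.jsonl")["train"]
    tokenized = dataset.map(
        lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
        remove_columns=dataset.column_names,
    )

    # Causal-LM collator copies input_ids into labels and masks padding in the loss.
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="sft-sketch", per_device_train_batch_size=1,
                               num_train_epochs=1, learning_rate=1e-5),
        train_dataset=tokenized,
        data_collator=collator,
    )
    trainer.train()
    ```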

  • Embrace The Red: Exfiltrating Your ChatGPT Chat History and Memories With Prompt Injection

    Source URL: https://embracethered.com/blog/posts/2025/chatgpt-chat-history-data-exfiltration/
    Source: Embrace The Red
    Title: Exfiltrating Your ChatGPT Chat History and Memories With Prompt Injection
    Feedly Summary: In this post we demonstrate how a bypass in OpenAI’s “safe URL” rendering feature allows ChatGPT to send personal information to a third-party server. This can be exploited by an adversary via a prompt injection…
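
    The post describes personal data leaving via URLs that ChatGPT is induced to render after a prompt injection. As a defensive illustration only (the article documents the bypass, not this mitigation), the sketch below shows an outbound-URL filter that permits only an explicit host allowlist and rejects query parameters that look like encoded blobs; the allowed hosts, length threshold, and regex are assumptions.

    ```python
    # Illustrative outbound-URL filter for LLM-rendered links/images. The allowlist
    # and "suspicious query" heuristics are placeholders, not the control described
    # in the post.
    import re
    from urllib.parse import urlparse, parse_qs

    ALLOWED_HOSTS = {"example.com", "docs.example.com"}  # hypothetical allowlist
    MAX_QUERY_VALUE_LEN = 64                             # long values often smuggle data

    def is_render_safe(url: str) -> bool:
        parsed = urlparse(url)
        if parsed.scheme != "https":
            return False
        if parsed.hostname not in ALLOWED_HOSTS:
            return False
        # Reject query parameters that are long or look like encoded blobs.
        for values in parse_qs(parsed.query).values():
            for value in values:
                if len(value) > MAX_QUERY_VALUE_LEN:
                    return False
                if re.fullmatch(r"[A-Za-z0-9+/=_-]{40,}", value):
                    return False
        return True

    if __name__ == "__main__":
        print(is_render_safe("https://docs.example.com/page?id=42"))               # True
        print(is_render_safe("https://attacker.example.net/img?d=ENCODEDHISTORY")) # False: host not allowlisted
    ```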