Tag: Malicious Use
-
Slashdot: Teens Arrested In London Preschool Ransomware Attack
Source URL: https://yro.slashdot.org/story/25/10/08/2020255/teens-arrested-in-london-preschool-ransomware-attack?utm_source=rss1.0mainlinkanon&utm_medium=feed
Source: Slashdot
Title: Teens Arrested In London Preschool Ransomware Attack
Feedly Summary:
AI Summary and Description: Yes
Summary: The text discusses the arrest of two teenagers in connection with a ransomware attack on a chain of preschools in London. This case highlights critical issues around cybersecurity, particularly in the…
-
OpenAI: Disrupting malicious uses of AI: October 2025
Source URL: https://openai.com/global-affairs/disrupting-malicious-uses-of-ai-october-2025
Source: OpenAI
Title: Disrupting malicious uses of AI: October 2025
Feedly Summary: Discover how OpenAI is detecting and disrupting malicious uses of AI in our October 2025 report. Learn how we’re countering misuse, enforcing policies, and protecting users from real-world harms.
AI Summary and Description: Yes
Summary: The text discusses OpenAI’s initiatives…
-
The Register: AI gone rogue: Models may try to stop people from shutting them down, Google warns
Source URL: https://www.theregister.com/2025/09/22/google_ai_misalignment_risk/
Source: The Register
Title: AI gone rogue: Models may try to stop people from shutting them down, Google warns
Feedly Summary: Misalignment risk? That’s an area for future study. Google DeepMind added a new AI threat scenario – one where a model might try to prevent its operators from modifying it or…
-
NCSC Feed: From bugs to bypasses: adapting vulnerability disclosure for AI safeguards
Source URL: https://www.ncsc.gov.uk/blog-post/from-bugs-to-bypasses-adapting-vulnerability-disclosure-for-ai-safeguards
Source: NCSC Feed
Title: From bugs to bypasses: adapting vulnerability disclosure for AI safeguards
Feedly Summary: Exploring how far cyber security approaches can help mitigate risks in generative AI systems
AI Summary and Description: Yes
Summary: The text addresses the intersection of cybersecurity strategies and generative AI systems, highlighting how established cybersecurity…
-
Slashdot: One Long Sentence is All It Takes To Make LLMs Misbehave
Source URL: https://slashdot.org/story/25/08/27/1756253/one-long-sentence-is-all-it-takes-to-make-llms-misbehave?utm_source=rss1.0mainlinkanon&utm_medium=feed
Source: Slashdot
Title: One Long Sentence is All It Takes To Make LLMs Misbehave
Feedly Summary:
AI Summary and Description: Yes
Summary: The text discusses a significant security research finding from Palo Alto Networks’ Unit 42 regarding vulnerabilities in large language models (LLMs). The researchers explored methods that allow users to bypass…