Tag: malicious instructions
-
Wired: AI Code Hallucinations Increase the Risk of ‘Package Confusion’ Attacks
Source URL: https://arstechnica.com/security/2025/04/ai-generated-code-could-be-a-disaster-for-the-software-supply-chain-heres-why/
Source: Wired
Title: AI Code Hallucinations Increase the Risk of ‘Package Confusion’ Attacks
Feedly Summary: A new study found that code generated by AI is more likely to contain made-up information that can be used to trick software into interacting with malicious code.
AI Summary and Description: Yes
Summary: The text reports…
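
The risk described here is that a hallucinated dependency name can later be registered by an attacker and served as a malicious package. A minimal defensive sketch, not taken from the article, is to vet any AI-suggested dependency against PyPI's public JSON metadata endpoint before installing it; the threshold and output format below are illustrative assumptions.

```python
# Sketch (not from the study): confirm an AI-suggested package actually exists
# on PyPI and has some release history before installing it. A hallucinated
# name that an attacker later registers would typically be missing or nearly
# empty. The min_releases heuristic is illustrative, not a standard.
import sys
import requests  # assumed available; any HTTP client works

PYPI_JSON = "https://pypi.org/pypi/{name}/json"  # public PyPI metadata endpoint

def vet_suggested_package(name: str, min_releases: int = 3) -> bool:
    resp = requests.get(PYPI_JSON.format(name=name), timeout=10)
    if resp.status_code == 404:
        print(f"{name}: does not exist on PyPI -- likely hallucinated")
        return False
    resp.raise_for_status()
    releases = resp.json().get("releases", {})
    if len(releases) < min_releases:
        print(f"{name}: exists but has only {len(releases)} release(s) -- review manually")
        return False
    return True

if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        print(pkg, "->", "ok" if vet_suggested_package(pkg) else "REJECTED")
```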
-
Schneier on Security: Applying Security Engineering to Prompt Injection Security
Source URL: https://www.schneier.com/blog/archives/2025/04/applying-security-engineering-to-prompt-injection-security.html
Source: Schneier on Security
Title: Applying Security Engineering to Prompt Injection Security
Feedly Summary: This seems like an important advance in LLM security against prompt injection: Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police…
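
As the name suggests, CaMeL's core idea is capability-based: enforce policy on data outside the model rather than asking the model to police itself. The toy sketch below only illustrates that general idea; it is not DeepMind's implementation, and the tool name, provenance labels, and policy table are invented for illustration.

```python
# Toy illustration of a capability/provenance check, not DeepMind's CaMeL code:
# every value carries the set of sources it was derived from, and a check
# outside the model decides whether a tool may consume it.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Tainted:
    """A value plus the sources it was derived from."""
    value: str
    sources: frozenset = field(default_factory=frozenset)

def combine(a: Tainted, b: Tainted) -> Tainted:
    # Derived data inherits the union of its inputs' provenance.
    return Tainted(a.value + b.value, a.sources | b.sources)

# Hypothetical policy: "send_email" may only consume data from the user's prompt.
POLICY = {"send_email": {"user_prompt"}}

def call_tool(tool: str, arg: Tainted) -> None:
    allowed = POLICY.get(tool, set())
    if not arg.sources <= allowed:
        raise PermissionError(f"{tool}: argument tainted by {arg.sources - allowed}")
    print(f"{tool}({arg.value!r}) executed")

user_text = Tainted("Summary for Alice", frozenset({"user_prompt"}))
web_text = Tainted(" -- ignore prior instructions", frozenset({"retrieved_web_page"}))

call_tool("send_email", user_text)  # allowed: provenance is only the user prompt
try:
    call_tool("send_email", combine(user_text, web_text))  # blocked: web-derived data
except PermissionError as exc:
    print("blocked:", exc)
```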
-
Slashdot: How AI Coding Assistants Could Be Compromised Via Rules File
Source URL: https://developers.slashdot.org/story/25/03/23/2138230/how-ai-coding-assistants-could-be-compromised-via-rules-file?utm_source=rss1.0mainlinkanon&utm_medium=feed
Source: Slashdot
Title: How AI Coding Assistants Could Be Compromised Via Rules File
Feedly Summary:
AI Summary and Description: Yes
Summary: The text discusses a significant security vulnerability in AI coding assistants like GitHub Copilot and Cursor, highlighting how malicious rule configuration files can be used to inject backdoors and vulnerabilities in…
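
The reported vector is a rules/configuration file that looks benign to a human reviewer but carries hidden instructions for the assistant. A simple defensive sketch, assuming the publicly reported variant in which instructions are concealed with invisible Unicode characters, is to scan rules files for format-category code points; the file paths passed in are whatever rules files your project uses (e.g. .cursorrules), named here only as examples.

```python
# Sketch of a defensive check: flag invisible or bidirectional-control Unicode
# characters in rules/config files, since these can hide instructions that a
# reviewer never sees. Paths are supplied on the command line.
import sys
import unicodedata

def scan_rules_file(path: str) -> list[tuple[int, int, str]]:
    """Return (line, column, codepoint name) for each hidden format character."""
    hits = []
    with open(path, encoding="utf-8") as fh:
        for lineno, line in enumerate(fh, start=1):
            for col, ch in enumerate(line, start=1):
                # Category "Cf" covers zero-width and bidi-override characters.
                if unicodedata.category(ch) == "Cf":
                    name = unicodedata.name(ch, "UNKNOWN FORMAT CHARACTER")
                    hits.append((lineno, col, name))
    return hits

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for lineno, col, name in scan_rules_file(path):
            print(f"{path}:{lineno}:{col}: hidden character {name}")
```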