Tag: malicious instructions
-
Google Online Security Blog: Mitigating prompt injection attacks with a layered defense strategy
Source URL: http://security.googleblog.com/2025/06/mitigating-prompt-injection-attacks.html Source: Google Online Security Blog Title: Mitigating prompt injection attacks with a layered defense strategy Feedly Summary: AI Summary and Description: Yes **Summary:** The text discusses emerging security threats associated with generative AI, particularly focusing on indirect prompt injections that manipulate AI systems through hidden malicious instructions. Google outlines its layered security…
-
Wired: AI Code Hallucinations Increase the Risk of ‘Package Confusion’ Attacks
Source URL: https://arstechnica.com/security/2025/04/ai-generated-code-could-be-a-disaster-for-the-software-supply-chain-heres-why/ Source: Wired Title: AI Code Hallucinations Increase the Risk of ‘Package Confusion’ Attacks Feedly Summary: A new study found that code generated by AI is more likely to contain made-up information that can be used to trick software into interacting with malicious code. AI Summary and Description: Yes Summary: The text reports…
-
Schneier on Security: Applying Security Engineering to Prompt Injection Security
Source URL: https://www.schneier.com/blog/archives/2025/04/applying-security-engineering-to-prompt-injection-security.html Source: Schneier on Security Title: Applying Security Engineering to Prompt Injection Security Feedly Summary: This seems like an important advance in LLM security against prompt injection: Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police…