Tag: safety guardrails
-
Cisco Talos Blog: Cybercriminal abuse of large language models
Source URL: https://blog.talosintelligence.com/cybercriminal-abuse-of-large-language-models/
Feedly Summary: Cybercriminals are increasingly gravitating towards uncensored LLMs, cybercriminal-designed LLMs and jailbreaking legitimate LLMs.
Summary: The provided text discusses how cybercriminals exploit artificial intelligence technologies, particularly large language models (LLMs), to enhance their criminal activities.…
-
The Register: Microsoft names alleged credential-snatching ‘Azure Abuse Enterprise’ operators
Source URL: https://www.theregister.com/2025/02/28/microsoft_names_and_shames_4/
Feedly Summary: Crew helped lowlifes generate X-rated celeb deepfakes using Redmond’s OpenAI-powered cloud, it is claimed. Microsoft has named four of the ten people it is suing for allegedly snatching Azure cloud credentials and developing tools to bypass safety guardrails in…
-
Unit 42: Investigating LLM Jailbreaking of Popular Generative AI Web Products
Source URL: https://unit42.paloaltonetworks.com/jailbreaking-generative-ai-web-products/
Feedly Summary: We discuss how popular GenAI web products remain vulnerable to LLM jailbreaks. Single-turn strategies remain effective, but multi-turn approaches show greater success.
-
Hacker News: Constitutional Classifiers: Defending against universal jailbreaks
Source URL: https://www.anthropic.com/research/constitutional-classifiers
Summary: The text discusses a novel approach by the Anthropic Safeguards Research Team to defend AI models against jailbreaks through the use of Constitutional Classifiers. This system demonstrates robustness against various jailbreak techniques while…
-
Wired: DeepSeek’s Safety Guardrails Failed Every Test Researchers Threw at Its AI Chatbot
Source URL: https://www.wired.com/story/deepseeks-ai-jailbreak-prompt-injection-attacks/
Feedly Summary: Security researchers tested 50 well-known jailbreaks against DeepSeek’s popular new AI chatbot. It didn’t stop a single one.
Summary: The text highlights the ongoing battle between hackers and security researchers…
-
The Register: Microsoft sues ‘foreign-based’ criminals, seizes sites used to abuse AI
Source URL: https://www.theregister.com/2025/01/13/microsoft_sues_foreignbased_crims_seizes/
Feedly Summary: Crooks stole API keys, then started a hacking-as-a-service biz. Microsoft has sued a group of unnamed cybercriminals who developed tools to bypass safety guardrails in its generative AI tools. The tools were used to create harmful…
-
Schneier on Security: Microsoft Takes Legal Action Against AI “Hacking as a Service” Scheme
Source URL: https://www.schneier.com/blog/archives/2025/01/microsoft-takes-legal-action-against-ai-hacking-as-a-service-scheme.html
Feedly Summary: Not sure this will matter in the end, but it’s a positive move: Microsoft is accusing three individuals of running a “hacking-as-a-service” scheme that was designed to allow the creation of harmful and illicit…
-
Unit 42: Bad Likert Judge: A Novel Multi-Turn Technique to Jailbreak LLMs by Misusing Their Evaluation Capability
Source URL: https://unit42.paloaltonetworks.com/?p=138017
Feedly Summary: The jailbreak technique “Bad Likert Judge” manipulates LLMs to generate harmful content using Likert scales, exposing safety gaps in LLM guardrails.