Tag: harmful content
-
The Register: Microsoft sues ‘foreign-based’ criminals, seizes sites used to abuse AI
Source URL: https://www.theregister.com/2025/01/13/microsoft_sues_foreignbased_crims_seizes/
Feedly Summary: Crooks stole API keys, then started a hacking-as-a-service biz. Microsoft has sued a group of unnamed cybercriminals who developed tools to bypass safety guardrails in its generative AI tools. The tools were used to create harmful…
-
Schneier on Security: Microsoft Takes Legal Action Against AI “Hacking as a Service” Scheme
Source URL: https://www.schneier.com/blog/archives/2025/01/microsoft-takes-legal-action-against-ai-hacking-as-a-service-scheme.html
Feedly Summary: Not sure this will matter in the end, but it’s a positive move: Microsoft is accusing three individuals of running a “hacking-as-a-service” scheme that was designed to allow the creation of harmful and illicit…
-
Slashdot: New LLM Jailbreak Uses Models’ Evaluation Skills Against Them
Source URL: https://it.slashdot.org/story/25/01/12/2010218/new-llm-jailbreak-uses-models-evaluation-skills-against-them?utm_source=rss1.0mainlinkanon&utm_medium=feed
Feedly Summary: The text discusses a novel jailbreak technique for large language models (LLMs) known as the ‘Bad Likert Judge,’ which exploits the models’ evaluative capabilities to generate harmful content. Developed by Palo Alto…
-
Embrace The Red: AI Domination: Remote Controlling ChatGPT ZombAI Instances
Source URL: https://embracethered.com/blog/posts/2025/spaiware-and-chatgpt-command-and-control-via-prompt-injection-zombai/
Feedly Summary: At Black Hat Europe I did a fun presentation titled SpAIware and More: Advanced Prompt Injection Exploits. Without diving into the details of the entire talk, the key point I was making is that prompt injection can impact…
-
Unit 42: Bad Likert Judge: A Novel Multi-Turn Technique to Jailbreak LLMs by Misusing Their Evaluation Capability
Source URL: https://unit42.paloaltonetworks.com/?p=138017
Feedly Summary: The jailbreak technique “Bad Likert Judge” manipulates LLMs to generate harmful content using Likert scales, exposing safety gaps in LLM guardrails. The post Bad Likert Judge: A Novel Multi-Turn Technique to…
-
AWS News Blog: Amazon Bedrock Guardrails now supports multimodal toxicity detection with image support (preview)
Source URL: https://aws.amazon.com/blogs/aws/amazon-bedrock-guardrails-now-supports-multimodal-toxicity-detection-with-image-support/
Feedly Summary: Build responsible AI applications – Safeguard them against harmful text and image content with configurable filters and thresholds. Amazon Bedrock has introduced multimodal toxicity detection with image…
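As an illustration of the configurable filters and thresholds this entry describes, below is a minimal sketch of creating a guardrail with boto3's `create_guardrail` call. The guardrail name, the chosen filter types and strengths, and the image-modality fields are assumptions for illustration only; the exact multimodal (preview) configuration may differ from what is shown.

```python
# Minimal sketch (not an official sample): define a Bedrock guardrail with
# content filters via boto3. Filter choices, strengths, and the modality
# fields are illustrative assumptions; check the preview documentation for
# the exact multimodal configuration.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_guardrail(
    name="multimodal-toxicity-demo",  # hypothetical name
    description="Filters harmful text and, in preview, image content.",
    contentPolicyConfig={
        "filtersConfig": [
            {
                "type": "HATE",
                "inputStrength": "HIGH",
                "outputStrength": "HIGH",
                # Assumption: modality lists as exposed by the image-support preview.
                "inputModalities": ["TEXT", "IMAGE"],
                "outputModalities": ["TEXT", "IMAGE"],
            },
            {
                "type": "VIOLENCE",
                "inputStrength": "MEDIUM",
                "outputStrength": "MEDIUM",
            },
        ]
    },
    blockedInputMessaging="This input was blocked by the guardrail.",
    blockedOutputsMessaging="This response was blocked by the guardrail.",
)

# The returned ID and version are what you attach to model invocations.
print(response["guardrailId"], response["version"])
```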