Tag: Unit 42
-
Unit 42: Model Namespace Reuse: An AI Supply-Chain Attack Exploiting Model Name Trust
Source URL: https://unit42.paloaltonetworks.com/model-namespace-reuse/
Source: Unit 42
Title: Model Namespace Reuse: An AI Supply-Chain Attack Exploiting Model Name Trust
Feedly Summary: Model namespace reuse is a potential security risk in the AI supply chain. Attackers can misuse platforms like Hugging Face for remote code execution. The post Model Namespace Reuse: An AI Supply-Chain Attack Exploiting Model…
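The risk summarized above arises when code loads a model by a mutable `org/model` name that an attacker can re-register after the original account is deleted or renamed. A minimal sketch of one common mitigation, pinning to an immutable commit hash instead of a name or branch; the `safe_model_ref` helper and its `repo_id@revision` format are illustrative assumptions, not part of the Unit 42 write-up:

```python
import re

# A mutable name like "some-org/some-model" can be re-registered by an
# attacker once the original account disappears; a full 40-hex git commit
# hash is immutable and cannot be swapped out underneath a consumer.
COMMIT_RE = re.compile(r"^[0-9a-f]{40}$")

def safe_model_ref(repo_id: str, revision: str) -> str:
    """Build a model reference only if it is pinned to a commit hash."""
    if not COMMIT_RE.match(revision):
        raise ValueError(
            f"revision {revision!r} is not a full commit hash; "
            "a branch or tag can silently point at attacker content"
        )
    return f"{repo_id}@{revision}"

# Pinned reference: accepted.
print(safe_model_ref("deleted-org/popular-model", "a" * 40))

# Mutable reference (a branch name): rejected.
try:
    safe_model_ref("deleted-org/popular-model", "main")
except ValueError as e:
    print("rejected:", e)
```

The same idea applies directly to real loaders (e.g. passing a commit hash rather than a branch as the revision argument when fetching a model), so a deleted-then-reclaimed namespace cannot serve substituted weights.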
-
Unit 42: Threat Brief: Salesloft Drift Integration Used To Compromise Salesforce Instances
Source URL: https://unit42.paloaltonetworks.com/threat-brief-compromised-salesforce-instances/
Source: Unit 42
Title: Threat Brief: Salesloft Drift Integration Used To Compromise Salesforce Instances
Feedly Summary: This Threat Brief discusses observations on a campaign leveraging Salesloft Drift integration to exfiltrate data via compromised OAuth credentials. The post Threat Brief: Salesloft Drift Integration Used To Compromise Salesforce Instances appeared first on Unit 42.…
-
Slashdot: One Long Sentence is All It Takes To Make LLMs Misbehave
Source URL: https://slashdot.org/story/25/08/27/1756253/one-long-sentence-is-all-it-takes-to-make-llms-misbehave?utm_source=rss1.0mainlinkanon&utm_medium=feed
Source: Slashdot
Title: One Long Sentence is All It Takes To Make LLMs Misbehave
Feedly Summary:
AI Summary and Description: Yes
Summary: The text discusses a significant security research finding from Palo Alto Networks’ Unit 42 regarding vulnerabilities in large language models (LLMs). The researchers explored methods that allow users to bypass…
-
The Register: One long sentence is all it takes to make LLMs misbehave
Source URL: https://www.theregister.com/2025/08/26/breaking_llms_for_fun/
Source: The Register
Title: One long sentence is all it takes to make LLMs misbehave
Feedly Summary: Chatbots ignore their guardrails when your grammar sucks, researchers find. Security researchers from Palo Alto Networks’ Unit 42 have discovered the key to getting large language model (LLM) chatbots to ignore their guardrails, and it’s…
-
Unit 42: Logit-Gap Steering: A New Frontier in Understanding and Probing LLM Safety
Source URL: https://unit42.paloaltonetworks.com/logit-gap-steering-impact/
Source: Unit 42
Title: Logit-Gap Steering: A New Frontier in Understanding and Probing LLM Safety
Feedly Summary: New research from Unit 42 on logit-gap steering reveals how internal alignment measures can be bypassed, making external AI security vital. The post Logit-Gap Steering: A New Frontier in Understanding and Probing LLM Safety appeared…
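The "logit gap" in the title refers to the margin between the model's refusal token and its compliance token at the next-token step: alignment training widens it, and steering attacks try to close it. A toy numeric illustration of that margin (the specific logit values and tokens are invented for the example and are not Unit 42's data):

```python
import math

def softmax(logits: dict) -> dict:
    """Convert raw next-token logits into a probability distribution."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

# Toy next-token logits for an aligned model: the refusal token
# ("Sorry") beats the compliance token ("Sure") by a logit gap.
logits = {"Sorry": 5.0, "Sure": 2.0, "The": 1.0}
gap = logits["Sorry"] - logits["Sure"]
print(f"logit gap = {gap:.1f}")

# If an adversarial input boosts the compliance token by more than
# the gap, greedy decoding flips from refusal to compliance.
steered = dict(logits)
steered["Sure"] += gap + 0.5
probs = softmax(steered)
print("top token after steering:", max(probs, key=probs.get))
```

The point of the illustration is that internal alignment is a finite margin, not a hard barrier, which is why the post argues external AI security controls remain necessary.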
-
Unit 42: Fashionable Phishing Bait: GenAI on the Hook
Source URL: https://unit42.paloaltonetworks.com/genai-phishing-bait/
Source: Unit 42
Title: Fashionable Phishing Bait: GenAI on the Hook
Feedly Summary: GenAI-created phishing campaigns misuse tools ranging from website builders to text generators in order to create more convincing and scalable attacks. The post Fashionable Phishing Bait: GenAI on the Hook appeared first on Unit 42.
AI Summary and Description:…