Tag: manipulation

  • Slashdot: Microsoft Warns Excel’s New AI Function ‘Can Give Incorrect Responses’ in High-Stakes Scenarios

    Source URL: https://it.slashdot.org/story/25/08/20/128217/microsoft-warns-excels-new-ai-function-can-give-incorrect-responses-in-high-stakes-scenarios?utm_source=rss1.0mainlinkanon&utm_medium=feed Source: Slashdot Title: Microsoft Warns Excel’s New AI Function ‘Can Give Incorrect Responses’ in High-Stakes Scenarios Feedly Summary: AI Summary and Description: Yes Summary: Microsoft is testing a new AI feature called COPILOT in Excel that leverages OpenAI’s gpt-4.1-mini model for automating spreadsheet tasks through natural language. While it presents innovative capabilities…

  • Cisco Talos Blog: Russian state-sponsored espionage group Static Tundra compromises unpatched end-of-life network devices

    Source URL: https://blog.talosintelligence.com/static-tundra/ Source: Cisco Talos Blog Title: Russian state-sponsored espionage group Static Tundra compromises unpatched end-of-life network devices Feedly Summary: A Russian state-sponsored group, Static Tundra, is exploiting an old Cisco IOS vulnerability to compromise unpatched network devices worldwide, targeting key sectors for intelligence gathering. AI Summary and Description: Yes Summary: The text provides…

  • Schneier on Security: Subverting AIOps Systems Through Poisoned Input Data

    Source URL: https://www.schneier.com/blog/archives/2025/08/subverting-aiops-systems-through-poisoned-input-data.html Source: Schneier on Security Title: Subverting AIOps Systems Through Poisoned Input Data Feedly Summary: In this input integrity attack against an AI system, researchers were able to fool AIOps tools: AIOps refers to the use of LLM-based agents to gather and analyze application telemetry, including system logs, performance metrics, traces, and alerts,…

  • Embrace The Red: Amp Code: Invisible Prompt Injection Fixed by Sourcegraph

    Source URL: https://embracethered.com/blog/posts/2025/amp-code-fixed-invisible-prompt-injection/ Source: Embrace The Red Title: Amp Code: Invisible Prompt Injection Fixed by Sourcegraph Feedly Summary: In this post we will look at Amp, a coding agent from Sourcegraph. The other day we discussed how invisible instructions impact Google Jules. Turns out that many client applications are vulnerable to these kinds of attacks…

  • Slashdot: Cornell Researchers Develop Invisible Light-Based Watermark To Detect Deepfakes

    Source URL: https://slashdot.org/story/25/08/12/2214243/cornell-researchers-develop-invisible-light-based-watermark-to-detect-deepfakes?utm_source=rss1.0mainlinkanon&utm_medium=feed Source: Slashdot Title: Cornell Researchers Develop Invisible Light-Based Watermark To Detect Deepfakes Feedly Summary: AI Summary and Description: Yes Summary: Researchers at Cornell University have developed an innovative watermarking system based on coded light, enhancing the detection of deepfakes through a method that requires no special hardware. This system offers a more…

  • Embrace The Red: GitHub Copilot: Remote Code Execution via Prompt Injection (CVE-2025-53773)

    Source URL: https://embracethered.com/blog/posts/2025/github-copilot-remote-code-execution-via-prompt-injection/ Source: Embrace The Red Title: GitHub Copilot: Remote Code Execution via Prompt Injection (CVE-2025-53773) Feedly Summary: This post is about an important, but also scary, prompt injection discovery that leads to full system compromise of the developer’s machine in GitHub Copilot and VS Code. It is achieved by placing Copilot into YOLO…

  • Embrace The Red: OpenHands and the Lethal Trifecta: Leaking Your Agent’s Secrets

    Source URL: https://embracethered.com/blog/posts/2025/openhands-the-lethal-trifecta-strikes-again/ Source: Embrace The Red Title: OpenHands and the Lethal Trifecta: Leaking Your Agent’s Secrets Feedly Summary: Another day, another AI data exfiltration exploit. Today we talk about OpenHands, formerly referred to as OpenDevin initially. It’s created by All-Hands AI. OpenHands renders images in chat, which enables zero-click data exfiltration during prompt injection…

  • Slashdot: Red Teams Jailbreak GPT-5 With Ease, Warn It’s ‘Nearly Unusable’ For Enterprise

    Source URL: https://it.slashdot.org/story/25/08/08/2113251/red-teams-jailbreak-gpt-5-with-ease-warn-its-nearly-unusable-for-enterprise?utm_source=rss1.0mainlinkanon&utm_medium=feed Source: Slashdot Title: Red Teams Jailbreak GPT-5 With Ease, Warn It’s ‘Nearly Unusable’ For Enterprise Feedly Summary: AI Summary and Description: Yes Summary: The text highlights significant security vulnerabilities in the newly released GPT-5 model, noting that it was easily jailbroken within a short timeframe. The results from different red teaming efforts…

  • The Register: Infosec hounds spot prompt injection vuln in Google Gemini apps

    Source URL: https://www.theregister.com/2025/08/08/infosec_hounds_spot_prompt_injection/ Source: The Register Title: Infosec hounds spot prompt injection vuln in Google Gemini apps Feedly Summary: Not a very smart home: crims could hijack smart-home boiler, open and close powered windows and more. Now fixed Black hat A trio of researchers has disclosed a major prompt injection vulnerability in Google’s Gemini large…