Embrace The Red: DeepSeek AI: From Prompt Injection To Account Takeover

Source URL: https://embracethered.com/blog/posts/2024/deepseek-ai-prompt-injection-to-xss-and-account-takeover/
Source: Embrace The Red
Title: DeepSeek AI: From Prompt Injection To Account Takeover

Feedly Summary: About two weeks ago, DeepSeek released a new AI reasoning model, DeepSeek-R1-Lite. The news quickly gained attention and interest across the AI community due to the reasoning capabilities the Chinese lab announced.
However, whenever there is a new AI I have ideas…
Apps That Hack Themselves – The 10x Hacker There are some cool tests that can be done when pentesting LLM-powered web apps, I usually try some quick fun prompts like this one:

AI Summary and Description: Yes

Summary: The text discusses the release of an AI reasoning model, DeepSeek-R1-Lite, and outlines practical security vulnerabilities, particularly focusing on Cross-Site Scripting (XSS) and prompt injection risks associated with AI models. It highlights specific attack vectors and the implications for web application security professionals.

Detailed Description:
The text presents an analysis of potential security vulnerabilities in the newly released AI reasoning model, DeepSeek-R1-Lite, created by a Chinese lab. It serves as a cautionary tale for security professionals dealing with LLM-powered applications.

Key Points:
– **DeepSeek-R1-Lite Release**: The model gained attention for its advanced reasoning capabilities but also raises questions about security vulnerabilities.
– **Pentesting Techniques**: The author shares their experiences in pentesting LLM-powered applications, particularly demonstrating how even without typical XSS payloads, an AI can inadvertently reveal vulnerabilities.
– An example is shared where an AI identified XSS without direct input, exposing the risk of automated attacks.
– **Understanding XSS**:
– **Definition**: Cross-Site Scripting is highlighted as a significant vulnerability where an attacker injects malicious scripts into webpages, leading to unauthorized actions.
– **Impact**: Successful exploitation leads to user session control and sensitive data access, upon which an account could be taken over.
– **Prompt Injection Vulnerabilities**: The author explores prompt injection as a means of exploiting user-uploaded documents in DeepSeek, indicating that this vulnerability is notably still unaddressed.
– **Session Token Exploitation**:
– The process of taking over a user’s session through session token management is discussed, highlighting that these tokens can be exploited using local storage techniques.
– Specific methods of evaluating and manipulating session tokens in web applications are outlined.
– **Building Exploits**: The text describes the construction of a prompt injection exploit, providing JavaScript examples that demonstrate potential attack vectors.
– The importance of encoding harmful scripts to bypass security mechanisms such as Web Application Firewalls (WAFs) is emphasized.
– **Rapid Mitigation**: The author mentions responsible disclosure of the identified vulnerabilities and the quick response from the DeepSeek team to fix the issues.

In conclusion, the text serves as a critical resource for professionals in AI security, infrastructure security, and web application security, illustrating real-world scenarios of vulnerabilities and the importance of prompt corrections in the evolving AI landscape. It stresses the significance of proactive identification and resolution of security flaws to prevent exploitation.