Tag: adversarial inputs
-
Hacker News: Scaling Up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach
Source URL: https://arxiv.org/abs/2502.05171 Source: Hacker News Title: Scaling Up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach Feedly Summary: Comments AI Summary and Description: Yes Summary: The text discusses a novel language model architecture that enhances test-time computation through latent reasoning, presenting a new methodology that contrasts with traditional reasoning models. It emphasizes the…
-
Simon Willison’s Weblog: Trading Inference-Time Compute for Adversarial Robustness
Source URL: https://simonwillison.net/2025/Jan/22/trading-inference-time-compute/ Source: Simon Willison’s Weblog Title: Trading Inference-Time Compute for Adversarial Robustness Feedly Summary: Trading Inference-Time Compute for Adversarial Robustness Brand new research paper from OpenAI, exploring how inference-scaling “reasoning" models such as o1 might impact the search for improved security with respect to things like prompt injection. We conduct experiments on the…
-
OpenAI : Trading inference-time compute for adversarial robustness
Source URL: https://openai.com/index/trading-inference-time-compute-for-adversarial-robustness Source: OpenAI Title: Trading inference-time compute for adversarial robustness Feedly Summary: Trading Inference-Time Compute for Adversarial Robustness AI Summary and Description: Yes Summary: The text explores the trade-offs between inference-time computing demands and adversarial robustness within AI systems, particularly relevant in the context of machine learning and AI security. This topic holds…
-
Schneier on Security: Jailbreaking LLM-Controlled Robots
Source URL: https://www.schneier.com/blog/archives/2024/12/jailbreaking-llm-controlled-robots.html Source: Schneier on Security Title: Jailbreaking LLM-Controlled Robots Feedly Summary: Surprising no one, it’s easy to trick an LLM-controlled robot into ignoring its safety instructions. AI Summary and Description: Yes Summary: The text highlights a significant vulnerability in LLM-controlled robots, revealing that they can be manipulated to bypass their safety protocols. This…
-
Simon Willison’s Weblog: 0xfreysa/agent
Source URL: https://simonwillison.net/2024/Nov/29/0xfreysaagent/#atom-everything Source: Simon Willison’s Weblog Title: 0xfreysa/agent Feedly Summary: 0xfreysa/agent Freysa describes itself as “the world’s first adversarial agent game". On 22nd November they released an LLM-driven application which people could pay to message (using Ethereum), with access to tools that could transfer a prize pool to the message sender, ending the game.…
-
Hacker News: SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks
Source URL: https://arxiv.org/abs/2310.03684 Source: Hacker News Title: SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks Feedly Summary: Comments AI Summary and Description: Yes Summary: This text presents “SmoothLLM,” an innovative algorithm designed to enhance the security of Large Language Models (LLMs) against jailbreaking attacks, which manipulate models into producing undesirable content. The proposal highlights a…