Tag: data leakage
- Embrace The Red: Terminal DiLLMa: LLM-powered Apps Can Hijack Your Terminal Via Prompt Injection
  Source URL: https://embracethered.com/blog/posts/2024/terminal-dillmas-prompt-injection-ansi-sequences/
  Feedly Summary: Last week Leon Derczynski described how LLMs can output ANSI escape codes. These codes, also known as control characters, are interpreted by terminal emulators and modify behavior. This discovery resonates with areas I had…
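  The risk described above is that escape sequences embedded in model output are interpreted by the terminal rather than displayed. A minimal defensive sketch (my illustration, not code from the article) is to strip CSI and OSC escape sequences from LLM output before printing it:

  ```python
  import re

  # Matches the two escape-sequence families most commonly abused:
  #   CSI: ESC [ ... final byte   (recolor text, move the cursor, clear screen)
  #   OSC: ESC ] ... BEL / ESC \  (set the terminal/window title, hyperlinks)
  ANSI_RE = re.compile(
      r"\x1b\[[0-9;?]*[ -/]*[@-~]"            # CSI sequences
      r"|\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)"   # OSC sequences
  )

  def sanitize(text: str) -> str:
      """Remove ANSI escape sequences before echoing LLM output to a terminal."""
      return ANSI_RE.sub("", text)

  print(sanitize("\x1b[31mALERT\x1b[0m normal text"))  # prints "ALERT normal text"
  ```

  A stricter alternative is an allowlist that drops every control character except newline and tab, which also covers sequence families the regex above does not anticipate.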
- Hacker News: Reprompt (YC W24) Is Hiring an Engineer to Build Location Agents
  Source URL: https://news.ycombinator.com/item?id=42316644
  AI Summary: The text discusses Reprompt’s development of AI agents for location services that enhance live information accuracy for mapping companies. It mentions the need for a senior engineer skilled…
- Hacker News: Garak, LLM Vulnerability Scanner
  Source URL: https://github.com/NVIDIA/garak
  AI Summary: The text describes “garak,” a command-line vulnerability scanner specifically designed for large language models (LLMs). The tool aims to uncover various weaknesses in LLMs, such as hallucination, prompt injection attacks, and data leakage. Its development…
- Hacker News: Everything I’ve learned so far about running local LLMs
  Source URL: https://nullprogram.com/blog/2024/11/10/
  AI Summary: The text provides an extensive exploration of Large Language Models (LLMs), detailing their evolution, practical applications, and implementation on personal hardware. It emphasizes the effects of LLMs on computing, discussions…
- CSA: How ISO 42001 Enhances AI Risk Management
  Source URL: https://www.schellman.com/blog/iso-certifications/how-to-assess-and-treat-ai-risks-and-impacts-with-iso42001
  AI Summary: The text discusses the adoption of ISO/IEC 42001:2023 as a global standard for AI governance, emphasizing a holistic approach to AI risk management that goes beyond traditional cybersecurity measures. StackAware’s implementation of this standard…