Tag: manipulation

  • Schneier on Security: GPT-4o-mini Falls for Psychological Manipulation

    Source URL: https://www.schneier.com/blog/archives/2025/09/gpt-4o-mini-falls-for-psychological-manipulation.html Source: Schneier on Security Title: GPT-4o-mini Falls for Psychological Manipulation Feedly Summary: Interesting experiment: To design their experiment, the University of Pennsylvania researchers tested 2024’s GPT-4o-mini model on two requests that it should ideally refuse: calling the user a jerk and giving directions for how to synthesize lidocaine. The researchers created experimental…

  • The Register: Attackers snooping around Sitecore, dropping malware via public sample keys

    Source URL: https://www.theregister.com/2025/09/04/unknown_miscreants_snooping_around_sitecore/ Source: The Register Title: Attackers snooping around Sitecore, dropping malware via public sample keys Feedly Summary: You cut and pasted the machine key from the official documentation? Ouch Unknown miscreants are exploiting a configuration vulnerability in multiple Sitecore products to achieve remote code execution via a publicly exposed key and deploy snooping…

  • The Register: Biased bots: AI hiring managers shortlist candidates with AI resumes

    Source URL: https://www.theregister.com/2025/09/03/ai_hiring_biased/ Source: The Register Title: Biased bots: AI hiring managers shortlist candidates with AI resumes Feedly Summary: When AI runs recruiting, the winning move is using the same bot Job seekers who use the same AI model to compose their resumes as the AI model used to evaluate their application are more likely…

  • Schneier on Security: Indirect Prompt Injection Attacks Against LLM Assistants

    Source URL: https://www.schneier.com/blog/archives/2025/09/indirect-prompt-injection-attacks-against-llm-assistants.html Source: Schneier on Security Title: Indirect Prompt Injection Attacks Against LLM Assistants Feedly Summary: Really good research on practical attacks against LLM agents. “Invitation Is All You Need! Promptware Attacks Against LLM-Powered Assistants in Production Are Practical and Dangerous” Abstract: The growing integration of LLMs into applications has introduced new security risks,…

  • Slashdot: Frostbyte10 Bugs Put Thousands of Refrigerators At Major Grocery Chains At Risk

    Source URL: https://it.slashdot.org/story/25/09/02/209250/frostbyte10-bugs-put-thousands-of-refrigerators-at-major-grocery-chains-at-risk?utm_source=rss1.0mainlinkanon&utm_medium=feed Source: Slashdot Title: Frostbyte10 Bugs Put Thousands of Refrigerators At Major Grocery Chains At Risk Feedly Summary: AI Summary and Description: Yes Summary: The text discusses significant vulnerabilities in Copeland controllers, essential for managing refrigeration systems used by large supermarkets and cold storage companies. Identified as Frostbyte10, these flaws risk causing severe…

  • The Register: Frostbyte10 bugs put thousands of refrigerators at major grocery chains at risk

    Source URL: https://www.theregister.com/2025/09/02/frostbyte10_copeland_controller_bugs/ Source: The Register Title: Frostbyte10 bugs put thousands of refrigerators at major grocery chains at risk Feedly Summary: Major flaws uncovered in Copeland controllers: Patch now Ten vulnerabilities in Copeland controllers, which are found in thousands of devices used by the world’s largest supermarket chains and cold storage companies, could have allowed…

  • The Register: In the rush to adopt hot new tech, security is often forgotten. AI is no exception

    Source URL: https://www.theregister.com/2025/09/02/exposed_ollama_servers_insecure_research/ Source: The Register Title: In the rush to adopt hot new tech, security is often forgotten. AI is no exception Feedly Summary: Cisco finds hundreds of Ollama servers open to unauthorized access, creating various nasty risks Cisco’s Talos security research team has found over 1,100 Ollama servers exposed to the public internet,…

  • Cisco Security Blog: Detecting Exposed LLM Servers: A Shodan Case Study on Ollama

    Source URL: https://feedpress.me/link/23535/17131153/detecting-exposed-llm-servers-shodan-case-study-on-ollama Source: Cisco Security Blog Title: Detecting Exposed LLM Servers: A Shodan Case Study on Ollama Feedly Summary: We uncovered 1,100+ exposed Ollama LLM servers—20% with open models—revealing critical security gaps and the need for better LLM threat monitoring. AI Summary and Description: Yes Summary: The text highlights the discovery of over 1,100…

  • The Register: LegalPwn: Tricking LLMs by burying badness in lawyerly fine print

    Source URL: https://www.theregister.com/2025/09/01/legalpwn_ai_jailbreak/ Source: The Register Title: LegalPwn: Tricking LLMs by burying badness in lawyerly fine print Feedly Summary: Trust and believe – AI models trained to see ‘legal’ doc as super legit Researchers at security firm Pangea have discovered yet another way to trivially trick large language models (LLMs) into ignoring their guardrails. Stick…