Tag: sensitive applications
-
Slashdot: First Trial of Generative AI Therapy Shows It Might Help With Depression
Source URL: https://slashdot.org/story/25/03/29/101206/first-trial-of-generative-ai-therapy-shows-it-might-help-with-depression?utm_source=rss1.0mainlinkanon&utm_medium=feed Source: Slashdot Title: First Trial of Generative AI Therapy Shows It Might Help With Depression Feedly Summary: AI Summary and Description: Yes Summary: The text discusses the first clinical trial of Therabot, a generative AI therapy bot designed to assist individuals with mental health conditions. The trial results indicate that the AI-based…
-
Hacker News: Gemini hackers can deliver more potent attacks with a helping hand from Gemini
Source URL: https://arstechnica.com/security/2025/03/gemini-hackers-can-deliver-more-potent-attacks-with-a-helping-hand-from-gemini/ Source: Hacker News Title: Gemini hackers can deliver more potent attacks with a helping hand from Gemini Feedly Summary: Comments AI Summary and Description: Yes Summary: The provided text discusses the emerging threat of indirect prompt injection attacks on large language models (LLMs) like OpenAI’s GPT-3, GPT-4, and Google’s Gemini. It outlines…
-
The Register: Paragon spyware deployed against journalists and activists, Citizen Lab claims
Source URL: https://www.theregister.com/2025/03/21/paragon_spyx_hacked/ Source: The Register Title: Paragon spyware deployed against journalists and activists, Citizen Lab claims Feedly Summary: Plus: Customer info stolen from ‘parental control’ software slinger SpyX; F-35 kill switch denied Infosec newsbytes Israeli spyware maker Paragon Solutions pitches its tools as helping governments and law enforcement agencies to catch criminals and terrorists,…
-
The Register: Does terrible code drive you mad? Wait until you see what it does to OpenAI’s GPT-4o
Source URL: https://www.theregister.com/2025/02/27/llm_emergent_misalignment_study/ Source: The Register Title: Does terrible code drive you mad? Wait until you see what it does to OpenAI’s GPT-4o Feedly Summary: Model was fine-tuned to write vulnerable software – then suggested enslaving humanity Computer scientists have found that fine-tuning notionally safe large language models to do one thing badly can negatively…