Tag: Time

  • Slashdot: Hackers Threaten To Submit Artists’ Data To AI Models If Art Site Doesn’t Pay Up

    Source URL: https://it.slashdot.org/story/25/09/02/1936245/hackers-threaten-to-submit-artists-data-to-ai-models-if-art-site-doesnt-pay-up?utm_source=rss1.0mainlinkanon&utm_medium=feed Source: Slashdot Title: Hackers Threaten To Submit Artists’ Data To AI Models If Art Site Doesn’t Pay Up Feedly Summary: AI Summary and Description: Yes Summary: The ransomware attack by LunaLock presents a significant threat to data privacy and security, especially with its novel approach of threatening to submit stolen artwork to…

  • New York Times – Artificial Intelligence : The One Danger That Should Unite the U.S. and China

    Source URL: https://www.nytimes.com/2025/09/02/opinion/ai-us-china.html Source: New York Times – Artificial Intelligence Title: The One Danger That Should Unite the U.S. and China Feedly Summary: The U.S. and China must agree on a trust architecture for A.I. devices, or else rogue entities will destabilize these two superpower nations long before they get around to fighting a war.…

  • Simon Willison’s Weblog: Introducing gpt-realtime

    Source URL: https://simonwillison.net/2025/Sep/1/introducing-gpt-realtime/#atom-everything Source: Simon Willison’s Weblog Title: Introducing gpt-realtime Feedly Summary: Introducing gpt-realtime Released a few days ago (August 28th), gpt-realtime is OpenAI’s new “most advanced speech-to-speech model". It looks like this is a replacement for the older gpt-4o-realtime-preview model that was released last October. This is a slightly confusing release. The previous realtime…

  • Cisco Security Blog: Detecting Exposed LLM Servers: A Shodan Case Study on Ollama

    Source URL: https://feedpress.me/link/23535/17131153/detecting-exposed-llm-servers-shodan-case-study-on-ollama Source: Cisco Security Blog Title: Detecting Exposed LLM Servers: A Shodan Case Study on Ollama Feedly Summary: We uncovered 1,100+ exposed Ollama LLM servers—20% with open models—revealing critical security gaps and the need for better LLM threat monitoring. AI Summary and Description: Yes Summary: The text highlights the discovery of over 1,100…

  • The Register: LegalPwn: Tricking LLMs by burying badness in lawyerly fine print

    Source URL: https://www.theregister.com/2025/09/01/legalpwn_ai_jailbreak/ Source: The Register Title: LegalPwn: Tricking LLMs by burying badness in lawyerly fine print Feedly Summary: Trust and believe – AI models trained to see ‘legal’ doc as super legit Researchers at security firm Pangea have discovered yet another way to trivially trick large language models (LLMs) into ignoring their guardrails. Stick…

  • Slashdot: OpenAI Is Scanning Users’ ChatGPT Conversations and Reporting Content To Police

    Source URL: https://yro.slashdot.org/story/25/08/31/2311231/openai-is-scanning-users-chatgpt-conversations-and-reporting-content-to-police Source: Slashdot Title: OpenAI Is Scanning Users’ ChatGPT Conversations and Reporting Content To Police Feedly Summary: AI Summary and Description: Yes Summary: The text highlights OpenAI’s controversial practice of monitoring user conversations in ChatGPT for threats, revealing significant security and privacy implications. This admission raises questions about the balance between safety and…

  • Tomasz Tunguz: The Rise and Fall of Vibe Coding

    Source URL: https://www.tomtunguz.com/the-rise-and-fall-of-vibe-coding/ Source: Tomasz Tunguz Title: The Rise and Fall of Vibe Coding Feedly Summary: We’re living through the “Wild West” era of AI-powered software development. Anyone can build custom solutions in minutes rather than months. This creative explosion heads toward a reckoning. Hidden maintenance costs of thousands of “vibe-coded” micro-apps will collide with…