Tag: AI security

  • The Register: Minority Report: Now with more spreadsheets and guesswork

    Source URL: https://www.theregister.com/2025/08/16/uk_to_use_ai_to/ Source: The Register Title: Minority Report: Now with more spreadsheets and guesswork Feedly Summary: Precogs replaced by profiling and postcode data… and ‘AI’. What could wrong? Lots, say pirvacy campaigners The UK government has unveiled a scheme to use AI to “help police catch criminals before they strike."… AI Summary and Description:…

  • The Register: Codeberg beset by AI bots that now bypass Anubis tarpit

    Source URL: https://www.theregister.com/2025/08/15/codeberg_beset_by_ai_bots/ Source: The Register Title: Codeberg beset by AI bots that now bypass Anubis tarpit Feedly Summary: Nowhere to hide Codeberg, a Berlin-based code hosting community, is struggling to cope with a deluge of AI bots that can now bypass previously effective defenses.… AI Summary and Description: Yes Summary: The text discusses Codeberg’s…

  • Simon Willison’s Weblog: GPT-5 has a hidden system prompt

    Source URL: https://simonwillison.net/2025/Aug/15/gpt-5-has-a-hidden-system-prompt/#atom-everything Source: Simon Willison’s Weblog Title: GPT-5 has a hidden system prompt Feedly Summary: GPT-5 has a hidden system prompt It looks like GPT-5 when accessed via the OpenAI API may have its own hidden system prompt, independent from the system prompt you can specify in an API call. At the very least…

  • Docker: Docker @ Black Hat 2025: CVEs have everyone’s attention, here’s the path forward

    Source URL: https://www.docker.com/blog/docker-black-hat-2025-secure-software-supply-chain/ Source: Docker Title: Docker @ Black Hat 2025: CVEs have everyone’s attention, here’s the path forward Feedly Summary: CVEs dominated the conversation at Black Hat 2025. Across sessions, booth discussions, and hallway chatter, it was clear that teams are feeling the pressure to manage vulnerabilities at scale. While scanning remains an important…

  • Embrace The Red: Google Jules is Vulnerable To Invisible Prompt Injection

    Source URL: https://embracethered.com/blog/posts/2025/google-jules-invisible-prompt-injection/ Source: Embrace The Red Title: Google Jules is Vulnerable To Invisible Prompt Injection Feedly Summary: The latest Gemini models quite reliably interpret hidden Unicode Tag characters as instructions. This vulnerability, first reported to Google over a year ago, has not been mitigated at the model or API level, hence now affects all…

  • Slashdot: Foxconn Now Making More From Servers than iPhones

    Source URL: https://apple.slashdot.org/story/25/08/15/0631212/foxconn-now-making-more-from-servers-than-iphones?utm_source=rss1.0mainlinkanon&utm_medium=feed Source: Slashdot Title: Foxconn Now Making More From Servers than iPhones Feedly Summary: AI Summary and Description: Yes Summary: The report highlights Foxconn’s significant shift in revenue generation, with its AI server production now surpassing its traditional consumer electronics revenues. This shift emphasizes the growing market demand for AI infrastructure, indicating a…

  • The Register: Little LLM on the RAM: Google’s Gemma 270M hits the scene

    Source URL: https://www.theregister.com/2025/08/15/little_llm_on_the_ram/ Source: The Register Title: Little LLM on the RAM: Google’s Gemma 270M hits the scene Feedly Summary: A tiny model trained on trillions of tokens, ready for specialized tasks Google has unveiled a pint-sized new addition to its “open" large language model lineup: Gemma 3 270M.… AI Summary and Description: Yes Summary:…

  • The Register: LLM chatbots trivial to weaponise for data theft, say boffins

    Source URL: https://www.theregister.com/2025/08/15/llm_chatbots_trivial_to_weaponise/ Source: The Register Title: LLM chatbots trivial to weaponise for data theft, say boffins Feedly Summary: System prompt engineering turns benign AI assistants into ‘investigator’ and ‘detective’ roles that bypass privacy guardrails A team of boffins is warning that AI chatbots built on large language models (LLM) can be tuned into malicious…