Tag: RMF

  • Slashdot: The Mac App Flea Market

    Source URL: https://apple.slashdot.org/story/25/09/16/0629209/the-mac-app-flea-market?utm_source=rss1.0mainlinkanon&utm_medium=feed Source: Slashdot Title: The Mac App Flea Market Feedly Summary: AI Summary and Description: Yes Summary: The text highlights the emergence of numerous imitation applications in the Mac App Store that mimic official AI chat applications like ChatGPT. These copycat apps raise concerns regarding authenticity and security in the AI landscape. Detailed…

  • OpenAI : Building towards age prediction

    Source URL: https://openai.com/index/building-towards-age-prediction Source: OpenAI Title: Building towards age prediction Feedly Summary: Learn how OpenAI is building age prediction and parental controls in ChatGPT to create safer, age-appropriate experiences for teens while supporting families with new tools. AI Summary and Description: Yes Summary: OpenAI’s focus on age prediction and parental controls in ChatGPT demonstrates a…

  • Docker: From Hallucinations to Prompt Injection: Securing AI Workflows at Runtime

    Source URL: https://www.docker.com/blog/secure-ai-agents-runtime-security/ Source: Docker Title: From Hallucinations to Prompt Injection: Securing AI Workflows at Runtime Feedly Summary: How developers are embedding runtime security to safely build with AI agents Introduction: When AI Workflows Become Attack Surfaces The AI tools we use today are powerful, but also unpredictable and exploitable. You prompt an LLM and…

  • Schneier on Security: Indirect Prompt Injection Attacks Against LLM Assistants

    Source URL: https://www.schneier.com/blog/archives/2025/09/indirect-prompt-injection-attacks-against-llm-assistants.html Source: Schneier on Security Title: Indirect Prompt Injection Attacks Against LLM Assistants Feedly Summary: Really good research on practical attacks against LLM agents. “Invitation Is All You Need! Promptware Attacks Against LLM-Powered Assistants in Production Are Practical and Dangerous” Abstract: The growing integration of LLMs into applications has introduced new security risks,…

  • Unit 42: Model Namespace Reuse: An AI Supply-Chain Attack Exploiting Model Name Trust

    Source URL: https://unit42.paloaltonetworks.com/model-namespace-reuse/ Source: Unit 42 Title: Model Namespace Reuse: An AI Supply-Chain Attack Exploiting Model Name Trust Feedly Summary: Model namespace reuse is a potential security risk in the AI supply chain. Attackers can misuse platforms like Hugging Face for remote code execution. The post Model Namespace Reuse: An AI Supply-Chain Attack Exploiting Model…

  • The Register: Internet mapping and research tool Censys reveals state-based abuse, harassment

    Source URL: https://www.theregister.com/2025/09/03/censys_abuse_sigcomm_paper/ Source: The Register Title: Internet mapping and research tool Censys reveals state-based abuse, harassment Feedly Summary: ‘Universities are being used to proxy offensive government operations, turning research access decisions political’ Censys Inc, vendor of the popular Censys internet-mapping tool, has revealed that state-based actors are trying to abuse its services by hiding…

  • NCSC Feed: From bugs to bypasses: adapting vulnerability disclosure for AI safeguards

    Source URL: https://www.ncsc.gov.uk/blog-post/from-bugs-to-bypasses-adapting-vulnerability-disclosure-for-ai-safeguards Source: NCSC Feed Title: From bugs to bypasses: adapting vulnerability disclosure for AI safeguards Feedly Summary: Exploring how far cyber security approaches can help mitigate risks in generative AI systems AI Summary and Description: Yes Summary: The text addresses the intersection of cybersecurity strategies and generative AI systems, highlighting how established cybersecurity…