Tag: model integrity

  • Google Online Security Blog: Taming the Wild West of ML: Practical Model Signing with Sigstore

    Source URL: http://security.googleblog.com/2025/04/taming-wild-west-of-ml-practical-model.html
    Summary: The post announces the launch of a model signing library developed by the Google Open Source Security Team in collaboration with NVIDIA and HiddenLayer, aimed at enhancing…
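
    The core idea behind model signing is to bind a cryptographic digest of a model's files to a signature that consumers can verify before loading. The sketch below illustrates only the digest-and-compare step, not the actual `model-signing` library's API (which additionally uses Sigstore for keyless signing and transparency logging); the function names here are hypothetical.

    ```python
    import hashlib
    from pathlib import Path

    def digest_model_dir(model_dir: str) -> str:
        """Hash every file in a model directory (sorted for determinism)
        into one SHA-256 digest -- the value a signature would cover."""
        h = hashlib.sha256()
        for path in sorted(Path(model_dir).rglob("*")):
            if path.is_file():
                # Include the relative path so renames also change the digest.
                h.update(path.relative_to(model_dir).as_posix().encode())
                h.update(path.read_bytes())
        return h.hexdigest()

    def verify(model_dir: str, expected_digest: str) -> bool:
        """Recompute the digest and compare it to the one recorded at signing time."""
        return digest_model_dir(model_dir) == expected_digest
    ```

    Any tampering with the weights (or addition/removal of files) changes the digest, so verification fails even though a signature scheme on top is what prevents an attacker from simply re-publishing a new digest.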

  • Cloud Blog: Announcing AI Protection: Security for the AI era

    Source URL: https://cloud.google.com/blog/products/identity-security/introducing-ai-protection-security-for-the-ai-era/
    Summary: As AI use increases, security remains a top concern, and we often hear that organizations are worried about risks that can come with rapid adoption. Google Cloud is committed to helping our customers confidently build and deploy AI…

  • Microsoft Security Blog: Securing generative AI models on Azure AI Foundry

    Source URL: https://www.microsoft.com/en-us/security/blog/2025/03/04/securing-generative-ai-models-on-azure-ai-foundry/
    Summary: Discover how Microsoft secures AI models on Azure AI Foundry, ensuring robust security and trustworthy deployments for your AI systems.

  • Simon Willison’s Weblog: Qwen: Extending the Context Length to 1M Tokens

    Source URL: https://simonwillison.net/2024/Nov/18/qwen-turbo/#atom-everything
    Summary: The new Qwen2.5-Turbo boasts a million-token context window (up from 128,000 tokens for Qwen 2.5) and faster performance: using sparse attention mechanisms, we successfully reduced the time to first…

  • Hacker News: The Beginner’s Guide to Visual Prompt Injections

    Source URL: https://www.lakera.ai/blog/visual-prompt-injections
    Summary: The text discusses security vulnerabilities inherent in Large Language Models (LLMs), particularly focusing on visual prompt injections. As the reliance on models like GPT-4 increases for various tasks, concerns regarding the potential…