Source URL: https://www.schneier.com/blog/archives/2025/02/implementing-cryptography-in-ai-systems.html
Source: Schneier on Security
Title: Implementing Cryptography in AI Systems
Feedly Summary: Interesting research: “How to Securely Implement Cryptography in Deep Neural Networks.”
Abstract: The wide adoption of deep neural networks (DNNs) raises the question of how we can equip them with a desired cryptographic functionality (e.g., to decrypt an encrypted input, to verify that this input is authorized, or to hide a secure watermark in the output). The problem is that cryptographic primitives are typically designed to run on digital computers that use Boolean gates to map sequences of bits to sequences of bits, whereas DNNs are a special type of analog computer that uses linear mappings and ReLUs to map vectors of real numbers to vectors of real numbers. This discrepancy between the discrete and continuous computational models raises the question of what is the best way to implement standard cryptographic primitives as DNNs, and whether DNN implementations of secure cryptosystems remain secure in the new setting, in which an attacker can ask the DNN to process a message whose “bits” are arbitrary real numbers…
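The gap between the two computational models is easy to see concretely. As a toy illustration (not taken from the paper), XOR on two bits can be written exactly as one linear map plus two ReLUs, yet the same circuit happily accepts real-valued “bits” and produces answers no Boolean circuit would emit:

```python
def relu(x: float) -> float:
    """ReLU activation, the only nonlinearity available in these DNNs."""
    return max(0.0, x)

def xor_relu(a: float, b: float) -> float:
    """Exact XOR on Boolean inputs: for a, b in {0, 1},
    relu(a + b) - 2*relu(a + b - 1) equals a XOR b."""
    s = a + b
    return relu(s) - 2 * relu(s - 1)

# Correct on genuine bits:
#   xor_relu(0, 1) -> 1,  xor_relu(1, 1) -> 0
# But nothing forces the inputs to be bits:
#   xor_relu(0.5, 0.5) -> 1.0, an input no digital implementation would see
```

The circuit is term-for-term faithful to Boolean XOR on {0, 1}, but its domain is all of the reals, which is exactly the loophole the paper studies.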
AI Summary and Description: Yes
Summary: The research discusses the secure implementation of cryptography within deep neural networks (DNNs), highlighting the unique challenges posed by the differing computational models of traditional cryptographic algorithms and DNNs. It introduces foundational theory necessary for assessing the correctness and security of cryptographic primitives implemented as ReLU-based DNNs and presents a novel method to achieve secure cryptosystems without significant overhead.
Detailed Description:
This research piece details an innovative approach to integrating cryptographic functions with deep learning models, specifically tackling the challenges posed by the inherent differences between digital cryptographic systems and the analog nature of deep neural networks. The significance of this work lies in its implications for AI Security and Software Security, particularly as organizations increasingly deploy DNNs in sensitive applications requiring robust security measures.
Key Points:
– **Compatibility Challenge:** DNNs use linear mappings and ReLU activations while traditional cryptographic primitives are designed for digital systems. This creates fundamental compatibility issues that the research aims to address.
– **Attack Mechanism:** The researchers demonstrate that standard DNN implementations of block ciphers such as AES-128 are vulnerable when queried with nonstandard real-valued inputs, allowing an attacker to break them in linear time.
– **New Security Framework:** The paper proposes a theoretical framework for defining correctness and security in the DNN context, suggesting that conventional definitions need adaptation for these unique models.
– **Implementation Method:** The authors present a new methodology for securely implementing cryptographic functionalities as ReLU-based DNNs. This method is designed to maintain a low overhead, involving only a constant increase in the number of layers and a linear increase in neurons, making it practical for real-world applications.
– **Practical Implications:** With organizations adopting AI more extensively, ensuring that cryptographic methods can be securely integrated into DNNs is vital. This research could significantly impact the design of secure AI systems, enhancing their resistance to attacks while providing necessary cryptographic functionalities.
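To make the attack intuition concrete, here is a hypothetical toy, far simpler than the paper's actual analysis of AES-128: a single key-XOR gate built from ReLUs behaves as a different linear function on the open interval (0, 1) depending on the secret key bit, so one fractional query per bit reveals the whole key — time linear in the key length:

```python
def relu(x: float) -> float:
    return max(0.0, x)

def keyed_xor(x: float, k: int) -> float:
    """XOR of input 'bit' x with secret key bit k as a ReLU circuit.
    Correct for Boolean x, but merely piecewise-linear for real x."""
    s = x + k
    return relu(s) - 2 * relu(s - 1)

def recover_key_bit(oracle) -> int:
    """For x in (0, 1): the output is x when k == 0 and 1 - x when k == 1,
    so a single query at x = 0.25 distinguishes the key bit."""
    return 1 if oracle(0.25) > 0.5 else 0

# One non-Boolean query per key bit -> the whole key falls in linear time.
key = [1, 0, 1, 1]
recovered = [recover_key_bit(lambda x, k=k: keyed_xor(x, k)) for k in key]
```

A digital implementation never faces this query: x = 0.25 is not a bit, and the vulnerability exists only because the analog circuit extends the cipher's behavior off the Boolean hypercube.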
Overall, the paper underscores the importance of evolving cryptographic practices to accommodate advancements in machine learning, aiming to bolster the security posture of AI applications amidst growing cybersecurity challenges.
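One natural defensive idea, sketched here as an assumption about the general approach (the paper's actual construction and security definitions are more involved), is to precede each cryptographic layer with a ReLU-only “bit sanitizer” that snaps arbitrary real inputs back onto {0, 1}, so downstream gates only ever see legitimate bits. Such a layer costs a constant number of extra layers and a linear number of extra neurons, consistent with the low overhead the summary describes:

```python
def relu(x: float) -> float:
    return max(0.0, x)

ALPHA = 1000.0  # steepness of the threshold; illustrative constant

def sanitize_bit(x: float) -> float:
    """Hard-threshold x at 0.5 using one linear map and two ReLUs:
    computes clip(ALPHA*(x - 0.5) + 0.5, 0, 1). Inputs below 0.5 map
    to 0 and inputs above 0.5 map to 1, up to a vanishing transition
    zone of width 1/ALPHA around the threshold."""
    t = ALPHA * (x - 0.5) + 0.5
    return relu(t) - relu(t - 1)

# The adversarial fractional query is neutralized before it reaches
# any key-dependent gate: sanitize_bit(0.25) -> 0.0
```

Whether thresholding alone satisfies the paper's formal security definitions is beyond this sketch; it is meant only to show that restoring discreteness is itself expressible with linear maps and ReLUs.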