Source URL: https://www.theregister.com/2024/12/18/ai_model_reveal_itself/
Source: The Register
Title: Boffins trick AI model into giving up its secrets
Feedly Summary: All it took to make a Google Edge TPU give up model hyperparameters was specific hardware, a novel attack technique … and several days
Computer scientists from North Carolina State University have devised a way to copy AI models running on Google Edge Tensor Processing Units (TPUs), as used in Google Pixel phones and third-party machine learning accelerators.…
AI Summary and Description: Yes
Summary: Researchers at North Carolina State University have uncovered a novel attack technique that enables adversaries to extract hyperparameters from AI models running on Google Edge TPUs. This side-channel attack could significantly undermine the security of proprietary machine learning models, posing a serious threat to organizations that invest heavily in AI development.
Detailed Description:
– **Overview of the Research**:
  – Researchers at North Carolina State University developed a side-channel attack, named “TPUXtract,” to extract hyperparameters from AI models running on Google Edge Tensor Processing Units (TPUs). The attack taps into the electromagnetic emissions produced by the TPU during inference.
– **Mechanism of the Attack**:
  – The attack measures electromagnetic emission intensity while the AI model is running inference and uses those measurements to deduce the model’s hyperparameters (a rough sketch of the matching idea follows this list).
  – Hyperparameters, which define a model’s architecture and training configuration, differ from model parameters (weights), which are learned during training.
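The article does not reproduce the researchers’ tooling; purely as an illustrative sketch (the function names, trace format, and candidate-config format below are assumptions, not TPUXtract’s actual code), the following shows how a per-layer electromagnetic trace might be scored against reference traces generated for candidate hyperparameter configurations:

```python
import numpy as np

def trace_similarity(measured: np.ndarray, reference: np.ndarray) -> float:
    """Pearson correlation between two EM traces, truncated to equal length."""
    n = min(len(measured), len(reference))
    return float(np.corrcoef(measured[:n], reference[:n])[0, 1])

def infer_layer_config(measured_trace, candidate_configs, profile_trace):
    """Return the candidate layer configuration whose profiled EM signature
    best matches the trace captured from the device under attack.

    measured_trace    -- EM samples recorded while the target layer executed
    candidate_configs -- iterable of dicts, e.g. {"type": "conv2d",
                         "filters": 64, "kernel": 3, "stride": 1}
    profile_trace     -- callable producing a reference trace for a candidate
                         config, e.g. by running that layer on an
                         attacker-controlled, identical Edge TPU
    """
    best_cfg, best_score = None, float("-inf")
    for cfg in candidate_configs:
        score = trace_similarity(measured_trace, profile_trace(cfg))
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

In practice the full inference trace would first have to be segmented into per-layer windows before any such matching; the correlation metric here is just one plausible scoring choice.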
– **Implications of Model Theft**:
  – An adversary can reproduce a high-fidelity substitute model at a significantly lower cost than the original training expense. This strikes at the economic foundation of proprietary AI development, particularly for organizations investing billions into model creation.
  – The ability to replicate AI models (with an accuracy of 99.91% as demonstrated in the researchers’ experiments) can lead to unauthorized duplication of innovative AI technologies (a hedged reconstruction sketch follows this list).
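The article does not include reconstruction code; as a hedged sketch only (the layer-config format, shapes, and class count are assumptions), this illustrates how recovered layer hyperparameters could be turned back into a trainable stand-in architecture. The attack recovers architecture details, not trained weights, so the surrogate still has to be trained, e.g. on public data or on labels queried from the victim model:

```python
import tensorflow as tf

def build_surrogate(recovered_layers, input_shape=(224, 224, 3), num_classes=1000):
    """Rebuild a stand-in architecture from recovered per-layer hyperparameters.
    Yields an untrained model; weights must still be learned separately."""
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=input_shape))
    for cfg in recovered_layers:
        if cfg["type"] == "conv2d":
            model.add(tf.keras.layers.Conv2D(
                cfg["filters"], cfg["kernel"], strides=cfg["stride"],
                padding="same", activation=cfg.get("activation", "relu")))
        elif cfg["type"] == "maxpool":
            model.add(tf.keras.layers.MaxPooling2D(pool_size=cfg["pool"]))
    # Hypothetical classifier head; in a real attack this would also be recovered.
    model.add(tf.keras.layers.GlobalAveragePooling2D())
    model.add(tf.keras.layers.Dense(num_classes, activation="softmax"))
    return model

# Example with two recovered layers (illustrative values only)
surrogate = build_surrogate([
    {"type": "conv2d", "filters": 32, "kernel": 3, "stride": 2},
    {"type": "maxpool", "pool": 2},
])
```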
– **Key Technical Details**:
  – The attack requires physical access to the inference device (such as a Coral Dev Board with a Google Edge TPU), specialized measurement equipment (Riscure hardware and a PicoScope oscilloscope), and knowledge of the model’s deployment environment (TensorFlow Lite for the Edge TPU).
  – Hyperparameters are extracted sequentially, one neural network layer at a time, which avoids the exhaustive brute-force search over whole-model configurations that limited earlier approaches (see the layer-by-layer sketch below).
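The layer-by-layer flow might look roughly like the sketch below, where each recovered layer constrains the candidate space for the next one instead of brute-forcing the whole network at once. The candidate ranges, helper names, and shape arithmetic are assumptions for illustration, not the researchers’ implementation:

```python
from itertools import product

def candidate_conv_configs(input_shape):
    """Enumerate plausible conv-layer configurations given the output shape
    of the previously recovered layer (hypothetical search ranges)."""
    for filters, kernel, stride in product((16, 32, 64, 128), (1, 3, 5), (1, 2)):
        yield {"type": "conv2d", "filters": filters, "kernel": kernel,
               "stride": stride, "input_shape": input_shape}

def conv_output_shape(cfg):
    """Output shape of a conv layer under 'same' padding (ceil division)."""
    h, w, _ = cfg["input_shape"]
    s = cfg["stride"]
    return (-(-h // s), -(-w // s), cfg["filters"])

def extract_architecture(layer_traces, initial_shape, match_layer):
    """Recover the model architecture one layer at a time.

    layer_traces  -- EM trace segment captured for each layer, in order
    initial_shape -- known input shape of the model, e.g. (224, 224, 3)
    match_layer   -- scoring routine taking (trace, candidates), e.g.
                     functools.partial(infer_layer_config, profile_trace=...)
                     from the earlier sketch
    """
    recovered, shape = [], initial_shape
    for trace in layer_traces:
        cfg, _ = match_layer(trace, candidate_conv_configs(shape))
        recovered.append(cfg)
        shape = conv_output_shape(cfg)  # next layer's candidates depend on this
    return recovered
```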
– **Potential Security Concerns**:
  – The findings reveal vulnerabilities in commercial AI accelerators, highlighting a gap in existing security measures that protect sensitive intellectual property in machine learning.
  – The research emphasizes the need for improved defenses against side-channel attacks, particularly for devices that do not use memory encryption.
– **Concluding Remarks**:
  – Given that Google is aware of these findings, the implications for industry standards and practices surrounding AI security could be significant. The research serves as a crucial reminder for security and compliance professionals to reassess their defenses against model theft and reinforce protective measures for AI infrastructure.