The Register: Just as your LLM once again goes off the rails, Cisco, Nvidia are at the door smiling

Source URL: https://www.theregister.com/2025/01/17/nvidia_cisco_ai_guardrails_security/
Source: The Register
Title: Just as your LLM once again goes off the rails, Cisco, Nvidia are at the door smiling

Feedly Summary: Some of you have apparently already botched chatbots or allowed ‘shadow AI’ to creep in
Cisco and Nvidia have both recognized that as useful as today’s AI may be, the technology can be equally unsafe and/or unreliable – and have delivered tools in an attempt to help address those weaknesses.…

AI Summary and Description: Yes

Summary: The text discusses new AI security measures introduced by Cisco and Nvidia, highlighting Nvidia’s Inference Microservices aimed at preventing harmful interactions with AI models, as well as Cisco’s AI defense tools focused on ensuring safe AI deployments within organizations. These advancements reflect companies’ response to growing concerns about AI misuse and security vulnerabilities, providing vital insights for professionals in AI and cloud security fields.

Detailed Description:
The text primarily addresses advancements in AI security technologies from two industry leaders, Cisco and Nvidia. The following key points emphasize the significance of these developments:

– **Nvidia’s Inference Microservices (NIMs)**:
– Introduction of three specialized microservices designed to address AI safety and reliability, part of Nvidia’s NeMo Guardrails collection.
– **Content Safety NIM**: Prevents AI from generating biased or harmful outputs, ensuring ethical responses by analyzing input-output pairs from user interactions.
– **Topic Control NIM**: Keeps conversations focused on approved topics, acting as a regulator against off-topic user prompts.
– **Jailbreak Detection NIM**: Detects attempts to hack into AI models, protecting against prompt injection attacks by analyzing user inputs.

– **Use Cases and Concerns**:
– The need for these NIMs arises from the challenges of preventing unintentional model behavior caused by user instructions, which can bypass guardrails.
– The interlinking of multiple guardrail models might be necessary to cover security gaps and compliance challenges, which raises concerns about resource overheads and latency.

– **Open Source Tool – Garak**:
– Introduced by Nvidia to evaluate AI vulnerabilities like data leaks and hallucinations, further validating the efficiency of the guardrails.

– **Cisco’s AI Defense Initiative**:
– Cisco plans to roll out tools to enhance AI security, including a model validation tool to assess LLM performance and identify security risks.
– Their initiative includes discovery tools for detecting unapproved AI applications deployed within an organization (“shadow” applications).
– Recognition of the potential risks of improperly configured chatbots leading to financial losses.

– **Future Developments**:
– Cisco aims to develop a more cohesive AI toolset with a multi-year roadmap for its security tools, indicating a significant commitment to addressing ongoing information security challenges.

– **Contextual Highlight of Broader AI Developments**:
– A brief mention of Google’s Titans model showcasing advancements in LLM architectures and an FTC investigation into Snap’s MyAI chatbot, evidencing the ongoing scrutiny and evolution in the AI safety landscape.

Overall, the developments presented are vital for security and compliance professionals concerned with the governance of AI. Both Cisco and Nvidia are taking proactive steps to mitigate risks associated with AI deployment and facilitate safer interactions with AI technology. This ensures a holistic approach to integrating AI into business processes while maintaining security integrity.