Source URL: https://www.theregister.com/2024/12/13/nist_framework_for_ai_presents/
Source: The Register
Title: Doing business in US? Don’t wait for state ruling on AI to act, warns former Senate chief of staff
Feedly Summary: Workday policy expert suggests NIST framework will save you trouble later
The US House and Senate are unlikely to pass federal legislation on the use of AI in business, so users should focus their attention on a new NIST framework in lieu of state-level law, according to Workday’s veep for corporate affairs.…
AI Summary and Description: Yes
Summary: The text discusses the current landscape of AI legislation in the United States, focusing on the unlikely passage of federal laws and the significance of state-level initiatives and frameworks such as the NIST AI Risk Management Framework. This is particularly relevant for professionals in compliance and governance as they navigate evolving regulations around AI usage.
Detailed Description:
The text highlights the political landscape regarding AI legislation in the U.S., underscoring the difficulty of enacting federal laws governing AI use in business while emphasizing the importance of state-level action and the NIST framework. Key points include:
– **Federal Legislation Unlikely**: There is little expectation that Congress will pass substantial federal legislation on AI, even with Republican control of both the House and Senate, as narrow margins in both chambers complicate the legislative process.
– **State-Level Activity**: Notable actions are taking place at the state level, where various states are considering or proposing their own AI legislation. For example:
– California’s Governor vetoed SB 1047 over concerns about its approach to AI safety, though further legislative efforts are anticipated.
– Other states, including New York, Connecticut, and Colorado, are also exploring AI regulations.
– **NIST AI Risk Management Framework**: Given the legislative stagnation, the Workday policy expert suggests businesses focus on the NIST framework to manage AI-related risks. Key aspects include:
– The framework is voluntary but offers a structured approach to managing the risks of AI use.
– It is seen as a way for the U.S. to align with European standards, particularly as the EU AI Act takes effect.
– **Engagement & Future Outlook**: The text encourages active stakeholder engagement in developing and refining these frameworks to ensure safety and compliance with evolving AI regulations. Meyer’s insights suggest that while executive actions might change, established bodies like the AI Safety Institute within NIST may persist.
– **Political Implications**: The incoming administration’s potential impact on AI policy means professionals in the field need to stay informed and adapt to shifting regulations and compliance standards.
This analysis speaks to security and compliance professionals who aim to understand the implications of emerging AI legislation and frameworks, underscoring the need for proactive engagement with evolving standards.