September 25, 2024
Board Oversight: Managing AI Risks
Providing appropriate oversight of the key risks that companies face is one of the board’s most important roles, and one that is made increasingly difficult by the challenges presented by artificial intelligence and other emerging technologies. This Skadden memo offers some guidance to help boards ensure that an appropriate oversight program is in place for AI-related risks.
The memo surveys the current regulatory landscape for AI and the risk management tools available to corporate boards, and offers up the following guiding principles for AI corporate governance:
– Understand the company’s AI risk profile. Boards should have a solid understanding of how AI is developed and deployed in their companies. Taking stock of a company’s risk profile can help boards identify the unique safety risks that AI tools may pose.
– Be informed about the company’s risk assessment approach. Boards should ask management whether an AI tool has been tested for safety, accuracy and fairness before deployment, and what role human oversight and human decision-making play in its use. Where the level of risk is high, boards should ask whether an AI system is the best approach, notwithstanding the benefits it may offer.
– Ensure the company has an AI governance framework. The board should ensure that the company has such a framework to manage AI risk, then review it periodically to confirm it is being properly implemented and monitored, and determine the role the board should play in that process.
– Conduct regular reviews. Given the rapid pace of technological and regulatory developments in the AI space, and the ongoing discovery of new risks from deploying AI, the board should consider implementing regular reviews of the company’s approach to AI, including whether new risks have been identified and how they are being addressed.
– Stay informed about sector-specific risks and regulations. Given how quickly the technology and its uses are evolving, boards should stay informed about sector-specific risks and regulations in their industry.
The memo points out that the specific AI-related risks that companies face, and their legal and regulatory obligations, differ across industries. Furthermore, the regulatory framework for AI is evolving rapidly and does not always provide consistent approaches or guidance. Further complicating matters, the nature and extent of regulatory obligations often depend on whether the company is the developer of an AI system or simply deploys it, and that line may be difficult to draw.
– John Jenkins