TheCorporateCounsel.net

August 11, 2023

AI: Do You Need to Refresh Your Policies?

John predicted last week that the robot overlords are coming for our jobs. In the short term, though, we will probably have more work on our plates – training them in and setting up the proper oversight systems.

This Mayer Brown blog walks through what boards need to think about as they oversee AI issues in this brave new world. Basically, it’s “last year’s model with a new coat of paint” – boards need to apply the same fiduciary duties to AI decisions & oversight that we all already know and love. But that may mean encouraging management to develop new AI-specific policies. For example, the blog says:

Many companies are developing policies and procedures specifically applicable to the use of generative AI by officers and employees. They are updating their corporate policies to address concerns about potential risks and harms in the context of generative AI, such as bias/discrimination, confidentiality, consumer protection, cybersecurity, data security, privacy, quality control, and trade secrets.

In addition, in light of recent Caremark cases, the board needs to pay closer attention if AI is a “mission critical” risk. If it is, the blog suggests:

For companies where AI is associated with mission-critical regulatory compliance/safety risk, boards might want to consider:

(a) showing board-level responsibility for managing AI risk (whether at the level of the full board or existing or new committees), including AI matters being a regular board agenda item and shown as having been considered in board minutes;

(b) the need for select board member AI expertise or training (using external consultants or advisors as appropriate);

(c) a designated senior management person with primary AI oversight and risk responsibility;

(d) relevant directors’ familiarity with company-critical AI risks and availability/allocation of resources to address AI risk;

(e) regular updates/reports to the board by management of significant AI incidents or investigations; and

(f) proper systems to manage and monitor compliance/risk management, including formal and functioning policies and procedures (covering key areas like incident response, whistleblower process, and AI-vendor risk) and training.

Note that in July, seven Big Tech companies agreed at the White House to adopt voluntary “guardrails” on AI, which could be a sign of things to come and eventually serve as a framework for others.

The blog also gives some practical suggestions for protecting sensitive information from widely accessible AI models. Those procedures and limitations could be formalized in policies or kept informal. Lastly, when it comes to “The Beginning of the End” – using AI to prepare disclosures – don’t overlook the continued importance of internal controls:

For public companies using generative AI in financial reporting and securities filings, boards may need to confirm with management that the company appropriately uses generative AI’s capabilities in connection with its internal control over financial reporting as well as disclosure controls and procedures.

Liz Dunshee