TheCorporateCounsel.net

April 18, 2024

SEC Enforcement Director Speaks on AI Washing

Last month, I blogged about the SEC’s recent focus on “AI Washing” — the practice of making potentially false and misleading statements about artificial intelligence — as the frenzy continues over the impact of the evolving technology. SEC Enforcement Director Gurbir Grewal has now weighed in on how companies can use “proactive compliance” to avoid AI washing problems, in a speech at the Program on Corporate Compliance and Enforcement Spring Conference 2024.

Building on the concepts from a speech last year that articulated his concept of “proactive compliance,” Director Grewal noted that the practice requires three things: education, engagement, and execution. He explained:

First, educate yourselves about emerging and heightened AI risk areas as they relate to your businesses. That means reading the AI-related enforcement actions I mentioned. It means reviewing any future enforcement actions that may follow in this space.

It also means reviewing speeches like Chair Gensler’s recent speech on AI, which highlighted multiple other ways in which a firm’s AI use may heighten risk or implicate the federal securities laws. He specifically discussed the conflicts of interests raised by AI for advisers, the problems presented by AI hallucinations, and the threat that AI could pose to the stability of our markets.

And it means staying abreast of how potential AI-related issues are actually impacting companies in the real world. Take for example, the recent reporting around an airline’s chatbot offering a customer incorrect information about its refund policy.

Second, take what you’ve learned from our orders and public pronouncements, and your own research, and engage with personnel inside your company’s different business units to learn how AI intersects with their activities, strategies, risks, financial incentives, and so on.

Ask: what public statements are we making about our incorporation of AI into our business operations? Are they accurate, or are they aspirational? Does AI present a material risk to our business operations in some way?

Now is the time to engage.

And third, execute. Does your use of AI require updating policies and procedures and internal controls? If so, are those policies and procedures bespoke to your company? And here, let me be clear: it’s not enough to go to ChatGPT or a similar tool and ask it to produce an AI policy for you.

And then, have you taken the steps necessary to implement those policies and procedures? As we have seen time and again, adoption is only part of the battle; effective execution is equally important, and that’s where many firms fall short.

With respect to potential personal liability in connection with AI washing, Director Grewal noted:

Here, I would look to our approach to cybersecurity disclosure failures generally: we look at what a person actually knew or should have known; what the person actually did or did not do; and how that measures up to the standards of our statutes, rules, and regulations. And as I’ve said before in the context of CCO and CISO liability, and I will say it again in the context of AI-related risk disclosures: folks who operate in good faith and take reasonable steps are unlikely to hear from us.

Director Grewal’s speech provides some practical guidance that companies can follow in drafting their AI disclosures, which will no doubt continue to evolve as the technology develops and companies pursue a variety of ways to deploy it.

– Dave Lynn