TheCorporateCounsel.net

June 21, 2023

Risk Factors: What are Companies Saying About Artificial Intelligence?

Artificial Intelligence is a topic that has exploded into public consciousness this year, so it isn’t surprising that AI risks are also beginning to feature prominently in some corporate risk factor disclosures. This Bryan Cave blog notes that companies are addressing AI risks either through standalone risk factors or as part of broader risk factor disclosures. The blog identifies the topical areas of the broader risk factors in which AI disclosures appear and provides several examples of standalone risk factors, including this one from DoorDash’s most recent Form 10-Q:

We may use artificial intelligence in our business, and challenges with properly managing its use could result in reputational harm, competitive harm, and legal liability, and adversely affect our results of operations.

We may incorporate artificial intelligence (“AI”) solutions into our platform, offerings, services and features, and these applications may become important in our operations over time. Our competitors or other third parties may incorporate AI into their products more quickly or more successfully than us, which could impair our ability to compete effectively and adversely affect our results of operations. Additionally, if the content, analyses, or recommendations that AI applications assist in producing are or are alleged to be deficient, inaccurate, or biased, our business, financial condition, and results of operations may be adversely affected.

The use of AI applications has resulted in, and may in the future result in, cybersecurity incidents that implicate the personal data of end users of such applications. Any such cybersecurity incidents related to our use of AI applications could adversely affect our reputation and results of operations. AI also presents emerging ethical issues and if our use of AI becomes controversial, we may experience brand or reputational harm, competitive harm, or legal liability. The rapid evolution of AI, including potential government regulation of AI, will require significant resources to develop, test and maintain our platform, offerings, services, and features to help us implement AI ethically in order to minimize unintended, harmful impact.

The blog says that only about 10% of companies in the major indices (S&P 500 and Russell 3000) are currently including a discussion of AI in their risk factor disclosures, but it also points out that the companies addressing AI in their risk factors represent a broad range of industries beyond tech & software.

John Jenkins