There’s been a lot of discussion in recent months about generative AI and its implications for, well, everything. In keeping with that, we’ve added an avalanche of resources to our “Artificial Intelligence” Practice Area since the beginning of the year. Two new additions in particular will give you a taste of the kind of really helpful materials that we’re posting in that practice area. The first is this 49-page Foley memo, which takes a deep dive into the legal & operational issues associated with generative AI. This excerpt highlights some of the legal risks:
Another of AI’s most significant legal risks is the potential for bias. AI systems are as good as the data they are trained on. If that data is biased, the AI system will also be biased. This can lead to outcomes that violate anti-discrimination laws. For example, an AI hiring system trained on historical data that reflects biased hiring practices may perpetuate that bias and result in discrimination against certain groups.
Another legal risk of AI is the potential for violating privacy laws. AI systems often require access to large amounts of data. If that data includes personal information, businesses must comply with relevant privacy laws, such as the General Data Protection Regulation (GDPR) in the European Union or the California Consumer Privacy Act (CCPA) in the United States.
The second new resource is this Mayer Brown memo, which addresses the emerging legal frameworks for governing AI in jurisdictions throughout the world & their implications for corporate boards. Here’s the intro:
Currently, there are artificial intelligence (“AI”)-related legal frameworks pending or proposed in 37 countries across six continents. Even within each particular country, multiple governmental agencies are claiming AI as within their jurisdictional reach. For example, in the United States, the Consumer Financial Protection Bureau, Department of Justice, Equal Employment Opportunity Commission, Food and Drug Administration, Federal Trade Commission and the Securities and Exchange Commission each has issued guidance or otherwise indicated through enforcement activity that they view AI as within their respective purview of current regulatory and enforcement authority.
The memo goes on to discuss the policy rationale behind these legal frameworks & the importance of acquainting boards with their requirements. It also offers guidance on how to ensure that directors are prepared to provide appropriate oversight in this area.
– John Jenkins