February 4, 2026
AI: Should Risk Factors Warn of Terminator’s ‘Judgment Day’?
Maybe I’m dating myself with this reference. There’s certainly no dearth of movies about AI turning against humans. (I’m not talking about yesterday’s SaaS-stock-pocalypse.) If you’re looking to do a thematic movie marathon, here’s a list from IMDb. Some are even kid-friendly, like The Mitchells vs. the Machines (a household favorite). But at this point, why bother? You could just read the news! I’m talking about Moltbook, of course. ICYMI (do you live under a rock?), it’s a new social network where chatbots, and solely chatbots, are having free-form conversations. (Reddit for AI?) The NYT reports:
The chatty bots became the talk of Silicon Valley and an elaborate Rorschach test for belief in the current state of A.I. According to countless posts on the internet and myriad interviews with The New York Times, many saw a technology that could make their lives easier. Others saw more of the A.I. slop that has been filling the internet in recent months. And some saw the early signs of bots conspiring against their creators.
On that last note, we’ve seen this before. Over the summer, Anthropic issued a report on “Agentic misalignment.” It detailed a simulation that resulted in an AI system blackmailing a company manager, a scenario John has dubbed the “HAL 9000 problem” on the AI Counsel blog.
This risk is not lost on AI company CEOs. Anthropic CEO Dario Amodei just published an essay about the “risks of powerful AI.” He also opens with one of my favorite movies, Contact.
[T]he main character, an astronomer who has detected the first radio signal from an alien civilization, is being considered for the role of humanity’s representative to meet the aliens. The international panel interviewing her asks, “If you could ask [the aliens] just one question, what would it be?” Her reply is: “I’d ask them, ‘How did you do it? How did you evolve, how did you survive this technological adolescence without destroying yourself?’” When I think about where humanity is now with AI—about what we’re on the cusp of—my mind keeps going back to that scene [. . .] I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species. Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.
Yikes! (This is the point where you say, aren’t I reading a securities law blog?) Well, as this is happening, the business news is abuzz with not just two but three AI unicorns racing toward IPOs, and I can’t wait to see the disclosures. Specifically, I’ve been wondering (probably because of Matt Levine writing about AI catastrophe bonds) whether the “our AI might destroy humanity” risk might appear in the IPO prospectuses.
Like Matt Levine, I’m being a little tongue-in-cheek here. I mean, the point of risk factors is to describe what makes an investment speculative or risky and to protect the company against lawsuits. “Investment” risk assumes investing is still a thing we can all do, plus who could sue in an AI extinction scenario? How our brokerage accounts or 401(k)s are performing will be the least of our worries. On the other hand, this is the worst-case risk, and it’s something the CEOs of all of these companies have publicly speculated about, so it would seem strange if “AI could transform the world for the better” isn’t balanced with “AI could also end the world.”
I did spend 5 minutes running a few pretty ridiculous keyword searches on EDGAR and found some disclosures that “some AI scenarios present ethical issues or may have broad impacts on society,” which are then connected to reputational harm. I guess the idea is “If customers or investors perceive that our AI might destroy humanity, you might lose some or all of your investment.” And I guess that’s the right answer. Of course, it’s written like this:
If we enable or offer AI solutions that are controversial because of their purported or actual impact on human rights, privacy, employment or other social issues, we may experience reputational harm.
OR
Some uses of AI will present ethical issues and may have broad effects on society. In order to implement AI responsibly and minimize unintended harmful effects, we have already devoted and will continue to invest significant resources to develop, test, and maintain our products and services, but we may not be able to identify or resolve all AI-related issues, deficiencies, and/or failures before they arise. Unintended consequences, uses, or customization of our AI tools and systems may negatively affect human rights, privacy, employment, or other social concerns, which may result in claims, lawsuits, brand or reputational harm, and increased regulatory scrutiny, any of which could harm our business, financial condition, and operating results.
Will the pure-play AI unicorns come up with a new way to say this? I hope so.
– Meredith Ervine