12.6.23

Preparing for AI Risk Oversight

Governance experts break down what’s at stake.

When companies like OpenAI brought user-facing large language models (LLMs) and generative image tools to the mainstream, they distilled decades of research into something fundamentally more practical than anything that had come before. Suddenly, people had a model for how this tech could shape our everyday lives. The advent of the public LLM made it immediately clear that so-called generative artificial intelligence (genAI) could change everything — and be everywhere — sooner than anyone imagined.

But just as the possibilities for genAI’s applications are virtually endless, so too are the risks. For all the merited optimism around this technology, corporations are also starting to see how it might affect not only their businesses but their optics. The rise of user-facing AI gives companies infinitely more ways to hurt themselves and their shareholders. In that way, it’s not all that different from other technological developments of the past few decades. “This is social media and platforms on steroids,” explained David Berger, a corporate governance specialist and Palo Alto-based partner at the law firm Wilson Sonsini. “Ten or 12 years ago, everybody thought the technology could do no wrong… but you've got to recognize that there are now significant risks that they create.”

It’s not just tech companies that are starting to worry. AI was among the central concerns of this year’s monumental SAG-AFTRA strike, as guild members wondered whether studios might start replacing flesh-and-blood actors with AI-backed digital copies. For actors across film and TV, this isn’t just some passing concern: This sort of shift has already happened in the world of digital journalism, where thrifty brands have quietly replaced human writers with AI clones. SAG-AFTRA’s new contracts contain language about AI likenesses, but even that might not be enough to tamp down risk, since definitional problems will only multiply as the technology develops: Where exactly does the boundary fall between a “synthetic” quality in an AI-assisted performance and an authentically human one? And where, in this new landscape, does copyright infringement begin?

Few industries are free from these sorts of questions. Experts think it’s only a matter of time before this sort of labor-energizing activism reaches the boardroom. From Berger’s perspective, public companies are “a crisis or two away” from a real wave of shareholder activism around AI. Not all activists are going to be as forward-thinking as the SAG-AFTRA bargaining committee; the more AI-related scandals arise in a given industry — say, workers at a fast-food chain being made redundant en masse, or an AI-assisted customer service system repeatedly mishandling sales leads — the more shareholder proposals are bound to crop up. “It's a technology that could kind of get out of control in ways that we don't even know,” said Michael Levin, founder of The Activist Investor and a Troop adviser. “It's an unknown unknown, and boards, traditionally, are very ill equipped to deal with unknown unknowns.”

Still, boards have experience anticipating and adapting to new technological developments. Cybersecurity concerns, for example, are now a fact of our internet-enabled world, and corporate governance has had to grow to accommodate them over the past few decades. But according to Levin, the risks around AI are even more fundamental. “In terms of practical risks, it’s not the same as cybersecurity,” he said. “It's something that is existentially important, in the same way as if you asked a board 20 years ago, ‘How do you think of the internet?’ Now, it's become so part of the business environment, so ingrained, that you can’t not think about it. Fish don’t think about water.”

That’s at least partially because AI, in addition to creating its own new risks, can accelerate the everyday risks a business already incurs. It’s a substrate, a medium, as much as it’s its own product. What if, for example, AI-backed algorithms manage to create an online business that’s too addictive or too messy? When the goal is serving as many ads as possible, other considerations can get overlooked, said As You Sow CEO Andrew Behar. “Using AI to help with creating addictive behavior, particularly in youth — that's problematic, that's very risky for the company,” he said. Even if the business is social media, which isn’t as explicitly reliant on AI as, say, a computing powerhouse like Microsoft, AI amplifies risk. Its amplification capabilities are as present a danger for the fast-food industry as for the defense industry.

One might think of the issue in terms of a “just transition,” the framing that numerous boards and shareholder advocacy groups have used for years in plans and policies around climate change and its potential risks and impacts. For Behar, preparedness is crucial. “AI has strengthened the labor movement because there's a common enemy, which is ‘the robots coming to take your jobs,’” he said. In a sense, this is nothing new, since workers have been fretting about automation for decades. But the extent of the concern is new, as is the speed at which investors are mobilizing. It’s a testament to the seriousness of the stakes that SAG-AFTRA’s strike was the longest in guild history.

We’re still a few months from proxy season, but activism around AI is already ramping up. The American Federation of Labor and Congress of Industrial Organizations (AFL-CIO) recently hit Apple, Comcast, Disney, Netflix, Warner Bros. Discovery and other media companies with a comprehensive demand for guidelines around the ethical deployment of AI — something the group says will “fight the dehumanization of the American workforce.” And once 2024’s election season gets underway, the outside pressures will only mount. “There's gonna be pressure on boards,” Levin said. “Their job is going to be to ask a lot of nosy questions when management wants to start using some sort of AI.”
