OpenAI is a nonprofit-corporate hybrid: A management expert explains how this model works
The board is supposed to stop OpenAI from veering from its mission of building technology that benefits humanity.
The board of OpenAI, creator of the popular ChatGPT and DALL-E artificial intelligence tools, fired Sam Altman, its chief executive officer, in late November 2023.
Chaos ensued as investors and employees rebelled. By the time the mayhem had subsided five days later, Altman had returned triumphantly to the OpenAI fold amid staff euphoria, and three of the board members who had sought his ouster had resigned.
The structure of the board – a nonprofit board of directors overseeing a for-profit subsidiary – seems to have played a role in the drama.
As a management scholar who researches organizational accountability, governance and performance, I’d like to explain how this hybrid approach is supposed to work.
Hybrid governance
Altman co-founded OpenAI in 2015 as a tax-exempt nonprofit with a mission “to build artificial general intelligence (AGI) that is safe and benefits all of humanity.” To raise more capital than it could amass through charitable donations, OpenAI later established a holding company that enables it to take money from investors for a for-profit subsidiary it created.
OpenAI’s leaders chose this “hybrid governance” structure to enable it to stay true to its social mission while harnessing the power of markets to grow its operations and revenues. Merging profit with purpose has enabled OpenAI to raise billions from investors seeking financial returns while balancing “commerciality with safety and sustainability, rather than focusing on pure profit-maximization,” according to an explanation on its website.
Major investors thus have a large stake in the success of its operations. That’s especially true for Microsoft, which owns 49% of OpenAI’s for-profit subsidiary after investing US$13 billion in the company. But those investors aren’t entitled to board seats as they would be in typical corporations.
And the profits OpenAI returns to its investors are capped at approximately 100 times what the initial investors put in. This structure calls for it to revert to a nonprofit once that point is reached. At least in principle, this design was intended to prevent the company from veering from its purpose of benefiting humanity safely and to avoid compromising its mission by recklessly pursuing profits.
Other hybrid governance models
There are more hybrid governance models than you might think.
For example, the Philadelphia Inquirer, a for-profit newspaper, is owned by the Lenfest Institute, a nonprofit. The structure allows the newspaper to attract investments without compromising on its purpose – journalism serving the needs of its local communities.
Patagonia, a designer and purveyor of outdoor clothing and gear, is another prominent example. Its founder, Yvon Chouinard, and his heirs have permanently transferred their ownership to a nonprofit trust. All of Patagonia’s profits now fund environmental causes.
Anthropic, one of OpenAI’s competitors, also has a hybrid governance structure, but it’s set up differently from OpenAI’s. It has two distinct governing bodies: a corporate board and what it calls a long-term benefit trust. Because Anthropic is a public benefit corporation, its corporate board may consider the interests of other stakeholders besides its owners – including the general public.
And BRAC, an international development organization founded in Bangladesh in 1972 that’s among the world’s largest NGOs, controls several for-profit social enterprises that benefit the poor. BRAC’s model resembles OpenAI’s in that a nonprofit owns for-profit businesses.
Origin of the board’s clash with Altman
The primary responsibility of the nonprofit board is to ensure that the mission of the organization it oversees is upheld. In hybrid governance models, the board has to ensure that market pressures to make money for investors and shareholders don’t override the organization’s mission – a risk known as mission drift.
Nonprofit boards have three primary duties: the duty of obedience, which obliges them to act in the interest of the organization’s mission; the duty of care, which requires them to exercise due diligence in making decisions; and the duty of loyalty, which commits them to avoiding or addressing conflicts of interest.
It appears that OpenAI’s board sought to exercise the duty of obedience when it decided to sack Altman. The official reason given was that he was “not consistently candid in his communications” with its board. Additional rationales raised anonymously by people identified as “Concerned Former OpenAI Employees” have not been verified.
In addition, board member Helen Toner, who left the board amid this upheaval, co-authored a research paper just a month before the failed effort to depose Altman. Toner and her co-authors praised Anthropic’s precautions and criticized OpenAI’s “frantic corner-cutting” around the release of its popular ChatGPT chatbot.
Mission v. money
This wasn’t the first attempt to oust Altman on the grounds that he was straying from the organization’s mission.
In 2021, the organization’s head of AI safety, Dario Amodei, unsuccessfully tried to persuade the board to oust Altman because of safety concerns, just after Microsoft invested $1 billion in the company. Amodei later left OpenAI, along with about a dozen other researchers, and founded Anthropic.
The seesaw between mission and money is perhaps best embodied by Ilya Sutskever, an OpenAI co-founder, its chief scientist and one of the three board members who were forced out or stepped down.
Sutskever first defended the decision to oust Altman on the grounds that it was necessary for protecting the mission of making AI beneficial to humanity. But he later changed his mind, tweeting: “I deeply regret my participation in the board’s actions.”
He eventually signed the employee letter calling for Altman’s reinstatement and remains the company’s chief scientist.
AI risks
An equally important question is whether the board exercised its duty of care.
I believe it’s reasonable for OpenAI’s board to question whether the company released ChatGPT with sufficient guardrails in November 2022. Since then, large language models have wreaked havoc in many industries.
I’ve seen this firsthand as a professor.
It has become nearly impossible in many cases to tell whether students are cheating on assignments by using AI. Admittedly, this risk pales in comparison to AI’s capacity for far worse, such as helping design pathogens with pandemic potential or creating disinformation and deepfakes that undermine social trust and endanger democracy.
On the flip side, AI has the potential to provide huge benefits to humanity, such as speeding the development of lifesaving vaccines.
But the potential risks are catastrophic. And once this powerful technology is released, there is no known “off switch.”
Conflicts of interest
The third duty, loyalty, depends on whether board members had any conflicts of interest.
Most obviously, did they stand to make money from OpenAI’s products, such that they might compromise its mission in the expectation of financial gain? Typically the members of a nonprofit board are unpaid, and those who aren’t working for the organization have no financial stake in it. CEOs report to their boards, which have the authority to hire and fire them.
Until OpenAI’s recent shake-up, however, three of its six board members were paid executives – the CEO, the chief scientist and the president of its profit-making arm.
I’m not surprised that while the three independent board members all voted to oust Altman, all of the paid executives ultimately backed him. Earning your paycheck from an entity you are supposed to oversee is considered a conflict of interest in the nonprofit world.
I also believe that even if OpenAI’s reconfigured board manages to fulfill the mission of serving the needs of society, rather than maximizing its profits, it would not be enough.
The tech industry is dominated by the likes of Microsoft, Meta and Alphabet – massive for-profit corporations, not mission-driven nonprofits. Given the stakes, I think regulation with teeth is required – leaving governance in the hands of AI’s creators will not solve the problem.
Alnoor Ebrahim has served on advisory boards to the impact investing industry, including the Global Impact Investing Network and Acumen. He has previously made a charitable contribution to BRAC, an NGO mentioned in the article.