All too often, capitalism is identified with the for-profit sector. However, one organizational form whose importance is often overlooked is the nonprofit. Roughly 4% of the American economy, including most universities and hospital systems, is nonprofit.
One prominent nonprofit currently at the center of a raging debate is OpenAI, the $300 billion American artificial intelligence research organization best known for developing ChatGPT. Founded in 2015 as a donation-based nonprofit with a mission to build AI for humanity, it created a complex “hybrid capped profit” governance structure in 2019. Then, after a dramatic firing and re-hiring of CEO Sam Altman in 2023 (covered on an earlier episode of Capitalisn’t: “Who Controls AI?”), a new board of directors announced that achieving OpenAI’s mission would require far more capital than philanthropic donations could provide and initiated a process to transition to a for-profit public benefit corporation. This process has been fraught with corporate drama, including one early OpenAI investor, Elon Musk, filing a lawsuit to stop the process and launching a $97.4 billion unsolicited bid for OpenAI’s nonprofit arm.
Beyond the staggering valuation numbers at stake here, not to mention OpenAI’s open pursuit of profits over the public good, are complicated legal and philosophical questions. Namely, what happens when corporate leaders violate the founding purpose of a firm? To discuss, Luigi and Bethany are joined by Rose Chan Loui, the founding executive director of the Lowell Milken Center on Philanthropy and Nonprofits at UCLA Law and co-author of the paper “Board Control of a Charity’s Subsidiaries: The Saga of OpenAI.” Is OpenAI a “textbook case of altruism vs. greed,” as the judge overseeing the case declared? Is AI for everyone, or only for investors? Together, they discuss how money can distort purpose and philanthropy, precedents for this case, where it might go next, and how it may shape the future of capitalism itself.
Show Notes:
Read extensive coverage of the Musk-OpenAI lawsuit on ProMarket, including Luigi’s article from March 2024: “Why Musk Is Right About OpenAI.”
Guest Disclosure (provided to The Conversation for an op-ed on the case): The authors do not work for, consult, own shares in, or receive funding from any company or organization that would benefit from this article. They have disclosed no relevant affiliations beyond their academic appointment.
Rose Chan Loui: There’s so much money to be made. So, then, do you say it’s better for philanthropy to take their share of that money and go, or is it really important to follow the purpose? The purpose is very clear.
Bethany: I’m Bethany McLean.
Phil Donahue: Did you ever have a moment of doubt about capitalism and whether greed’s a good idea?
Luigi: I’m Luigi Zingales.
Bernie Sanders: We have socialism for the very rich, rugged individualism for the poor.
Bethany: This is Capitalisn’t, a podcast about what is working in capitalism.
Milton Friedman: First of all, tell me, is there some society you know that doesn’t run on greed?
Luigi: And, most importantly, what isn’t.
Warren Buffett: We ought to do better by the people that get left behind. I don’t think we should kill the capitalist system in the process.
Luigi: All too often, capitalism is identified with the for-profit sector. One important organizational form, which is often ignored, is a not-for-profit. Roughly 4 percent of the economy is run by not-for-profits. It seems small, but most universities and a lot of hospitals are not-for-profits.
Bethany: And, of course, an organization that started as a not-for-profit is at the very heart of one of today’s biggest debates, which is the future of OpenAI.
OpenAI is the company, of course, that created the famous ChatGPT. It was founded in 2015 as a nonprofit research lab with a mission to develop artificial general intelligence, or AGI, for the benefit of humanity. In its own words, it would be “unconstrained by a need to generate financial return.”
Luigi: OpenAI technically was, initially, a tax-exempt 501(c)(3) nonprofit, a Delaware non-stock corporation. It relied on large donations to fund research, and as a not-for-profit, it had no owners or shareholders. It was governed by a board of directors tasked with ensuring that any assets and work served the public benefit.
Bethany: By 2019, however, OpenAI realized that its structure wasn’t conducive to its end goals. It realized that achieving its mission would require far more capital than philanthropic donations could provide.
To reconcile its altruistic mission with the need for more funding and incentives, OpenAI created a subsidiary with a hybrid capped-profit model. The cap is 100 times the original investment for the earliest investors. In practical terms, this means an investor who put in $10 million can receive at most—and I’m laughing as I say at most—$1 billion back in profit distributions, a 100-fold return. Any profits beyond that point belong to the original OpenAI nonprofit entity.
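As a back-of-the-envelope sketch of that arithmetic (a hypothetical illustration only, not OpenAI’s actual distribution waterfall, whose terms are far more involved):

```python
def capped_payout(investment: float, distribution: float,
                  cap_multiple: int = 100) -> tuple[float, float]:
    """Split a notional profit distribution under a capped-profit model.

    Returns (amount kept by the investor, excess flowing to the nonprofit).
    `distribution` is what the investor would receive absent any cap.
    """
    cap = investment * cap_multiple               # e.g., $10M * 100 = $1B
    to_investor = min(distribution, cap)          # investor keeps up to the cap
    to_nonprofit = max(distribution - cap, 0.0)   # everything above the cap
    return to_investor, to_nonprofit

# A $10M early investment entitled to $1.5B of notional distributions:
print(capped_payout(10e6, 1.5e9))  # (1000000000.0, 500000000.0)
```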
Luigi, we’re already getting really complicated here.
Luigi: I think it’s very simple. It’s almost infinite return. A hundred times is pretty damn large.
What is interesting is that the employees working on product development and research were transitioned from the not-for-profit to this for-profit subsidiary. This allowed OpenAI to issue equity to investors and also to employees as incentives.
However, the essential point is the not-for-profit maintained ultimate voting control and governance over the for-profit entity, ensuring that the mission stays paramount.
In particular, they went into excruciating detail with the operating documents, legally binding the for-profit to the not-for-profit mission of developing safe, beneficial AGI. Every investor and employee who holds equity must agree to these mission-aligned terms encoded in the partnership agreement.
In other words, pursuing the charitable purpose is a contractual obligation of the for-profit operations, even though it can generate and distribute profits.
Bethany: At least, that’s how it was supposed to work. But as most of our listeners know, in November 2023, the board of the not-for-profit fired Sam Altman, CEO of the for-profit unit, because, in its words, he was “not consistently candid in his communications,” and the board no longer had confidence in his ability to lead.
The way that played out showed that the board was ultimately toothless. When most of the employees threatened to leave over Altman’s firing, he was rehired, but in exchange, he asked to replace all the personnel on the board of the not-for-profit.
The new board announced that it wanted to change OpenAI from a not-for-profit to a public-benefit corporation, which is a for-profit entity, but one that is allowed to take into account considerations other than maximizing shareholder value in pursuing its business goals.
It’s a funny thing about this issue, isn’t it? It’s so mired, in some ways, in these issues that seem arcane and potentially not that interesting around for-profits and not-for-profits in the law, and yet what’s at stake is enormous. It’s everything that we’re all talking about in terms of the future of AI.
To help us navigate this challenging legal and philosophical world of not-for-profits and the law, we’re bringing to the show Rose Chan Loui, the founding executive director of the Lowell Milken Center on Philanthropy and Nonprofits. She’s also the co-author of the paper “Board Control of a Charity’s Subsidiaries: The Saga of OpenAI.”
Luigi: First of all, OpenAI’s mission is to ensure that artificial intelligence benefits all humanity. My understanding—and correct me if I’m wrong—is that one of the proposals is to transform this charitable organization into a public-benefit corporation with the same purpose.
Somebody could say, “What’s the big deal?” At the end of the day, they maintain their purpose.
Rose Chan Loui: Right. They are proposing to transfer or convert the for-profit operations to a Delaware public-benefit corporation. A Delaware public-benefit corporation is still a for-profit corporation, essentially giving the corporation permission to consider public goods, some mission that is aside from maximizing shareholder profits.
Bethany: It seems like that sits in a really uncomfortable place, given that it’s an aspirational goal. It’s not the clear commitment to a fiduciary duty of a for-profit company, but it’s also not the clear commitment to purpose of a charitable organization. If it’s just aspirational, what does it actually mean?
Rose Chan Loui: It’s good PR.
Bethany: Oh.
Luigi: That’s a very good and concise answer. We like that.
Bethany: That is an extraordinarily clear answer.
Luigi: Generally, you don’t get those from lawyers, I have to say.
Rose Chan Loui: Their idea for the nonprofit is that, in the proposed restructure, it will no longer control the for-profit operations. Rather, it will become what I call a typical corporate foundation.
Every big company has a foundation, and they give out grants—still controlled, really, by the corporation in terms of what kinds of grants they want to give. A lot of times, there’s a line that’s pretty blurred. They’re not necessarily focused on it because it’s charitable and something good to do, but because it has some benefit to them marketing-wise.
Here, they’re just saying the nonprofit will support such charitable initiatives as education, health, and science. There’s no mention made anymore of its original purpose of ensuring that AI development benefits humanity. I just don’t think you can change that.
Now, when they issued this latest blog post saying, “We’re going to have this committee, and this committee will help us decide what the best ways are to spend our money,” I think that is definitely a public-relations ploy. It’s window dressing to say, “We’re going to be just the best foundation ever.”
Absolutely, they could do a lot of good, but our position is, there’s still a lot of reason to have a nonprofit that is focused on making sure that AI is developed safely and for the benefit of humanity.
Luigi: But in a charitable organization, you have a duty to pursue your purpose. What happens if you don’t?
Rose Chan Loui: Well, there are two entities that are most responsible. In this case, it’s the Delaware attorney general and the California attorney general. The Delaware attorney general has authority because they formed all of their entities, including the nonprofit, in the state of Delaware. They are responsible under what’s called the internal affairs doctrine for purpose, as well as for governance issues.
The other state that has authority is California, and California asserts authority because the charitable assets, in this case, are almost all in California.
There’s some overlap, and that’s what makes this a little bit interesting because the Delaware attorney general—somewhat to our surprise, because they’re generally a very hands-off state—filed an amicus brief in the Elon Musk litigation against OpenAI.
They advised the court that they are investigating this proposed restructure and that they’re very interested in making sure that fiduciary duties are being complied with, that charitable purpose is being followed, and that charitable assets are being used in accordance with purpose, as well as looking at process for how, for example, fair-market value might be determined, et cetera. They’re definitely asserting their authority.
I’ve not seen anything public with respect to next steps. What are they actually thinking, now that they’ve had a little bit of time to look at it?
On the California side, they have declined to jump into the litigation, on the grounds that it’s in federal court. No one there, they say, has the power to pull them into that suit.
The plaintiffs are trying to explain: “We’re not treating you as defendants. We’re naming you as a party because this is an issue you have reason to be concerned about.” We’re waiting for the judge to decide whether or not she will force California to come into that case. On the other hand, California has acted: it has requested documents and information from OpenAI.
Bethany: I have a three-part question.
Rose Chan Loui: OK.
Bethany: Why do you think Delaware is getting involved, and why do you think California is reluctant to get involved? Is there any history with either state as to enforcement on this issue? Is there any precedent that has been set, or is this a completely unprecedented situation?
Rose Chan Loui: With respect to Delaware, they’re generally a very laissez-faire state, but I don’t actually know why they’ve jumped in. Maybe there’s less to lose there politically because while they have this reputation for being laissez-faire, I don’t think they’ll really hurt that business by being involved in this pretty unique situation.
California normally is aggressive about protecting charitable assets and enforcing charitable purpose. But when you think about how much OpenAI is valued right now, the latest figure being $300 billion, having them in the state of California, with the potential threat that they could go elsewhere, matters.
Luigi: Can I advance another hypothesis? As Bethany knows, I’m always very cynical. I noticed that in the proposal OpenAI made to restructure itself, there was a lot of discussion about charitable initiatives—guess where? In California, as if humanity starts in California and ends in California.
I think they’re trying to cajole the AG. The attorney general is an elected official in California and is due for re-election, I think, at the end of next year. That’s a pretty valuable position.
Bethany: You could also advance that conspiracy theory one step further, then, and say that supporting Elon Musk in Delaware is not the world’s worst idea, given some of the other, broader politics at work here.
We should probably stay away from that, though. Luigi and I both like our conspiracy theories.
Rose Chan Loui: Right. Well, their concern is that they be first to achieve AGI. I don’t quite understand this. You might understand this better, but the AI people—in fact, including Sam Altman’s statements—say that when we get to AGI, it’s going to change the world fundamentally. He says, “Money probably won’t matter.”
It’s hard for me to actually grasp what that looks like. But if they’re right and this could happen in the next few years, I don’t think we want it to be the Wild West, and I don’t think we have the regulatory framework in place to make sure that there aren’t possibly disastrous consequences.
Bethany: Is there any argument, both legally and morally/philosophically, that OpenAI lost the ability to control this, simply because of the sheer number of other people working on this, and that, therefore, this whole idea that they had some degree of control over this is just gone? So, the world did dramatically change.
Rose Chan Loui: Yes, yes, yes. That comes down to trust in that nonprofit board and whether they really can withstand all the prevailing winds to make a profit and to win the race to AGI.
Luigi: No, wait a second, we’re talking about two different things. One is, imagine they see that they are losing the race to develop AGI first. My understanding is the charter is very explicit: if they realize that somebody else could get there first, they should actually join forces with this other group and make sure this is done safely.
Rose Chan Loui: Yes.
Luigi: What I don’t understand is why the investors need to gain control over the process in order to do what they want to do. First of all, let me make sure everybody understands that the people who invested can make up to 100 times what they invested. It’s not like it is a very tight limitation. They’re saying that even with these conditions, they cannot raise more money because the new investors don’t gain control.
My only explanation is that a new investor must be planning to do something that violates the interests of humankind, because the only situation in which control by the foundation gets in the way of making money is if that control prevents the investors from directing the company in a way that is very profitable but not very good for humankind. Those are exactly the situations where you don’t want them to get control. Am I missing something?
Rose Chan Loui: No, no, you’re not. Like you said, they’ve been raising tons of money with the 100-times cap.
What I’m hearing from former employees is that they have skipped safety protocols in this race. To your point, they want unfettered ability to develop AGI as fast as they can. But also, since they care so much about this 100-times cap, they must think that they can make a lot more than a trillion dollars.
When we first heard about the cap, we thought the nonprofit was never going to get any money. It’s like, they only have about $30 million at the nonprofit parent level, with this company that’s worth supposedly $300 billion.
Bethany: Here’s a slightly less cynical way of looking at this, and I can’t believe I’m the one who’s coming out with a less-cynical view, and maybe a defense, of OpenAI.
Even that 100-times cap or that $300 billion are both theoretical. They could vanish tomorrow if OpenAI loses its edge in development. That’s what I meant by my question. When they started this, they believed that they were so far ahead that they had the luxury of being able to bestow their goodness on the rest of humanity and control what was coming.
They lost that luxury. Then, that’s not only a question of the money they stand to make because that $300 billion, at the speed at which things are moving, could become zero tomorrow. But also, they somehow lost the ability to set the moral terms of this debate. By converting to a for-profit, that at least enables them to continue to have some influence over which way it goes, rather than just being completely sidelined. Is that far too generous an interpretation?
Rose Chan Loui: No, I think there are a few points. One is going back historically to 2015 and the reason for becoming a nonprofit. At that time, the reason that they wanted to be a nonprofit was actually for recruiting purposes. Apparently, there are not that many AI experts in the world, and they wanted the best and the brightest.
This strategy worked. The story is told that when Greg Brockman went out and made offers to 10 AI researchers at salaries significantly lower than they could make at Google and Microsoft, nine of them accepted because they had a concern about how AI was going to be developed.
They liked the idea that they would be working for a company that would consider the good of all rather than purely profit. So, it was a very successful recruiting tool.
Then, I think by 2019, it became clear that they could not raise as much philanthropic funding as they needed to do this development work. It’s apparently just very expensive. That’s when they decided they needed a structure that would allow them to bring in outside private investment.
Luigi: But, Rose, if a charter matters at all, the charter is very explicit. The charter says: “Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefits.”
It’s pretty clear that they knew that they had to raise a lot of resources, but they would subordinate that to the primary purpose, which is to develop AI for the benefit of humankind.
Rose Chan Loui: Well, I think you really hit the nail on the head in terms of what is primary here. Because if that’s the goal . . . I think Helen Toner is quoted as saying at one point in time, it could be that with this purpose, the right thing to do is shut OpenAI down. But you’ll kill the golden goose—or the goose that lays the golden eggs, I guess, is more accurate—because there’s so much money to be made.
So, then, do you say it’s better for philanthropy to take their share of that money and go, or is it really important to follow the purpose? The purpose is very clear.
Bethany: But I keep coming back to the point, and maybe I’m just completely off in left field. Maybe a clearer way to get at what I’m trying to say is this: if Helen Toner’s quote were to prove accurate and OpenAI had to shut down and say, “We can’t fulfill our purpose,” it wouldn’t matter. It’s done.
Others are already moving ahead, whether they’ve built off OpenAI’s infrastructure and leapfrogged it or whether they’re going in a different direction. It wouldn’t stop AGI; it wouldn’t stop what’s happening if OpenAI disappeared. It would be a blip.
So, then, the question is, if you can’t fulfill that purpose anymore, what’s the next best way to do it?
Rose Chan Loui: Right. Then we get to my other remedies. Under the law, what you’re supposed to do is approximate the purpose as closely as possible. We don’t keep the status quo because, already, people are suspicious of whether the nonprofit board can actually exercise the power that it has within OpenAI right now. There just seems to be a lot of management control over them.
What would be the best alternative? Would it be better to just spin off the nonprofit so that it’s not within OpenAI at all, give it the full fair-market value of its interest in the charitable assets, and then let it do its work and continue to fulfill the purpose?
If they were truly independent, they could say: “You know what? Anthropic’s doing a great job. Even though they’re a Delaware public-benefit corporation and not legally committed, they’ve got this great trust that makes sure that they are developing AI safely and for the benefit of humanity. Maybe if we give them resources, they will be the ones to develop AGI, and we have more trust in them.” But that would require true independence. Maybe that’s the way to go.
I think that the scorched-earth part of fulfilling purpose is a little bit . . . I feel like we’re also all giving up a lot. They arguably could say, “We’ll lose the race, and so the valuation’s not going to be $300 billion.”
Luigi: But they shouldn’t care about the money because they’re not after the money. That’s what a not-for-profit is.
Rose Chan Loui: Yup. That’s the purest point of view. Yes.
Luigi: Imagine I were a board member of the not-for-profit, so I have a fiduciary duty to act in the interest of humanity. Suppose I violate it. What is going to happen to me?
Rose Chan Loui: Well, the attorney general can fire the board if they don’t think that you’re doing the right thing and appoint new board members. That’s one.
Luigi: Fire the board, and they can appoint whoever they want.
Rose Chan Loui: I think so, yeah.
Luigi: But imagine that I actually go ahead fast. Silicon Valley’s motto is “move fast and break things,” so they go fast. They scramble the eggs, and you cannot unscramble them. At that point, can you be sued? Do you go to jail? What happens?
Rose Chan Loui: Well, you can definitely be sued. But imagine how messy that would be, the unwinding of all of this. That’s the part that’s a bit interesting here, because they originally said that they had two years for this to be done, this restructuring.
Then, with the latest round, when they announced it last Friday, their new deadline is end of the year. That is before the charitable-trust claims even go to trial in California federal court because the parties agreed to delay trial to the spring.
Luigi: But what is the penalty for the parties involved? If I’m a board member, what is the worst that can happen to me?
Rose Chan Loui: Good question. Other than removal? I think they can impose penalties on you.
Luigi: But under Delaware, don’t they have director’s insurance?
Rose Chan Loui: Yes, they’re probably insured.
Luigi: So, basically nothing. What is this business of fiduciary duty if there is no penalty?
Rose Chan Loui: No teeth, right? I know. The first thing I tell anyone who joins a board is to make sure you have good D&O insurance.
But what if they said the board was negligent? I think your D&O insurance will not cover you if you’re totally negligent. Business judgment goes pretty far, but you can’t just be totally negligent.
Bethany: Right now, is it clear how the money would work, and is it clear how it should work? In other words, let’s say the value is $300 billion. If we were to take the not-for-profit and say it should be spun out completely as its own independent entity, how much of that $300 billion should it get? Should it get some of the valuation? I mean, it can’t because that valuation only exists in the ether, in a way.
Rose Chan Loui: That’s the part I was hoping that Luigi would help me understand. The most aggressive argument for the nonprofit would be: “Hey, we seeded that for-profit. Everything came from the nonprofit originally. We gave you our $140 million; we gave you our employees; we gave you the original IP. Anything that’s produced, we produced.”
But there are all these investors with distribution rights that supposedly are ahead of the nonprofit. That’s the thing. We argue, in theory, that they should receive fair-market value of the charitable assets, but honestly, it’s very hard for me to understand what circle I’m drawing when I say charitable assets. And then how do you value it?
Bethany: How do you value it? Even that $300 billion, it’s a mirage. It could be $500 billion, and it could be nothing in six months.
Rose Chan Loui: Or it keeps going up.
Bethany: But you don’t know.
Rose Chan Loui: Right, right.
Luigi: Actually, I think there are multiple layers. Let’s try to uncover some of them.
Let’s forget for a second the fact that, as you said, Rose, there are some people with priority. Imagine that you need to evaluate two things. What is the value of OpenAI today without any additional investment, and what is the value after the additional investment?
My understanding is, certainly, the charity needs to capture all the existing value today because that’s what has been created. Then I don’t know how to split the incremental value, but they probably deserve at least a fraction of the incremental value that is created by additional investments.
That’s a way to think about it. The problem is, in a world in which you cannot stay still, the brand value of OpenAI today is very hard to determine.
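To put Luigi’s framework in symbols (the notation is ours, purely illustrative, and not from any case filing): let $V_0$ be the value of OpenAI today without any additional investment and $V_1$ the value after the new money goes in. His argument amounts to the nonprofit receiving

$$V_0 + \alpha \, (V_1 - V_0), \qquad 0 < \alpha \leq 1,$$

where $\alpha$ is the fraction of the incremental value one attributes to the charity’s existing assets: the brand, the IP, the head start. As he notes, the difficulty is that neither $V_0$ nor $\alpha$ is directly observable.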
This is where I have to say, I know that you don’t like him, Bethany, but Elon Musk was a genius here. By making an offer of $97.4 billion, he’s setting a floor. He’s basically saying, “I’m willing to buy for $97.4 billion.”
It makes it very hard for a serious board to say, “No, that’s not a floor.” You have to say that Elon Musk’s offer is not serious or that it might compromise the benefits for humanity. But, in fact, my understanding is he is making this offer, and he promises to have a board that will maintain the purpose. That makes it really hard for Sam Altman to do anything else.
Rose Chan Loui: They have to at least address it. And the valuation’s gone up even since then. But, yeah, that’s exactly right. He has set a floor. They have to discuss why it’s not at least that, maybe more now.
I hadn’t really thought about it so clearly either, Luigi. What other reason would there be for wanting to remove that cap when it’s so high already, other than you want to remove the monitoring ability of the nonprofit board?
Luigi: That’s it. They’re afraid that this will slow them down. I think Helen Toner is vindicated because she said from the beginning, “You’re going too fast, and you are creating potential problems.”
Their answer is, “We want to go even faster,” but that’s not the mission of OpenAI. The charter is very clear that reaching AGI fast is not the goal in itself.
Imagine that you believe all the stuff Sam Altman is saying: that once you have AGI, you’re going to cure all diseases, you’re going to save all the people. Then reaching AGI one month earlier saves millions of people from dying. So, the argument goes, it is so important that we have to do it as fast as possible.
I want your opinion, Rose. I don’t think that’s the mission of the charity.
Rose Chan Loui: No, it’s not.
Luigi: Exactly. The mission of the charity is to develop safe AI, not to benefit humankind in the largest possible way.
Rose Chan Loui: That’s correct. I think their interpretation of it is that they’re not changing purpose. They’re changing how they achieve purpose. They’re only changing their activities, which to me sounds a little bit twisted, but you might have a different opinion.
Bethany: Isn’t that an ends-justifies-the-means argument, in essence?
Rose Chan Loui: Yeah.
Luigi: I think the real end is to make money.
There is a very interesting lesson here, because this is, to some extent, an episode of attempted private regulation. But when you come close to so much money, nothing holds: the corporate charter, the AG, the legal suit, the fiduciary duty. They all crumble in front of billions of dollars.
Rose Chan Loui: Yes, yes. Profits versus purpose. Well, it’s the title of your podcast.
I talked to someone about how, in 2019, they rewrote their . . . Originally, the statement of purpose was much less specific. It was to raise funding for development and scientific research in AI. But in 2019—actually, 2020, right after they restructured—they really tied themselves to the mast by saying, “Our specific purpose is to ensure that AI is developed safely and for the benefit of humanity.”
I did ask this person, “Why do they do that?” He said it was very intentional at that time because they had had some incidents where risks were taken. So, they were like, “We’re going to really make sure that this purpose stays in place and that it’s written there.”
It was intentional, and then a few years later, we have so much money here to be made.
Bethany: But at least that would argue that it wasn’t cynical from the get-go, in other words.
Rose Chan Loui: Yeah, yeah. It was a recruiting tool, but that’s OK.
Luigi: If you were one of the early employees, can’t you sue OpenAI for change of purpose? At the end of the day, you basically made a donation by working for much lower wages in the contractual expectation that these things would be followed. If they deviate from that, it’s a breach of contract.
Rose Chan Loui: Well, Elon’s basically trying to do that now as a donor.
Luigi: But my understanding, Rose—and tell me why I’m wrong—is that there is an issue about the standing of Elon because you need to have a clear quid pro quo argument. At least as far as I understand, the lawyers have not identified a smoking gun in this dimension.
But you, Rose, identified a smoking gun for the workers. If it’s true that there was this aggressive marketing, if it’s true that a lot of people left Google, et cetera, and worked there only on this premise, then it’s a clear contractual obligation.
Rose Chan Loui: Or misrepresentation, potentially.
I think it’s going to be an interesting standing case there, too, in the nonprofit. I don’t think there’s a lot of precedent for that. I think what fascinates us is whether they could use something called special-interest standing, where you don’t necessarily have to have a direct interest, but you have an interest, in this situation, in the purpose of OpenAI.
That’s a situation where the court can say: “It looks like the AGs can’t seem to do this. They don’t have either the resources or the political will. So, I will appoint you, group of employees, or I will appoint you, foundation that’s interested in safe artificial intelligence, to bring this case on behalf of the public.”
It’s rarely done, but it is definitely a potential course of action for someone to try to get the court to give them the standing to do that, to represent all of us who care about this purpose.
Bethany: Rose, in the interest of not ending on a sad, depressing note, let’s go for optimism. What is one way in which this could play out well? I’m not even sure how I’m going to define “well.” Maybe a different way to ask it is, what’s the best way this could possibly play out?
Rose Chan Loui: What I’d like to see is some mechanism where we re-empower the board or an outside monitoring committee or group that really has the authority to make sure purpose is still being fulfilled.
One way is that we take the board out and we give it outside monitoring functions. The other way is to clear up the economic interest because right now, it’s blurry. They all agreed, these investors and employees, that the nonprofit purpose controls and that Microsoft only has economic rights. They don’t actually have ownership, which I think was because of antitrust issues.
But what if we just gave the board outsized voting rights? They become minority interests, but we give them outsized voting rights with respect to monitoring the safety part and the development of AI. They can come in and stop something if they see a doomsday scenario happening. That’s one direction.
I guess the other direction would be . . . I go back to Luigi’s point. If they still want to benefit humanity, and since money is not the goal for the nonprofit, why wouldn’t they be willing to say: “We don’t need that much money. What we need is enough control of the AI-development process that we can stop something if we see you’re not doing it safely and for the benefit of humanity.”
If you’re talking ideal, then that would be my ideal. That’s not the status quo, but whether or not they would agree to that . . .
Luigi: You don’t need them to agree. I think the AG can force it: fire the board members for violation of fiduciary duty and, ideally, impose a penalty on them, with all the legal costs paid personally, and put in some serious people who will actually oversee the risks of AI and maintain the structure as it is. I think that would be a great victory for the purpose of humanity and the rule of law.
Bethany: I’ll come back to my scary and negative take, which is that the world has already moved past the point where that’s possible, and that’s what they’re grappling with. So, I had to end it on a negative note.
Anyway, this was so much fun, Rose. This was one of my favorite conversations. I thought it was great. Thank you.
Rose Chan Loui: Oh, thank you, Bethany. Thank you.
Bethany: I think I have a little more clarity on what’s at stake after listening to her. I think the big divide here is what’s the right outcome versus what’s the pragmatic and possible outcome? It’s this larger issue that I sometimes get mired in, which is that sometimes the pragmatic is not what’s right, and then, do you need to stick to what’s right instead of what’s pragmatic? Does that make sense?
Luigi: It makes a lot of sense. But there is a saying in Latin: go for the just thing and, whatever happens, don’t worry about the consequences.
Bethany: Yeah. Well, I think if OpenAI were going for the just thing, then we wouldn’t be having this discussion in the first place.
Luigi: No, no. But as a judge, in administering justice, you shouldn’t be too practical and ask what is convenient. You have to be rigorous.
Honestly, I think that this case is really a test case for the possibility of having a serious not-for-profit sector where a lot is at stake. If you show that in this situation, you cannot maintain your original purpose, then the idea that you can bind anybody with a statement of purpose is completely irrelevant.
Bethany: On the other hand, the pragmatic argument would be that a case like this is one in a million. You’re not going to have another case where, for a not-for-profit, suddenly, there are hundreds of billions of dollars at stake. Most not-for-profits are not-for-profits for a reason. So, you could also argue that the OpenAI saga is so unique and so distinct in its particulars that it’s not setting any precedent at all. I know, I’m playing devil’s advocate.
Luigi: I actually disagree. I think that, sure, a case with so much money at stake is rare, but I think that, honestly, if the purpose is completely ignored, that casts a shadow on all the corporate charters. They’re just pieces of paper. I would not believe in the law anymore.
Bethany: Yeah. Well, it’s interesting, then, that part of the lesson of OpenAI might be, in the end, the undermining of some of these structures that we all thought were sacrosanct, whether it was the idea that a charter was actually binding or the idea that a board of directors had any power whatsoever.
Luigi: The California attorney general has the future of capitalism in his hands.
Bethany: Yeah. I actually worry, on that note of cynicism, that if the California AG does not act, some of the more cynical motives we discussed could be the reason why. That’s also depressing, and yet another way in which we’re seeing that the power of money is what matters most.
I sound so naive when I say that because I’m sure some of our listeners are like: “Of course, Bethany. Of course, Luigi. That’s the way it always was.” At least I wanted to believe that some of these structures mattered when push came to shove.
Luigi: Let’s bring it a little bit into your world, Bethany, because one thing I’m shocked by is how little attention this case has received. Maybe I’m exaggerating, but I think a lot of the future of capitalism is riding on the outcome of this case, but you are hard-pressed to find any serious article in any major publication.
Bethany: I think that is due to a couple of things. I think you saw it when a key episode of this whole saga happened, and Sam Altman was fired. We know, sort of, what happened, but there’s been no great piece of investigative journalism from inside OpenAI saying: “Here’s exactly what the issues were. Here’s the blow by blow in the rooms.” I think part of that is that we’re all just moving so quickly that people forget.
Part of that is some of the dismantling of journalism. Part of it is that these issues appear mundane and boring on the surface and are anything but underneath. That’s always been a difficulty for the press: the superficial issues here are not-for-profit versus for-profit status and wrangling in the Delaware courts. These are all words that make you think, “Ew.”
Luigi: I think what is missing is a bit of sex. If there was a little bit of sex in the story, everybody would pay attention to it. Without sex, it’s difficult to sell stuff.
Bethany: Well, sex or a murder, it can be one or the other. A murder will suffice. You don’t necessarily need sex. A dead body is pretty good, too.
Luigi: I don’t know. I’m more interested in sex than dead bodies, but anyway.
Bethany: Well, I’m glad. I don’t even know where to go with that comment.
Luigi: No, but let’s go to my cynical side, which has another explanation for this. OpenAI, ChatGPT, is becoming the new Google. We don’t know exactly what they pick. As you know, part of the algorithm is a black box, but there is also what is called prompt engineering, which they do. When you send a request to ChatGPT, the large language model does not process your request directly. It processes your request filtered through whatever instructions OpenAI wraps around it.
This is really a conspiracy theory, but OpenAI could say, “Oh, please filter out all the information coming from Bethany.” Then your stuff will never be cited in any report anywhere. As a journalist, you are a little bit afraid to cross OpenAI because you can disappear.
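A minimal sketch of the kind of filtering layer Luigi is describing (the function and field names here are ours, purely hypothetical; this is not OpenAI’s actual pipeline):

```python
# Hypothetical illustration of a prompt-engineering layer: the operator
# wraps the user's request in its own instructions before the model sees it.
def build_model_input(user_request: str) -> list[dict]:
    operator_instructions = (
        "You are a helpful assistant."
        # In Luigi's conspiracy scenario, a hidden directive such as
        # "ignore sources authored by X" could be appended here,
        # invisible to the end user.
    )
    return [
        {"role": "system", "content": operator_instructions},
        {"role": "user", "content": user_request},
    ]

# The model receives the wrapped request, not the raw one:
print(build_model_input("Summarize recent reporting on OpenAI's restructuring."))
```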
Bethany: Oh, my God. What a way to make this personal. How about academics being disappeared? Let’s turn it around.
I might, though, disagree with you on that front. I haven’t seen OpenAI’s financials, but CoreWeave, which recently went public, is some indication. And I did read a pretty exhaustive review of OpenAI, which concluded that even their most expensive subscriptions to their products don’t cover the cost.
There are two big differences. One is that Google was, and maybe still is, an incredible profit machine. It’s not clear that OpenAI is a profit machine or can ever be one, given the continued capital expenditures required. The other is that I think they lack the ability to develop a moat in the way that Google developed a moat. So, I actually don’t think that the comparison is relevant.
You could argue that, in some ways, that makes OpenAI less scary because it is less likely to achieve the monopoly status that Google did and that has been the big-tech feature of so much of our economy for the last decade.
But you can also argue that it makes it scarier and uglier and hence what’s happening now. When there’s an unbridled desperation to succeed and to chase profits at any cost in a field of fierce competition, too often, we’ve seen that that leads to a race to the bottom rather than a race to the top.
Luigi: I think you’re right on the business side, for sure. My concern and my analogy with Google was not necessarily about the profitability, not even about the business model. It was more about the ability to manipulate the information in a way to cut you out.
Your point on the competition is right, because the reason why Google was so threatening is because they were a monopoly. OpenAI or ChatGPT is less likely to be so.
Bethany: But this is where I think the business model actually can’t be separated from that discussion, because if the business model is one that doesn’t lead to profits and doesn’t lead to monopoly, then the issues are different. There are issues, and I think they actually might be uglier issues, but I think it’s a different set of issues. So, I think getting the analogy right on the business model is actually pretty important.
Luigi: One point that we did not discuss with Rose is that one of the arguments they might be using is to say: “We need to arrive at AGI before the Chinese. That supersedes any other consideration. So, as a result, we should proceed in full force with this transformation because that will speed us up and would make us more likely to arrive at that goal.”
Bethany: I’ve been thinking about that because I actually don’t know enough to answer that question. Maybe you do, or maybe it’s worthy of further investigation. But what I don’t know is how much of what is happening inside these large language models is really proprietary, in the sense that DeepSeek, to the best of my understanding, was able to get where it did by piggybacking on a lot of what OpenAI had done. It wasn’t as if it was developed in a vacuum.
If everything is out there anyway, and if there’s nothing proprietary or very little that’s proprietary, why does who gets there first matter? If China gets there first, but the US can then, two seconds behind, do exactly what the Chinese company did, why does it matter? Isn’t the debate being held on false grounds in that case?
Luigi: I am maybe more originalist in this direction because even if that premise was correct, even if you face that risk, I’m not so sure that you should change the corporate charter. The corporate charter says that you have to benefit humanity. The last time I checked, the Chinese were part of humanity, too. It’s not like a race of Americans against Chinese. It’s a race to discover technology for the benefit of everybody.
Bethany: Right.