Capitalisn't

What Everyone’s Getting Wrong About AI, with Arvind Narayanan

Episode Notes

Every major technological revolution has come with a bubble: railroads, electricity, dot-com. Is it AI’s turn? With investments skyrocketing and market valuations reaching the trillions, the stakes are enormous. But are we witnessing a genuine revolution—or the early stages of a spectacular crash?

Princeton professor Arvind Narayanan joins Luigi Zingales and Bethany McLean to explain why he believes AI’s transformative impact is overstated. Drawing on his book AI Snake Oil, co-authored with Sayash Kapoor, Narayanan argues that capitalism’s incentives can distort technological progress, pushing hype faster than reality can deliver. They examine how deregulation, geopolitical competition, and private control over data shape the trajectory of AI’s development.

They also explore what could happen if the bubble bursts: massive market shocks, exposed structural weaknesses in the economy, and a wave of painful restructuring that could echo the dot-com crash—but on a far larger scale. It’s a conversation that cuts through the hype and asks what’s at stake when an entire economy bets on one technology.

Episode Transcription

Arvind Narayanan: One possibility is that we are in a bubble, and that bubble bursts, but over a period of the next couple of decades or so, we gradually do manage to productively deploy a lot of the applications that are leading to this moment of hype. That would, in many ways, be very similar to the dot-com bubble.

Bethany: I’m Bethany McLean.

Phil Donahue: Did you ever have a moment of doubt about capitalism and whether greed’s a good idea?

Luigi: And I’m Luigi Zingales.

Bernie Sanders: We have socialism for the very rich, rugged individualism for the poor.

Bethany: And this is Capitalisn’t, a podcast about what is working in capitalism.

Milton Friedman: First of all, tell me, is there some society you know that doesn’t run on greed?

Luigi: And, most importantly, what isn’t.

Warren Buffett: We ought to do better by the people that get left behind. I don’t think we should kill the capitalist system in the process.

Luigi: If we’re examining American capitalism today, there is hardly anything more important than AI. In the first half of 2025, AI-related capital expenditures contributed 1.1 percentage points to US GDP growth, outpacing US consumer spending as an economic driver. Investment in data-center construction is projected to surpass investment in traditional office buildings in the same year. And 71 percent of equity venture-capital investments this year are in AI-related industries.

Bethany: AI has also been a huge contributor to the stock market of late. All of this implies that if AI turns out to be a bubble or, more simply, an overhyped technology, the US economy could come crashing down very fast. Those who are old enough can remember the hangover from the dot-com bubble. It was not pretty.

Luigi: Of course you cannot remember.

Bethany: Very funny.

Luigi: Yet there is another, deeper reason why we at Capitalisn’t are interested in AI. As today’s guest wrote in his book, it is ultimately the fear of capitalism, the fear that capitalist incentives will destroy any guardrails on the development of AI, that fuels the fear of AI.

Bethany: Many AI experts end up being AI advocates or, even worse, AI prophets. I searched hard for an AI skeptic, and I landed on Arvind Narayanan, professor of computer science at Princeton University and a coauthor of this very influential book called AI Snake Oil. He also wrote a recent article with a less snazzy name—or a less pejorative name—entitled “AI as Normal Technology.”

Back in 2016, he wrote a book about Bitcoin and cryptocurrency technologies. But now he’s moved his interests to the space of AI.

Luigi: But these titles are a bit of a stretch. His book does not say that all AI is snake oil, but only some of it: predictive AI. And his article argues that AI is “normal” in the way electricity was normal, which was still a huge deal. These AI-driven transformations of the economy, as with electricity before, will likely unfold gradually across decades, not years.

Bethany: His article has started an active conversation with “AI 2027,” an article written in April 2025 by another scholar who predicts that fully autonomous AI agents will be better than humans at just about everything by the end of 2027. This piece imagines its impacts on the economy, domestic politics, and international relations. The dialogue is basically around just how transformative AI will be.

Luigi: Without further ado, let’s bring Arvind onto the show.

In the book, you talk a lot about AI and capitalism. Why is this so important, and what is so special about it?

Arvind Narayanan: This is a quote, I think, from Ted Chiang, if I remember correctly: “Fears about new technologies are often really fears about capitalism.”

How is this going to reallocate benefits and costs in our society? Throughout the book, we document over and over—not just in this general, abstract sense, but in specific ways—how the haphazard development and deployment of many AI tools and technologies has been such that companies get the profits, but the costs are often borne by others.

A classic example, of course, is that chatbots were unleashed upon the world, and then overnight, you have educators both at the K-12 level and at the college level scrambling to figure out how to change their testing practices, their curricula, all within a period of a year or even a few months. Now you have an epidemic of students using chatbots. That’s just one example of privatized profits and externalized costs. But this is a very common phenomenon, I think, in modern AI.

Bethany: When you think about some of the traditional brakes on technology being deployed, do you worry that with this arms race that we are entering into with China, and this increasing conflation between the government and the big technology companies . . . This government seems to have decided that it’s a matter of our survival to beat China, which you can extrapolate into meaning turn the tech companies loose, let them do whatever they want. Are some of those traditional means of slowing down deployment going to get overrun? More broadly, is this leading to a confluence between a certain type of corporation and government that is perhaps dangerous for the future?

Arvind Narayanan: That is absolutely something to worry about. In particular, when I hear rhetoric about a Manhattan Project for AGI, for instance—AGI being Artificial General Intelligence—I do think that’s deeply misguided.

We’ve written an essay, for instance, arguing that whatever benefits we imagine are going to come from artificial general intelligence will not be realized in one moment, when you’ve built some silver-bullet technology, but rather over a period of decades, as you diffuse that technology throughout society.

In addition to this problematic confluence, a lot of the thinking around AGI is very short-termist, as if the future beyond the next few years doesn’t really matter. That’s another way in which a lot of this thinking is deeply troubling.

But let me also suggest some reasons for cautious optimism. While, in many ways, I think the US is lagging behind on regulation of AI—we’re seeing very clear harms, for instance, from chatbots and teen mental health and so forth—at the same time, in a lot of cases, the harms that might potentially arise are in sectors that are already highly regulated. That’s something that people often forget when they talk about AI regulation as a Wild West.

A lot of people are worried about irresponsible use of AI in medicine, for instance. But when you look at the evidence, actually, the medical field is extremely conservative when it comes to technology adoption, simply because there is so much regulation. Any medical device has to be approved by the FDA, whether it uses AI or not. There are professional standards. People can get sued if they over-rely on AI tools, whether it’s traditional machine-learning tools or chatbots in the context of healthcare.

That’s the reason why, when I look at some of these panicky headlines like, oh, two-thirds of doctors are using AI, and you dig into the American Medical Association survey from which those headlines come, the data is actually very healthy in terms of what doctors are doing with chatbots and these AI tools. Most of them are using them for things like transcribing dictated notes and so forth, which is very much the kind of thing doctors should be doing, even though there are some risks. There can be guardrails around those so that more human time can be freed up for better-quality medical care.

I’m not seeing any evidence of doctors just YOLO-ing it and abdicating their responsibility to patients and delegating their care to chatbots. That sort of thing is not happening. It’s a very conservative, very regulated, very responsible sector.

Luigi: In reading your book and your articles, I am a bit confused. This is not your fault; it’s my fault. I’m a bit confused about what of the following things you are saying. Maybe it’s all of them, but feel free to pick.

Number one, I read that, yes, AI is good, but it is not as transformational as many people make it out to be. That’s one point. The second is, it is going to be slowly adopted, in part because of human inertia, but also in part because of, as you described just now, political resistance at some level and existing regulation.

But the two things must go hand in hand. There are some people, the accelerationists, who think that AI is the greatest thing that will ever happen to humankind. They might not disagree that there is political resistance, but they want to eliminate it all. Part of what the Trump Administration is doing is to gut all regulation: environmental, consumer protection, and the like. Somebody might say, this is what we need, because we need to reach AGI as fast as possible, and anything that gets in the way has to be eliminated.

Arvind Narayanan: I don’t think you misunderstood our views. We’re very opposed to that. I understand that some actors have a strongly deregulatory approach, and we disagree with that. We not only disagree with that normatively—this is not a thing we should do—but also, I don’t think it is happening very successfully.

Certain kinds of environmental regulation are one thing. But, again, it’s not as if you can transform the medical system so that there’s no regulation of medical devices or remove liability laws so that doctors engaging in malpractice don’t get sued. That’s not really a thing you can do and get away with.

Luigi: This is where I think there is some wishful thinking on your part. I don’t think that it is something that is completely out of the feasible set. We know already that some people are thinking about this little nation-state in which everything is permitted, including medical devices without FDA approval. Certainly, no liability for doctors.

There is a part of the Silicon Valley universe that is really, really going in that direction. I’m fearing that unless you merge this with the fact that, actually, AGI is not that feasible or not that great, your warnings are actually fuel to the fire of saying, we should eliminate more.

You’re saying, don’t worry because there is regulation. But they say, actually, we do worry because we want to celebrate when we eliminate all that regulation.

Arvind Narayanan: Yeah, a lot of what the book is about is talking about the importance of having regulation—having more regulation, in many cases. It’s not, “Don’t worry.” I don’t think that’s the theme of the book. There is a lot to worry about, and that’s why we wrote the book.

I also don’t doubt that many people in Silicon Valley do want a completely deregulated environment. I don’t think they’re close to getting it, but I understand that they are pushing for that.

I think we should push back, and part of why we wrote the book is to help us push back by pointing out that we’re not going to get to some kind of utopia in a short timeframe. Therefore, if we dismantle our civil liberties, we’re going to pay a lot of the costs without reaping any of these promised benefits. In fact, we hope that our book can play a small part in resisting those crazy efforts.

The last thing I will say is that if they succeed in doing that, to me, it’s not so much an AI problem as a democracy problem. That’s a much bigger conversation. It’s not so specific to AI harms at that point.

Bethany: To keep pushing on this point, Marc Andreessen, of course, has famously stated that slowing down any AI development will cost lives, and that it’s this moral obligation to accelerate it as fast as possible. Is there anything in what he said, or anything in what anyone has said, that has challenged your view or made you think twice about the core idea that it will be slower, and slower is better?

Arvind Narayanan: Sure. There are lots of potential benefits from AI. We consistently acknowledge that, for instance, self-driving cars have the potential to save a million lives per year, which is the number of lives that are lost in car accidents throughout the world. There are many things to figure out: labor impacts, certain new types of risks that they might introduce. But I do think all of that is worth it because of the sheer number of lives saved at the end.

We try to be clear about that in the book. But that is not a call to conflate all the different kinds of AI or to argue that regulation never has its place. One of the central things the book tries to do is break apart different applications of AI so that we don’t lump it all under one umbrella, and instead be clear about what application we’re talking about and what the benefits and the risks are.

The final point I will make is that, for us, a lot of this comes down to the speed of deployment, not so much the speed of development. In our view, it doesn’t matter that much if we accelerate or decelerate the development of general-purpose AI systems.

What matters far more is the speed and the nature of the integration of those AI models and systems into our institutions. We’d much rather have the conversation of, should that proceed faster or slower? The answer is, it depends. In many cases, there is an argument for faster, such as when it comes to self-driving cars. In many other cases, there is an argument for slower.

Bethany: On this theme of development versus deployment, you’ve talked about this idea that AI as it is now can create a set of prizes based on an ability to beat a benchmark. But the easier a task is to measure via benchmarks, the less likely it is to represent the complex contextual work that often defines professional practice.

To explore that more, can you talk about it through the lens of law? That’s one area where, on the surface, you might say that whole industry, particularly at the junior level, is going to get wiped out by AI. AI can do all the brief writing and everything that used to require an army of more junior lawyers. That also feeds into a question of, if you don’t have junior lawyers, how do you train senior lawyers? But I’m going on a tangent. What would you use to best explore that idea?

Arvind Narayanan: A couple of years ago there was this company, Do Not Pay, that started out as a way to dispute parking tickets and things like that. But they had bigger ambitions, and they claimed to have built a robot lawyer. Their pitch was that if any lawyer used AirPods to argue a case in front of the Supreme Court, just repeating what this robot lawyer said, the company would pay them a million dollars because of their service in demonstrating the superiority of this robot lawyer.

Now, this was never real. They surely knew that this was nothing more than a publicity stunt because electronic devices are not even allowed in the Supreme Court. This was never going to happen. But it went viral because, for a lot of people, it at least seemed plausible that this company had managed to build a robot lawyer.

There was none, and they got into trouble with the FTC. You can read the complaint in detail, all kinds of misrepresentations and lies. There was no robot lawyer. But why did it even seem believable? There is so much hype from companies themselves and also press reporting that conflates performance on benchmarks like the bar exam with AI being useful in real-world legal settings.

The simple fact is that a lawyer’s job is not to answer bar-exam questions all day. That is a good example of what I mean by the simpler the task is to measure via benchmarks . . . Bar exams are something that you can grade with yes or no or multiple-choice answers, and so, they tend to be heavily relied upon in the evaluation of AI for various things. But they tend to be very different from the kinds of tasks such as brief writing that lawyers actually do for most of their day.

When you look at those more complex tasks, it’s hard to auto-grade them. You need actual experts to be grading how well AI is performing at those things. Only very recently are we seeing credible efforts to make those kinds of measurements.

OpenAI, for instance, has something called GDPval, where they actually pay experts to grade AI performance on various real-world tasks. That’s extremely expensive to do, right? Thousands of times more expensive than automatically measuring AI performance on some benchmarks. That’s why we’ve been very slow to see these more valid ways of measuring how AI can be useful in the real world.

Let me make one final point on this. AI for law is one of my favorite applications to talk about because it helps illustrate many fallacies. OK, yeah, maybe AI is improving at doing things like brief writing. What’s going to be the impact on the legal profession?

These fears are not new. More than 15 years ago, there was a New York Times article looking at some of the simpler types of AI and digital technology that could do a lot of the paperwork that lawyers do. It predicted that the number of lawyers was going to go down. In fact, it’s gone up a lot in that time.

There’s a very simple economic framework for looking at this. Instead of looking at AI’s impact on the supply of legal work, let’s look at the demand for legal work. If it becomes cheaper for lawyers to produce a unit of legal work, what’s going to happen? The number of lawsuits filed is going to go up because that has now become cheaper.

The simple fact is that there is not a fixed amount of legal work in the world, and that is true of many different domains. The demand is actually highly elastic, I think is the term that economists would use. Many of you can probably tell me if I’m using the right terms or not.

These simplistic ways of looking at the effects of AI on the labor market, I think, are very unfortunate. They’ve led to a lot of confusion and misinformation and should be rejected. It’s never a comparison of AI versus purely human skills unaided by technology. It’s always AI versus human plus AI. It’s unclear why a human-plus-AI team would be any worse—and it usually would be better—than AI acting alone. Full automation in most professions, I think, is going to be very much the exception rather than the norm.

Bethany: You have a quote you’ve used a couple of times in your writings, which I’d love to have you explain: “Broken AI is often appealing to broken institutions.” What do you mean by that?

Arvind Narayanan: Way back around 2018 or so—and this is one of the things that eventually led to us writing this book—I observed that hiring-automation companies were advertising their supposed AI systems that could do what are called one-way video interviews.

The pitch was these AI companies would go to HR departments and say, look, you’re getting so many applications, maybe a thousand for each open position. You can’t manually review all of them, so just have each of your candidates upload a 30-second video. Our software is going to analyze not even the content of what they say about their qualifications for the job, but body language, facial expressions, various other things, in order to predict people’s personality, job suitability, and that sort of thing.

It was very clear to me as a computer scientist that there was no known way in which this could work. I started calling out this kind of thing, and that resonated with a lot of people. Eventually, we realized that there were a lot more of these types of AI products that are not just overhyped but, as far as we can tell, don’t seem to really do anything. I called it an elaborate random-number generator.

The puzzle, to me, was why, despite the fact that thinking about it for a few minutes should make clear that this can’t possibly work, this was so appealing to so many of the people who were buying the stuff. It made me realize it’s not because they’re getting fooled. It’s because even if it is a random-number generator, it actually works for their purposes. It allows them to have some seemingly objective way of saying, this is how we filtered down these 1,000 applications to these, I don’t know, 20 applications that you can then do a manual interview for.

They never use it for the kinds of jobs that are valued and paid highly. It’s for things like customer service and tech support and these types of things, where they don’t really seem to care that much about candidates and finding the best candidate for the job.

Our view was that this is one example, and there are many other examples, showing there is already something broken about the process. The hiring process is not working. Whatever process they have now, it’s not allowing them to identify amazing candidates. So, their view is, if this pretense that this AI system is unbiased and accurate is allowing them to cut down on their costs while essentially continuing to do the same thing they were doing before—not care too much about which candidates they were hiring—then it’s a win-win.

Unfortunately, this kind of thing seems to be all too common. It’s not merely a matter of saying this AI system doesn’t do what it’s supposed to do, but looking at the underlying system, seeing what’s broken about it and whether that can be fixed.

Bethany: This might be too basic a summation, but one of the things I’ve been thinking about is that one potential cost of AI is what it wipes out, mainly in the form of jobs. But another potential cost of AI is what it prevents. That would be a more invisible cost in the sense that we won’t be able to see what we’re losing. It’ll just be lost.

Something you’ve written has really stuck with me, this idea that it may prevent real scientific breakthroughs: that if we start to rely on this endless loop of iteration in science, just trying to squeeze more out of what we know, instead of making the real creative leap toward what we don’t know that has characterized most of the great scientific discoveries, then AI will actually stymie our progress rather than advance it. Am I summarizing that well? How do you think about that?

Arvind Narayanan: That’s right. I do think AI has great potential for science if it is used right. Right now, I don’t think we’re on that path. I think right now, as a scientific community, we are misusing AI more than we are using it responsibly.

That’s for a few reasons. One metaphor that I like is adding more lanes to a highway to try to fix a traffic problem, when the real problem is that there is a tollbooth that everybody has to pass through. By adding more lanes, you’re only incentivizing more traffic, which is going to make the congestion worse. In many ways, I think that’s what we’re doing as a community with AI for science.

Here’s what I mean. What are some of the real bottlenecks to scientific progress? It’s not producing more papers. We already produce millions of papers per year. That number has been growing dramatically over the decades. It’s not leading to true breakthroughs. There has been a lot of hand-wringing about this already in the scientific community. One pattern that’s been observed, in a paper by Chu and Evans, is that when a field gets larger, when papers are coming in at a faster rate, it actually gets harder for scientists to wrap their heads around everything that’s going on in that scientific community. So, they have a tendency to gravitate towards the most prominent, the most already-popular, the most central ideas in their field. This actually slows down the entry of new ideas that might be radical at first, that might challenge the currently prevailing ideas.

We’ve seen throughout the history of science that when the new, radical idea—the idea that the sun is at the center of the solar system, for instance—was first spoken, it was almost unthinkable, right?

How do you go from radical-fringe idea to the new consensus? That’s the real bottleneck. It’s not accumulating more facts within existing paradigms. It’s realizing that something you thought you knew is actually wrong and moving to the next paradigm that challenges what we think we know.

The way that AI is being deployed is more in the accumulation-of-existing-facts phase. What it’s doing is further increasing this traffic jam where there are too many papers coming in, so that scientists are forced to gravitate toward the comfort of clinging to the ideas that are already popular, which is making it harder for this inherently sociological process to play out.

Determining what is radical and nonetheless worth experimenting with further, so that it might one day become the new normal, is not a job for AI. That is, at least for now, intrinsically a job for humans. Therefore, this avalanche of new, greater productivity, partly due to AI, is actually jamming up that process. There’s more to say on that, but that’s the fundamental, core concern.

Luigi: Speaking of research, you raised a point that is very dear to me, which is the influence of control over the data on the entire ecosystem of research. Today, most of the most valuable data are in the hands of private firms. Number one, they don’t share it widely. Number two, they pick and choose the people whom they share it with to basically shape what kind of research is published.

You claim that other fields do much better, like medicine. I think that is a bit too optimistic. But I think that this is a gigantic problem that we’re all facing, and we don’t have enough antibodies yet.

Arvind Narayanan: Yeah, let me speak to my field, computer science, so that I don’t say anything about medicine that you might find too optimistic or inaccurate. In the nascent years of computing, the idea that computing technology could have negative impacts was not really something that people paid much attention to. So there wasn’t really any perceived need for a culture of computer scientists being independent of tech-company interests and serving as an external voice of research and accountability.

Sorry to violate my promise of not talking about medicine. I think that is a way in which it’s very different from medicine. On the few occasions when I go give a talk at medical conferences, the seriousness with which they treat conflicts of interest in asking me about all of my commercial relationships with companies . . . They’re, of course, worried about pharma companies.

But I wish that in computer science we had anywhere near that level of scrutiny of our academic computer scientists’ relationships with these technology companies. We don’t. It’s just assumed that every computer-science professor has ties with big technology companies and/or has a startup on the side.

That’s quite unfortunate. Not everybody needs to take some kind of vow of purity and not have anything to do with tech companies, but I do think we need some subset of independent technology experts in academia who don’t take money from and are otherwise intellectually independent of tech companies. We don’t have that today.

All of that contributes to the problem of tech companies owning all the data and making external researchers essentially subservient, because if they want to do good research, they’re so dependent on access not just to data but also to computational power from technology companies.

Bethany: I think that’s such an important point. Luigi has gotten me hooked on this idea, too, because I also think it’s one of the things where the external impression is completely different from the reality of it. Most people on the outside would think that most academics are the unbiased arbiters of truth and would not understand all these subtle ways in which they’re not.

Because this is something that I’ve always been interested in, I want to go back to this notion of what element of humanity or what it is in humans that leads to a true leap of genius. It’s kind of like the Kierkegaardian leap of faith. If you think about Einstein or Newton, when people have come up with this entirely new way of visualizing the world, it’s just been a radical departure. Sometimes the math hasn’t even been there to justify it. It’s just a way of seeing the world that’s almost philosophical.

Is it that our current reliance on LLMs can never get there? Is it that we don’t know whether they can or not because nobody fully understands why LLMs spit out the answers? If there is a way to capture human genius, do we need to fundamentally rethink AI?

Arvind Narayanan: The simple answer is I don’t know. But I can share some thoughts on that. One, I can say that whatever that spark is, I don’t think current LLMs have that. I do want to clarify that I’m not saying that it’s impossible. I do think it would require different approaches in at least two ways.

One is that if you want to increase creativity without turning it into just a random-number generator, to have creativity while also retaining some kind of grounding, that would be a very different kind of product from a chatbot that’s useful to everyday users. You can’t expect both of these from the same system. An LLM or any kind of AI model or system that had that level of creativity is just going to have too much randomness to rely upon for everyday use, when someone just wants it to do some shopping or whatever.

I think scientists are notoriously poor at doing everyday, mundane tasks. I don’t know, maybe there is something deeper to it. A lot of creativity is just randomness, frankly, just exploring lots of random paths. That just makes them very poor assistants for everyday users.

For that reason, that’s not necessarily going to come from the companies that are building and monetizing these chatbots on a scale of billions of users. Maybe that has to come from a different kind of research effort.

I know that there are some such research efforts out there, but the scale of it just completely pales compared to how much effort is being put into making these chatbots more engaging for everyday users. That, to some extent, belies all of this talk that we’re building AGI because we want to cure cancer. If that’s what you really wanted to do, you would be putting a lot more effort into addressing some of these scientific limitations and making AI actually useful for science, as opposed to making it useful for everyday users.

Luigi: You use a very apt adjective, “subservient,” referring to academics vis-à-vis Big Tech. I fear, and I think you mentioned it in your book as well, that this is not only true of academics but also of journalists. This is particularly problematic because we’re in a moment in which AI is hyped a lot. One of the functions of true journalism is to share the bad news. Bethany has done her fair share of this—

Bethany: Thanks, Luigi.

Luigi: It’s very important because what keeps the market from going completely crazy is, in part, the fact that you have negative news. If you all of a sudden put a muzzle on the negative news for a while, it goes crazy.

I fear that we are in that phase, that we are in a bubble, after all. Every big technological innovation brought so much hype that there was a stock-market bubble. It happened with railways, it happened with electricity, it happened with the dot-com companies. We’re due for one here. Are we in the middle of a bubble? And why?

Arvind Narayanan: I don’t know if we’re in the middle of a bubble. Here’s one possible way in which things might play out. I think it’s quite possible that a lot of the hopes and expectations that have led to this level of investment in AI actually do correspond to real potential, but that these are not things that are going to be realizable on a two- to three-year or whatever kind of timeframe that’s relevant to investors, especially considering that GPUs depreciate on a very rapid timescale.

One possibility is that we are in a bubble, and that bubble bursts, but over a period of the next couple of decades or so, we gradually do manage to productively deploy a lot of the applications that are leading to this moment of hype. That would, in many ways, be very similar to the dot-com bubble. A lot of the excitement that led to the dot-com bubble related to things that people hoped that we would be able to do online. What turned out to be premature in the 1990s are things that we take for granted today. I do think that’s possible, although I also think there are lots of differences from the dot-com bubble.

Luigi: There is one point where we strongly agree, which is on Section 230 of the Communications Decency Act. I’ve been saying for years that algorithmic curation should not be shielded from liability. Now, I thought that this shift would require a new law. What I found very intriguing is that you argued that, if properly interpreted, the current version of Section 230 leads to this conclusion. Is that true? Can you explain that?

Arvind Narayanan: I unfortunately can’t talk about this on the record. Since writing the book, I have been retained as an expert witness in a lawsuit against most of the leading social-media companies on this point. I do want to be very careful about what I say on this in public.

Luigi: I’m glad you’re suing the social-media companies. We’re sympathetic to that.

Bethany: That’s actually very exciting.

Luigi: Last question, if we still have time. The one point where you are not on the fence, where you’re clearly optimistic, is that you don’t think that the world will end in 2027. I’m referring to “AI 2027,” which has a pretty scary view of the world. Why is the scary view wrong?

Arvind Narayanan: Look, we can’t be sure that the scary view is wrong. I’m glad that they’re doing the work that they do and are talking about the kinds of scenarios that they think more people should think about.

I do think researchers should be thinking about those kinds of scenarios. We think a lot about those kinds of scenarios. But I think it relies on a whole set of assumptions. It relies on AI developers and researchers being able to get to human-like AGI in a very short span of time.

We’ve been doing a lot of benchmarking of these AI agents, and we see very clearly that even though they can superficially solve a lot of tasks that are associated with human intelligence, the way they go about it is just so far from the flexibility, generality, adaptability, resilience of human intelligence. When you take them slightly out of the kinds of inputs they’re used to, the performance of AI systems can catastrophically drop.

We’re seeing so many limitations, and we don’t think this is just a matter of scaling or even one scientific breakthrough away. This is many scientific breakthroughs away. I do think, just on the tech side, we have a lot of time. That’s one point. That’s not even the main point.

Our main point is that it doesn’t even matter so much if the capabilities of AI systems improve so rapidly. We have a lot of agency as individuals, as companies, as institutions, as policymakers in how we choose to deploy these things. These risks are going to be realized when we deploy AI systems, not just when we develop them.

We have a whole essay called “AI as Normal Technology,” which is also our newsletter that we’re developing into a book, which really talks about how to exercise this agency. It’s not a matter of saying everything is definitely going to be OK, but it’s a matter of, here are things we can do collectively in order to ensure that things stay on a good path. It’s a prescription more than a prediction.

Bethany: A lot of his arguments rest on the idea that there are systems in place and structures in place in our world that will keep us from having to confront these questions all at once. I think it rests on a second assumption, too: that those systems, which will make us confront those questions slowly, are also good. That friction, for lack of a better word, is good. Does that sound like an accurate summary?

Luigi: Yeah, absolutely. I think that he assumes that this is in place, when I think part of the crucial debate is, from a societal point of view, what should we do? If you just assume the technology will be slow, by definition, there’s nothing to be done. It’s kind of a positive, optimistic laissez-faire, assuming that the government works very well.

Bethany: Yes.

Luigi: Maybe because I’m pessimistic, I don’t think that the government works very well, especially when there are powerful forces that want the government not to work in that direction. It seems to completely ignore that there is massive pressure from Silicon Valley to get rid of all these frictions.

Bethany: It’s funny, I’ve also been subscribing to this AI newsletter, and I’ve been thinking increasingly that AI is getting integrated into things in ways that are not as obvious as he thinks they’re going to be. Because it’s not as obvious, it’s sort of creeping in around the back door.

Therefore, because it’s creeping in through the back door, the systems won’t be in place to prevent it. I really want to believe in the simplicity of his viewpoint, or the seeming simplicity of it, that deployment will be different than development. But I didn’t come away convinced from the conversation that that was right.

Luigi: If he says that, at the end of the day, AI is such a “normal technology,” then the acceleration that we fear or we desire might not be there, and so, even the pressure to think hard about this societal trade-off disappears.

Bethany: Yeah. That’s dangerous in and of itself, right? Normalizing AI might actually be worse than not normalizing it because it’s so comforting to all of us to believe that it will be like electricity, when it may not be. I think what you’re getting at is that it’s false comfort.

Luigi: Yeah, absolutely. But first of all, I want to be very clear, I did learn a lot from his writing, and it did change my mind a bit. I think that his notion that AI might be a “normal technology,” not so accelerating, might be true. GPT-5 was not the leap forward from GPT-4 that people expected. Maybe this technology, at least the current generation of LLMs, is reaching the point of decreasing returns to scale. Yes, it is going to continue, but it’s not going to continue at an accelerating pace, as we have seen in the last few years.

Bethany: Well, I want to believe that, I think because I’m pro-human and because I do worry deeply about the impact of AI at accelerationist speed on our economy. I don’t think there’s any way, given the current state of our society, that we’ll have anything in place that will deal with it. If it is what its proponents say, it will be like the China shock times, I don’t know, 10,000 times, a million.

My bias is to believe that, but I’m not really sure it’s right. Did you see there was a new gauge published by OpenAI last week called GDPval? It’s basically evaluating leading AI models on real-world tasks that have been curated by experts from across 44 different professions.

My long-time friend and former colleague Jeremy Kahn, who writes for Fortune and is a great thinker about AI, wrote a piece about it that was eye-opening to me. His point is, basically, that some of these studies showing that AI workslop is a drag on productivity and that there aren’t really any advances being made are often not quite what they’re cracked up to be.

I’m really not sure what to think. Even as we’re all taking comfort, for example, in his point of view that LLMs can only do what a lawyer might be asked to do on the bar exam but can’t do what a lawyer does in real life, there’s work being done that would suggest that it’s already changing, even as we’re looking at it.

Within 15 months, the models went from being barely credible to producing work that’s rated as comparable to lawyers almost half the time. But still, this piece I’m reading now would end up supporting our guest’s point of view. GDPval is still focusing only on written deliverables, and lawyering is so much more than that. Negotiations, courtroom advocacy, client counseling, ethical decision-making, they’re all outside of this frame.

Luigi: Did you say ethical decision-making?

Bethany: Yes. Ethical decision-making. I did say that. I think it comes back to what you said. Every single thing has to be very grounded in the specifics. For any task, it’s just really hard to generalize.

Luigi: Yeah, that sounds a bit like historians. They always say, it depends.

Bethany: I know, I know.

Luigi: But I think that we can generalize one thing: it is going to be disruptive for every kind of job. What is difficult to establish is for which jobs the disruption will be mostly positive, because it enhances productivity, and for which there will be a lot of displacement.

This is not a reason to block the benefits of AI, but it is a reason to think very hard how we can soften the societal consequences that these technological changes will bring about.

I like very much his position vis-à-vis the famous Section 230 of the Communications Decency Act. He shares my view that you should make companies liable when they actively edit in the form of promoting some posts versus others.

Now, he claims that you can even use the existing law to make them liable. Unfortunately, he didn’t want to talk about it because he’s an expert witness in a case like this. But I think that that would be very, very useful.

He also has a very thoughtful position on copyright issues, which we didn’t have time to discuss. He basically says we need to have the political system intervene in the trade-off between the incentive to protect the producers of art and the incentive to distribute their works to the largest number of people. That trade-off is very much driven by the existing technology, and the existing technology has changed. We need to have this discussion politically.

Unfortunately, we don’t have either of the two sides ready to have this discussion in a calm way. I fear that we’re going to try to limp along with outdated institutions. That, to me, is the greater frustration.

Bethany: I did also think, separately, that you raised an interesting question that there’s a lot of debate about today, which is, is AI in a bubble?

I do think that is a separate question from what the transformative effects of AI will be, because I think it’s possible that both things are true, that AI is in a huge bubble, and that the collapse of that could be a first-level wave of damage before we even get to the damage wrought by AI taking people’s jobs.

In other words, it’s possible that there will be all sorts of levels of damage from a stock-market wipeout that turns into an economic wipeout, before we even get to the real problems caused by our jobs going away.

Maybe that’s actually the most pessimistic take possible, that yes, AI is in a bubble. Yes, the wipeout is going to be terrible. And, yes, AI is also fundamentally, incredibly transformative, and it’s going to take all our jobs.

Luigi: I think it would be very painful. We’re going to have all the problems being created by Trump pushed under the rug because we have this big bubble driving the economy. And so, we’re going to have the bursting of the bubble, with its normal consequences, plus the emergence of all these underlying problems, and that double coincidence will be very painful. The possibility is that this will start a lot of restructuring that eventually will be positive. But in the short term, I see pain ahead.

Bethany: Yeah, I think it goes back to the conversation we had about Japan, actually, which is that in many cases, bubbles are masking underlying weakness. I worry that the current AI bubble is masking underlying weakness in our economy. When it bursts, that underlying weakness will also be part of the problem.

There are these incredible numbers from Michael Cembalest at JP Morgan. He wrote that since the release of ChatGPT in 2022, AI-related stocks have accounted for 75 percent of S&P 500 returns, 79 percent of earnings growth, and 90 percent of capital-spending growth.

If all of that collapses and goes away, it’s going to be extraordinarily painful on a scale that puts the dot-com bubble to shame. A friend of mine actually reminded me recently, remember Pets.com, how that was the celebrated collapse from the dot-com era? Guess what Pets.com’s peak market cap was? About $300 million. Tiny, tiny, tiny.

You contrast that to, what is NVIDIA’s market cap right now? As we’re recording this, NVIDIA’s market cap is $4.56 trillion. The scale is entirely different, so, therefore, the damage will be a lot bigger.

I also think our economy is in worse shape. The AI bubble is hiding a plethora of sins, including a rising and unsustainable debt level. I think it could be quite ugly. How’s that for Monday-morning negativity?

Luigi: Yeah, I think it’s very negative, but as always, being negative sometimes means you might be right.

If you want to think about the biggest change in technology, it was the introduction of mass production that Ford brought about, which really diffused across America during the Great Depression. One of the things most people are not aware of is that there were massive increases in productivity during the Great Depression.

Bethany: Huh.

Luigi: They didn’t manifest in more people having more wealth. But from a technical point of view, it was an extraordinary period of growth.

It is possible, as you said, that after the crash following the AI bust, there will be a lot of restructuring, tremendously increasing efficiency, but with a lot of people unemployed. I think that those two things are possible.

To be fair, I think moments of crisis are the easiest ones in which to restructure. I always give my students an example involving technology. At the beginning of World War II, the French and the British were defeated by the Germans because the Germans knew how to use tanks in waging war much more efficiently than the British and the French did.

There is nothing that makes you learn as fast as a military defeat. The British and, later, the Americans learned how to use tanks pretty fast. But without a defeat, it is hard to change. Unfortunately, economic downturns have this positive effect, but they are very painful.