Tyler Cowen - Hayek, Keynes, & Smith on AI, Animal Spirits, Anarchy, & Growth

January 31, 2024 1h 42m

It was a great pleasure speaking with Tyler Cowen for the 3rd time.

We discussed GOAT: Who is the Greatest Economist of all Time and Why Does it Matter?, especially in the context of how the insights of Hayek, Keynes, Smith, and other great economists help us make sense of AI, growth, animal spirits, prediction markets, alignment, central planning, and much more.

The topics covered in this episode are too many to summarize. Hope you enjoy!

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(00:00:00) - John Maynard Keynes

(00:17:16) - Controversy

(00:25:02) - Friedrich von Hayek

(00:47:41) - John Stuart Mill

(00:52:41) - Adam Smith

(00:58:31) - Coase, Schelling, & George

(01:08:07) - Anarchy

(01:13:16) - Cheap WMDs

(01:23:18) - Technocracy & political philosophy

(01:34:16) - AI & Scaling





Full Transcript

This is a fun book to read because you just mention in there what the original sources are to read. You know, it's like the Harold Bloom of economics, right? It's a book written for smart people.
Okay, so let's just jump into it. The book we're talking about is GOAT: Who is the Greatest Economist of All Time and Why Does It Matter? All right, let's start with Keynes.
So in the section in Keynes, you quote him, I think talking about Robert Marshall. He says...
Alfred Marshall? Oh, sorry, Alfred Marshall. He says, the master economist must possess a rare combination of gifts.
He must be a mathematician, historian, statesman, philosopher. No part of man's nature or his institutions must lie entirely outside his regard.
And you say, well, Keynes is obviously talking about himself. Because he was all those things, and he was arguably the only person who was all those things at the time.
He must have known that. Okay, well, you know what I'm going to ask now.
So what should we make of Tyler Cowen citing Keynes using this quote, a quote that also applies to Tyler Cowen? I don't think it applies to me. What's the exact list again? Am I a statesman? Did I play a role at the Treaty of Versailles or something comparable? I don't know.
We're in Washington. I'm sure you talk to all the people who matter quite a bit.
Well, I guess I'm more of a statesman than most economists, but I don't come close to Keynes in the breadth of his high-level achievement in each of those areas.
Okay, let's talk about them, those achievements. So chapter 12 of The General Theory of Employment, Interest, and Money.
Here's a quote. It is probable that the actual average results of investments, even during periods of progress and prosperity, have disappointed the hopes which promoted them.
If human nature felt no temptation to take a chance, no satisfaction, profit apart in constructing a factory, a railway, a mine, or a farm, there might not be much investment merely as a result of cold calculation. Now, it's a fascinating idea that investment is irrational, or most investment throughout history has been irrational.
But when we think today about the fact that active investing exists, you know, for winner's curse-like reasons, VCs probably make, on average, less returns than the market. There's a whole bunch of different examples you can go through, right? M&A usually doesn't achieve the synergies it expects.
Throughout history, has most investment been selfishly irrational?

Well, Adam Smith was the first one I know to have made this point, that projectors, I think he called them, are overly optimistic. So people who do startups are overly optimistic. People who have well-entrenched VC franchises make a lot of money, and there's some kind of bifurcation in the distribution, right?

Then there's a lot of others who are just playing at it and maybe hoping to break even. So the rate of return on private investment, if you include small businesses, it's highly skewed, and just a few percent of the people doing this make anything at all.
So there's a lot to what Keynes said. I don't think he described it adequately in terms of a probability distribution.
But then again, he probably didn't have the data. But I wouldn't reject it out of hand.
Another example here is, this is something your colleague Alex Tabarrok talks about a lot, is that innovators don't internalize most of the gains they give to society. So here's another example where the entrepreneur, compared to one of his first employees, is he that much better off for taking the extra risk and working that much harder? What does this tell us? It's a marvelous insight that we're actually more risk-seeking than is selfishly good for us.
That was Reuven Brenner's claim in some of his books on risk. Again, I think you have to distinguish between different parts of the distribution.
So it seems there's a very large number of people who foolishly start small businesses. Maybe they overly value autonomy when they ought to just get a job with a relatively stable company.
So their part of the thesis is correct. And I doubt if there's really big social returns to whatever those people do, even if they could make a go of it.
But there's another part of the distribution, people who are actually innovating or have realistic prospects of doing so, where I do think those social returns are very high. Now that 2% figure, that's cited a lot.
I don't think it's really based on much that's real.

It's maybe not a crazy seat of the pants estimate, but people think like, oh, we know it's 2%, and we really don't. So look at Picasso, right? He helped generate cubism with Braque and some other artists.
How good is our estimate of Picasso's income compared to the spinoffs from Picasso? We just don't really know, right? We don't know it's 2%. It could be 1%.
It could be 6%. How different do you think it is in art versus, I don't know, entrepreneurship versus different kinds of entrepreneurship? There's different industries there as well, right? I'm not sure it's that different.
So say if some people start blogging, a lot of people copy them. Right.
Well, some people start painting in a particular style, a lot of people copy them. I'm not saying the numbers are the same, but they don't sound like issues that in principle are so different.
2% overestimate or underestimate? It might be wrong, but in which way is it wrong? My seat of the pants estimate would be 2% to 5%, so I think it's pretty close. But again, that's not based on anything firm.
Here's another quote from Keynes. Investment based on genuine long-term expectation is so difficult as to be scarcely practicable.
He who attempts it must surely lead much more laborious days and run greater risk than he who tries to guess better than the crowd how the crowd will behave. So one way to look at this is like, oh, he just doesn't understand efficient market hypothesis.
It's like before random walks or something. But there's things you can see in the market today where, you know, are the prospects for future dividends so much higher after COVID than they were immediately after the crash? How much of market behavior can be explained by these sorts of claims from Keynes? I think Keynes had the view that for his time, you could be a short-run speculator and in fact beat the markets.
And he believed that he did so. And at least he did for some periods of his life.
That may have been luck, or maybe he did have special insight. It probably wasn't true in general, but we don't really know, did efficient markets hold during Britain at that time? Maybe there just were profit opportunities for smarter than average people.
So that's a view I'm inclined not to believe. But again, I don't think it's absurd.
Keynes is saying, for people who want money, this is biased toward the short term. You can get your profits and get out.
And that's damaging long-term investment, which in fact, he wanted to socialize. So he's being led to a very bad place by the argument.
But again, we shouldn't dismiss it out of hand. Why is it not easy to retrospectively study how efficient markets were back then? In the same way we can study it now in terms of like, oh, you look at the price to earnings ratios and then what were the dividends afterwards over the coming decades for those companies based on their stock price or something? I don't know how many publicly traded firms there were in Britain at that time.
I don't know how good the data are. Things like bid-ask spread, at what price you actually executed trades, can really matter for testing efficient markets hypothesis.
So probably we can't tell, even though there must be share price data of some sort, at what frequency. Well, is it once a day? Is it once a week? We don't have the sort of data we have now where you can just test anything you want.
He also made an interesting point that not only is it not profitable, but even if you succeed, society will look at the contrarian in a very negative light. You will be doubly punished for being a contrarian.
But that doesn't seem to be the case, right? Like if somebody like Warren Buffett or Charlie Munger, people who do beat the market are actually pretty revered. They're not punished in public opinion.
And they pursued mostly long-term strategies. Right.
But again, trying to make sense of Keynes, if you think about long-term investing, and I don't think he meant Buffett-style investing. I think he meant building factories, trying to figure out what people would want to buy 25 years from that point in time.
That probably was much harder than today. So you had way less access to data.
Your ability to build an international supply chain was much weaker. Geopolitical turmoil at various points in time was much higher.
So again, it's not a crazy view. I think there's a lot in Keynes that's very much of his time that he presents out of a kind of overconfidence as being general.
And it's not general. It may not even be true, but there were some reasons why you could believe it.
Another quote from Keynes. I guess I won't read the whole quote in full, but basically it says, over time, as markets get more mature, more and more of equities are held basically by passive investors, people who don't have a direct hand in the involvement of the enterprise.
And the share of the market that's passive investment now is much bigger. Should we be worried about this? As long as at the margin people can do things, I'm not very worried about it.
So there's two different kinds of worries. One is that no one monitors the value of companies.
It seems to me those incentives aren't weaker. There's more research than ever before.
There's maybe a problem. Not enough companies are publicly held.
But you can always, if you know something the rest of the market doesn't, buy or sell short and do better. The other worry is those passive investors have economies of scale and they'll end up colluding with each other.
And you'll have, say, like three to five mutual funds, private equity firms, owning a big chunk of the market portfolio. And in essence, directly or indirectly, they'll tell those firms not to compete.
It's a weird form of collusion. They don't issue explicit instructions, like say the same few mutual funds own Coke and Pepsi.

Should Coke and Pepsi compete or should they collude? Well, they might just pick lazier managers who in some way give you implicit collusion.

Maybe this is another example of the innovators being unable to internalize their gains. The active investors who are providing this information to the market don't make out that much better than the passive investors, but they're actually providing a valuable service. The benefits are diffused throughout society.

I think overconfidence helps us on that front. So there's quote unquote, too much trading from a private point of view, but from a social point of view, maybe you can only have too much trading or too little trading, and you might rather have too much trading.
Explain that. Why can it only be too much or too little? Well, let's say the relevant choice variable is investor temperament.
So yes, you'd prefer it if everyone had the temperament just to do what was socially optimal. But if temperament is some inclination in you and you can just be overconfident or not confident enough and overconfidence gives you too much trading.
That might be the best we can do. Again, fine-tuning would be best of all.
But I've never seen humans where you could just fine-tune all their emotions to the point where they ought to be. Yeah.
Okay, so we can ask the question, how far above optimal are we, if we are above optimal? In the chapter, Keynes says that over time, as markets get more mature, they become more speculative. And the example he gives is that the New York market seems more speculative to him than the London market at the time.
But today, finance is 8% of GDP. Is that what we should expect it to be, to efficiently allocate capital? Is there some reason we can just look at that number and say that that's too big? I think the relevant number for the financial sector is what percentage it is of wealth, not GDP.
So you're managing wealth and the financial sector has been a pretty constant 2% of wealth for a few decades in the United States with bumps. Obviously, 2008 matters, but it's more or less 2%.
And that makes it sound a lot less sinister. It's not actually like growing at the expense of something and eating up the economy.
So you would prefer it's less than 2%, right? But 2% does not sound outrageously high to me. And if the ratio of wealth to GDP grows over time, which it tends to do when you have durable capital and no major wars, the financial sector will grow relative to GDP.
But again, that's not sinister. Think of it in terms of wealth.
I see. So one way to think about it is like the management cost as a fraction of the assets under management or something.
That's right. In that case, 2% is not that bad.
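The arithmetic behind this exchange can be sketched directly. A minimal illustration, where the wealth-to-GDP ratios (4x and 5x) are stylized assumptions chosen for the example, not figures from the conversation; only the 2%-of-wealth fee comes from the discussion:

```python
# Rough sketch: why "2% of wealth" and "8% of GDP" can describe the
# same financial sector. All ratios here are illustrative assumptions.
wealth_to_gdp = 4.0          # assumed wealth/GDP ratio, e.g. wealth = 4x annual GDP
fee_share_of_wealth = 0.02   # management cost as a fraction of assets under management

# The same fee stream expressed as a share of GDP:
fee_share_of_gdp = fee_share_of_wealth * wealth_to_gdp
print(round(fee_share_of_gdp, 4))   # 0.08 -> 8% of GDP

# If wealth grows to 5x GDP while fees stay at 2% of wealth, the sector
# "grows" relative to GDP without the fee itself rising at all:
print(round(fee_share_of_wealth * 5.0, 4))   # 0.1 -> 10% of GDP
```

On these stylized numbers, a constant management cost on wealth mechanically shows up as a growing share of GDP whenever durable capital pushes the wealth-to-GDP ratio up, which is Cowen's point about why the GDP share alone looks more sinister than it is.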
Yeah. Okay.
Interesting. I want to go back to the risk aversion thing again, because I don't know how to think about this.
So, you know, his whole thing is these animal spirits, they guide us to make all these bets and engage in all this activity. In some sense, he's saying like, not only are we not risk neutral, but we're more risk seeking than is rational.
Whereas the way you'd conventionally think about it is that humans are risk averse, right? They prefer to take less risk than is rational in some sense. How do we score this? Well, here Milton Friedman, another goat contender, comes into the picture.
So his famous piece with Savage makes the point that risk aversion is essentially context dependent. So he was a behavioral economist before we knew of such things.
So the same people typically will buy insurance and gamble. Gambling, you can interpret quite broadly.
And that's the right way to think about it. So just flat out risk aversion or risk loving behavior, it doesn't really exist.
Almost everyone is context dependent. Now, why you choose the context you do, maybe it's some kind of exercise in mood management.
So you insure your house so you can sleep well at night, your fire insurance, but then you get a little bored. And to stimulate yourself, you're betting on these NBA games.
And yes, that's foolish, but it keeps you busy and it helps you follow analytics and you read about the games online. And maybe that's efficient mood management.
And that's the way to think about risk behavior. I don't bet, by the way.
I mean, you could say I bet with my career, but I don't bet on things. Well, what is the way, and what's your version of the lottery ticket? What is the thing where you just, for the entertainment value or the distraction value, you take more risk than would seem rational? Well, like writing the book GOAT, which is not with any known publisher.
It's just online. It's free.
It's published within GPT-4. It took me quite a while to write the book.
I'm not sure there's a huge downside, but it's risky in the sense of it's not what anyone else was doing. So that was a kind of risk.
I invested a lot of my writing time in something weird. And I've done things like that pretty frequently.
So that keeps me, you could say, excited. Or like starting MRU, you know, the online economics education videos.
No pecuniary return to me at all. Indirectly, it costs me a lot of money.
That's a sort of risk. I feel it's paid off for me in a big way.

But on one hand, you can say, well, Tyler, what do you actually have from that? And the answer is nothing. Yeah.
Well, this actually raises the question I was going to ask about these GOAT contenders in general and how you're judging them, where you're looking at their work as a whole. Given that, I don't know, some of these risks these intellectuals take pay off and some of them don't, should we just be looking at their top contributions and just disregard everything else?

For Hayek, I think one of the points you have against him is that his top three articles are amazing, but after that, there's a drop off. The top risks you take are the only ones that matter? Why are we looking at the other stuff?
I don't think they're the only ones that matter, but I'll weight them pretty heavily.

But your failures do reflect, usually, in how you think or what you know about the world. So Hayek's failures, for instance, his inability to come up with the normative standard in Constitution of Liberty, it shows in some ways he just wasn't rigorous enough.
He was content with the kind of Germanic, put a lot of complex ideas out there and hope they're profound. And you see that even in his best work.
Now that is profound, but it's not as if the failures and the best work for all these people are unrelated. And same with Keynes.
Like Keynes more or less changed his mind every year. That's a strength, but it's also a weakness.
And by considering Keynes' really good works and bad works, like his defenses of tariffs, you see that. And the best work he also moved on from in some way.
If you read How to Pay for the War in 1940, if you didn't know better, you would think it's someone criticizing the general theory. How does quantity have a quality all of its own when you think of great intellectuals, where many of these people have like volumes and volumes of work? Was that necessary for them to get the greatest hits? Or is the rest of it just a distraction from the things that really stand the test of time? For the best people, it's necessary.
So John Stuart Mill wrote an enormous amount. Most of it's quite interesting.

But his ability to see things from multiple perspectives, I think, was in part stemming from the fact that he wrote a lot about many different topics, like French history, ancient Greece. He had real depth and breadth.
If Keynes were alive today, what are the odds that he's in a polycule in Berkeley, writing the best-written LessWrong post you've ever seen? I'm not sure what the counterfactual means. So Keynes is so British, maybe he's an effective altruist at Cambridge.
And given how he seems to have run his sex life, I don't think he needed a polycule. Like a polycule is almost a Williamsonian device to economize on transaction costs.
But Keynes, according to his own notes, seems to have done things on a very casual basis. He had a spreadsheet, right, of his sexual partners? He had a spreadsheet.
And from context, it appears he met these people very casually and didn't need to be embedded in, oh, we're the five people who get together regularly. So that's not a hypothetical.
We think we saw what he did. And I think he'd be at Cambridge, right? That's where he was.
Why should he not today be at Cambridge? How did a person, how did a gay intellectual get that amount of influence in the Britain of that time? When you think of somebody like Alan Turing, you know, helps Britain win World War II and is castrated because of, you know, one illicit encounter that is caught? Was it just not public? How did he get away with it, basically? I don't think it was a secret about Keynes. He had interacted with enough people that I think it was broadly known.
He was politically very powerful. He was astute.
As someone managing his career, he was one of the most effective people, you could say, of all time, not just amongst economists. And I've never seen evidence that Keynes was in any kind of danger.
Turing also may have intersected with national security concerns in a different way. I'm not sure we know the full Alan Turing story and why it went as badly as it did.
But there was in the past, very selectively, and I do mean very selectively, more tolerance of deviance than people today sometimes realize. Oh, interesting.
And Keynes benefited from that. But again, I would stress the word selectively.
Say more? What determines who is selected for this tolerance? I don't feel I understand that very well. But there's plenty, say, in Europe and Britain of the early 20th century where, quote unquote, outrageous things were done.
And it's hard to find evidence that people were punished for it. Now, what accounts for the difference between them and the people who were punished? I would like to see a very good book on that.
Yeah, I guess it's similar to our time, right? We have certain taboos and you can get away with them. Some people get away with them completely. They say whatever on Twitter and other people get canceled. Actually, how have you gotten away with it? I feel like, at least as far as I know, I haven't heard of you being part of any single controversy, but you have some opinions out there. I feel people have been very nice to me.
Yeah. What'd you do? How did you become the Keynes of our time? We're comparing- Well, maybe I'm a statesman after all, right?

I think just being good-natured helps and helping a lot of people helps. And Turing, I'm a huge fan of, wrote a paper on him with Michelle Dawson, but it's not obvious that he was a very good diplomat.
And it seems he very likely was a pretty terrible diplomat. And that might be feeding into this difference.
How do you think about the long-term value of and the long-term impact of intellectuals you disagree with? So do you think over the course of history, basically the improvements they make to the discourse and the additional things they give us a chance to think about, that washes out their object level, the things that were object level wrong about? Well, it's worked that way so far, right? So we've had economic growth, obviously with interruptions, but so much has fed into the stream and you have to be pretty happy with today's world compared with, say, 1880. The future may or may not bring us the same, but if the future brings us continuing economic growth, then I'm going to say exactly that.
Oh, be happy they fed into the stream. They may have been wrong, but things really worked out.
But if the future brings us a shrinking population asymptotically approaching a very low level and greater poverty and more war, then you've got to wonder, well, who is responsible for that, right? Who would be responsible for that? We don't know, but I think secular thinkers will fall in relative status if that's the outcome. And that's most prominent intellectuals today, myself included.
Yeah. Who would rise in status as a result? Well, there's a number of people complaining strenuously about fertility declines.
If there's more war, probably the hawks will rise in status, whether or not they should. An alternative scenario is that the pacifists rise in status.
But I basically never see the pacifists rising in status for any more than brief moments. Like after the Vietnam War, maybe they did.
After World War I? Yes. But again, that didn't last because World War II swept all that away.
Right. So the pacifists seem to lose long-term status no matter what.
And that means the hawks would gain in status and those worried about fertility. And whatever technology drives the new wars, if that is what happens, let's say it's drones.
It's possible, right? People who warned against drones, which is not currently that big a thing. There are quite a few such people.
But there's no one out there known for worrying about drones the way, say, Eliezer is known for worrying about AI. Now, drones, in a way, are AI, but it's different.
Yeah. Although Matt Friedman, Stuart Armstrong, other people have talked about we're not that far away from drones.
I guess you have also talked about this. They can assassinate AI, exactly.
And there's that famous YouTube video with millions of views. Whoever made that would rise to the stars.
I think Stuart Armstrong was... No, sorry.
Not Stuart Armstrong. Anyways, yeah.
But those people could end up as much more important than they are now. Yeah.
Okay. Let's talk about Hayek.
Sure. So before we get into his actual views, I think his career is a tremendous white pill in the sense that he writes The Road to Serfdom in 1944 when Nazi Germany and Soviet Union are both, you know, prominent players.
And honestly, the way things shook out, I mean, he would be pretty pleased that a lot of the biggest collectivisms of the day have been wiped out. So it is a tremendous white pill.
You can have a career like that. He was not as right as he thought at the time, but he ended up being too grumpy in his later years.
Oh, really? He thought, well, collectivism is still going to engulf the world. And I think he became a grumpy old man.
And maybe it's one thing to be a grumpy old man in 2024, but to be a grumpy old man in the eighties didn't seem justified. What was the cause? What specifically did he see that he...
He thought there were atavistic instincts in the human spirit, which were biologically built in, that led us to be collectivists and too envious and not appreciative of how impersonal orders worked and that this would cause the West to turn into something quite crummy. I wouldn't say he's been proven wrong, but a lot of the West has had a pretty good run since then.
And there's not major evidence that he's correct. The bad events we've seen, like some war coming back, something weird happening in our politics, I'm not sure how to describe it.
I'm not sure they fit the Hayek model, sort of simply the accretion of more socialism. But in terms of the basic psychological urges towards envy and resentment, doesn't the rise of wokeness provide evidence for his view? But now wokeness, I would say, has peaked and is falling.
That's a big debate. I don't see wokeness as our biggest problem.
I see excessive bureaucracy, sclerotic institutions, kludgeocracy as bigger problems. They're not unrelated to wokeness, to be clear, but I think they're more fundamental and harder to fix.
Let's talk about Hayek's arguments. So obviously, he has a famous argument about decentralization.
But when we look at companies like Amazon, Uber, these other big tech companies, they actually do a pretty good job of central planning, right? There's like a sea of logistics and drivers and trade-offs that they have to square. Do they provide evidence that central planning can work? Well, I'm not a Coasean.
So Coase, in his famous 1937 article, said the firm is planning, and he contrasted that to the market. I think the firm is the market.
The firm is always making contracts in the market, is subject to market checks and balances. To me, it's not an island of central planning in the broader froth of the market.
So I'm just not Coasean. For people who are Coasean, this is an embarrassing question. But I'll just say Amazon being great is the market working and they're not centrally planning.
Even the Soviet Union, it was very bad, but it didn't end up being central planning. It started off that way for a few years.
So I think people misinterpret large business firms in many ways, on both the left and the right. Wait, but under this argument, it still adds to the credence of the people who argue that basically we need the government to control.
Because if it is the case that Soviet Union is still not central planning, people would say, well, but yeah, but that's kind of what I want in terms of there's still kind of checks in terms of import exports of the market test is still applied to the government in that sense. What's wrong with that argument that basically you can treat the government as that kind of firm? I'm not sure I followed your question.
I would say this. I view the later Soviet Union as being highly decentralized, managers optimizing their own rents and setting prices too low to take in bribes, a la Paul Craig Roberts, what he wrote in the 80s.
And that's a very bad decentralized system. And it was sort of backed up by something highly central, the Communist Party in the USSR.
But it's not like the early attempts at true central planning in the Soviet Union in the 20s right after the revolution, which did totally fail and were abandoned pretty quickly, even by Lenin. Would you count the 50s period in the Soviet Union as more centrally planned or more decentralized by that point? Decentralized.
You have central plans for a number of things, obviously weaponry, steel production. You have targets.
But even that tends to collapse into decentralized action just with bad incentives.

So your explanation for why the Soviet Union had high growth in the 50s, is it more catch-up? Is it more that they weren't communists at the time? How would you explain that? A lot of the Soviet high growth was rebuilding after the war, which central planning can do relatively well, right?

You see government rebuilding cities, say in Germany, that works pretty well. But most of all, and this is even before World War II, just urbanization.
It shouldn't be underrated today, given we've observed China, but so much of Chinese growth was driven by urbanization, so much of Soviet growth. You take someone working on a farm, producing almost nothing, put them in a city, even under a bad system, they're going to be a lot more productive.
And that drove so much of Soviet growth before and after the war. But that at some point more or less ends, as it has. Well, it hasn't quite ended with China, but it's certainly slowed down.
And people don't pay enough attention to that. I don't know why.
It now seems pretty obvious. But going back to the point about firms.
So I guess the point I was trying to make is I don't understand why the argument you make that, well, these firms are still within the market in the sense that they have to pass these market tests. Why that couldn't also apply to government-directed production? Because then people argue that- Well, sometimes it does, right? Government runs a bunch of enterprises.
They may have monopoly positions, but many are open to the market. In Singapore, government hospitals compete with private hospitals. Government hospitals seem to be fine. I know they get some means of support, but they're not all terrible.
But I guess as a general principle, you'd be against more government-directed production, right? Well, it depends on the context.

So if it's, say, the military, probably we ought to be building a lot more of some particular things. And it will be done through Boeing, Lockheed, and so on.
But the government's directing it, paying for it, in some way, quote-unquote, planning it. And we need to do that.
We've, at times, done that well in the past. So people overrate the distinction between government and market, I think, especially libertarians.
But that said, there's an awful lot of government bureaucracy that's terrible, doesn't have a big market check. But very often, governments act through markets and have to contract or hire consultants or hire outside parties.
And it's more like a market than you think. I want to ask you about another part of Hayek.
So he has an argument about how it's really hard to aggregate information towards a central planner. But then more recently, there have been results in computer science showing that even finding the general equilibrium is computationally intractable.
Which raises the question: well, the market is somehow solving this problem, right? Separate from the problem of getting the information, there's making use of the information to allocate scarce resources. How is that computationally possible? I'm sure you're aware, like linear optimization, non-convex constraints.
How does the market solve this problem? Well, the market's not solving for a general equilibrium. It's just solving for something that gets us into the next day.
And that's a big part of the triumph. Just living to fight another day, wealth not going down, not everyone quitting.
And if you can do that, things will get better. And that's what we're pretty good at doing, is just building a sustainable structure.
And a lot of it isn't sustainable, like the fools who start these new small businesses. But they do pretty quickly disappear.
And that's part of the market as well. So if you view the whole thing in terms of computing a general equilibrium, I think one of Hayek's great insights is that's just the wrong way to think about the whole problem.
So lack of computational ability to do that doesn't worry me for either the market or planning. Because to the extent planning does work, it doesn't work by succeeding at that.

Like Singaporean public hospitals don't work because they solve some computational problem. They seem to work because the people running them care about doing a good job and enough of the workers go along with that.
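The contrast being drawn here, a market that iterates into the next day rather than computing an equilibrium, can be sketched with a toy tâtonnement loop (all numbers are invented for illustration): the price simply moves in the direction of excess demand, and no agent ever solves the fixed-point problem.

```python
# Toy tâtonnement in a two-good exchange economy with Cobb-Douglas
# consumers. Good 2 is the numeraire (its price is fixed at 1).
# The loop never "computes" equilibrium; it just nudges the price
# toward excess demand, living to fight another day.

alphas = [0.3, 0.7]                      # expenditure shares on good 1
endowments = [(1.0, 2.0), (2.0, 1.0)]    # (good 1, good 2) per agent

def excess_demand(p1):
    """Aggregate demand for good 1 minus its total supply at price p1."""
    demand = sum(a * (p1 * e1 + e2) / p1
                 for a, (e1, e2) in zip(alphas, endowments))
    supply = sum(e1 for e1, _ in endowments)
    return demand - supply

p1 = 0.5                                 # arbitrary starting price
for _ in range(1000):
    z = excess_demand(p1)
    p1 += 0.1 * z                        # a small local step, nothing global
    if abs(z) < 1e-9:
        break

print(round(p1, 4))                      # settles at the market-clearing price
```

Here the process happens to converge with these numbers, but the Hayekian point survives either way: the useful property is local adjustment, not any participant's ability to solve the whole system.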
Yeah. So related to that, I think in the meaning of competition, he makes the point that the most interesting part of markets is when they go from one equilibrium to another, because that's where they're trying to figure out what to produce and how to produce a better and so on, and not the equilibriums themselves.
And it seemed related to the Peter Thiel point in Zero to One that monopoly is when you have interesting things happen, because when there's this competitive equilibrium, there's no profits to invest in R&D or to do cool new things. Do those seem like related points? Am I reading that right? Absolutely.
And Hayek's essay "Competition as a Discovery Procedure" makes that point very explicitly. And that's one of his handful of greatest essays, one of the greatest essays in all of economics.
Is there a contradiction in Hayek in the sense that the decentralization he's calling for results in specialists having to use the very scientism and statistical aggregates he criticizes? Of course. Hayek underrates scientism.
Scientism's great. It can be abused, but we all rely on scientism.
If you have an mRNA vaccine in your arm, well, how do you feel about scientism and so on? How much should we worry about this opening up the whole system to fragilities if there's like no one mind that understands large parts of how everything fits together? People talk about this in the context of if there's a war in China and the producers didn't think about that possibility when they put valuable manufacturing in Taiwan and stuff like that. No one mind understanding things is inevitable under all systems.
This gets into some of the alignment debates. If you had one mind that understood everything or could control everything, you have to worry a great deal about the corruptibility of that mind.
So legibility, transparency are not per se good. You want enough of them in the right places, but you need some kind of balance.
So I think supply chains are no longer an underanalyzed problem.

But until COVID, they were.

And they're a big deal.

And the Hayekian argument doesn't always work because the signal you have is of the current price.

And that's not telling you how high are the inframarginal values if you get, say, cut

off from being able to buy vaccines from India because you're at the bottom of the queue. So that was a problem.
It was the market failing because the price doesn't tell you inframarginal values. And when you move from some ability to buy the output to zero, those inframarginal values really matter.
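The inframarginal point can be made concrete with a back-of-the-envelope sketch (the linear demand curve and all numbers are invented, not from the conversation): the market price tells you the value of the marginal unit, but losing access entirely destroys the whole area under the demand curve.

```python
# Price vs. inframarginal value on a toy linear demand curve.
# At the margin a unit is worth the price, but being cut off from
# all units (e.g. dropping to zero vaccine supply) costs the whole
# surplus, which no observed price ever reported.

def marginal_value(q):
    """Willingness to pay for the q-th unit (illustrative numbers)."""
    return 100 - 2 * q

price = 20.0
q_bought = (100 - price) / 2             # buy until marginal value = price

# Total value consumed = area under the demand curve (midpoint sum)
total_value = sum(marginal_value(q + 0.5) for q in range(int(q_bought)))
spent = price * q_bought
consumer_surplus = total_value - spent   # value the price never revealed

print(q_bought, total_value, consumer_surplus)
```

With these numbers the marginal unit is worth 20, but being cut off destroys 2,400 of value, 1,600 of it surplus. That gap is exactly the information the current price cannot carry.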
What would Hayek make of AI agents? As they get more powerful, you have some market between the AI agents. There's some sort of decentralized order as a result.
What insights would you have about that? Well, a lot of Hayekians wrote about these issues, including at George Mason, in the 1980s. And I think some of those people even talked to Hayek about this.
And my recollection, which is imperfect, is that he found all this very interesting and in the spirit of his work. And Don Lavoie was leading this research program.
He died prematurely of cancer. Bill Tulloh was also involved.
And some of this has been written up. And it is very Hayekian.
And George Mason actually was a pioneer in this area. Well, what do you make of the agents, the market between them and the sort of infrastructure and order that you need to facilitate that? They're going to replicate markets on their own, has been my prediction.
And I think they're going to evolve their own currencies. Maybe at first they'll use Bitcoin, but property rights will be based at least at first on what we now call NFTs.
I'm not sure that will end up being the right name for them. But if you want property rights in a so-called imaginary world, that's where you would start with Bitcoin and NFTs.
So I don't know what percent of GDP this will be. At first, it will be quite small, but it will grow over time.
And it's going to show Hayek to have been right about how these decentralized systems evolve. Do you anticipate that it'll be sort of a completely different sphere and that there's like the AI agents economy and there's a human economy and they obviously have links between them, but it's not intermixed, like they're not on the same social media or the same task rabbit or whatever.
It's a very separate infrastructure that's needed for the AI agents to talk to themselves versus talk to humans. I don't see why we would enforce segregation.
Now, you might have some segregated outlets, like maybe X, Twitter. Well, we'll keep off the bots.
Let's say it can even manage to do that. But if I want to hire a proofreader, I'm going to deal with the AI sector and pay them in Bitcoin.
And I'll just say to my personal AI assistant, hey, go out and hire an AI and pay them with whatever, and then just not think about it anymore. And it will happen.
Maybe because there's much higher transaction costs with dealing with humans and interacting with the human world, whereas they can just send a bunch of vectors to each other. It's much faster for them to just have a separate dedicated infrastructure for that.
But transaction costs for dealing with humans will fall because you'll deal with their quote-unquote assistants, right? So you'll only deal with the difficult human when you need to. And people who are very effective will segregate their tasks in a way that reflects their comparative advantage.
And people who are not effective will be very poor at that. And that will lead to some kind of bifurcation of personal productivity.
Like how well will you know what to delegate to your AI? I'll predict you'll be very good at it. You may not have figured it out yet, but say you're like AA plus on it and other people are D, that's a big comparative advantage for you.
So we're talking, I guess, about like GPT-5 level models.

What do you think in your mind about like, okay, this is GPT-5, what happens with GPT-6,

GPT-7?

Do you see it?

Do you still think in the frame of having a bunch of RAs or does it seem like a different

sort of thing at some point?

I'm not sure what those numbers going up mean, or what a GPT-7 would look like, or how much smarter it could get. I think people make too many assumptions there.
It could be the real advantages are integrating it into workflows by things that are not better GPTs at all. And once you get to GPT, say 5.5, I'm not sure you can just turn up the dial on smarts and have it like integrate general relativity and quantum mechanics.
Why not? I don't think that's how intelligence works. And this is a Hayekian point.
And some of these problems, there just may be no answer. Like maybe the universe isn't that legible.
And if it's not that legible, the GPT-11 doesn't really make sense as a creature or whatever. Isn't there a Hayekian argument to be made that, listen, you can have billions of copies of these things? Imagine the sort of decentralized order that could result, the amount of decentralized tacit knowledge that billions of copies talking to each other could have.
That in and of itself is an argument to be made about the whole thing as an emergent order will be much more powerful than we were anticipating. Well, I think it will be highly productive.
What tacit knowledge means with AIs, I don't think we understand yet. Is it by definition all non-tacit? Or does the fact that how GPT-4 works is not legible to us or even its creators so much, does that mean it's possessing of tacit knowledge? Or is it not knowledge? None of those categories are well thought out, in my opinion.
So we need to restructure our whole discourse about tacit knowledge in some new, different way. But I agree, these networks of AIs, even before, like GPT-11, they're going to be super productive.
But they're still going to face bottlenecks, right? And I don't know how good they'll be at, say, overcoming the behavioral bottlenecks of actual human beings, the bottlenecks of the law and regulation. And we're going to have more regulation as we have more AIs, right? Yeah.
When you say there'll be uncertainties, I think you made this argument when you were responding to Alex Epstein on Fossil Future, where you said uncertainties also extend out into the domain where there's a bad outcome or a much bigger outcome than you're anticipating. That's right.
So can we apply the same argument to AI? Like the fact that there is uncertainty is also a reason for worry. Well, it's always reason for worry, but there's uncertainty about a lot of things, and AI will help us with those other uncertainties.
So on net, do you think more intelligence is likely to be good or bad, including against X-risk? And I think it's more likely to be good. So if it were the only risk, I'd be more worried about it than if there's a whole multitude of risks.
But clearly, there's a whole multitude of risks. But since people grew up in pretty stable times, they tend not to see that in emotionally vivid terms.
And then this one monster comes along, and they're all terrified. What would Hayek think of prediction markets? Well, there were prediction markets in Hayek's time.
I don't know that he wrote about them, but I strongly suspect he would see them as markets that, through prices, communicate information. But even around the time of the Civil War, there were so-called bucket shops in the US and New York where you would bet on things.
They were betting markets with cash settlement, probably never called prediction markets, but they were exactly that. Later on, they were banned, but it's an outstanding thing.
There were betting markets on lives in 17th century Britain, different attempts to outlaw them, which I think basically ended up succeeding. But under the table, I'm sure it still went on to some extent.
Yeah. The reason it's interesting to think about this is because his whole argument about the price system is that you can have a single dial that aggregates so much information.
And for that reason, it's so useful to somebody who's trying to decide based on that information. But it's precisely because it's so aggregated that it's hard to learn about any one particular input to that dial.
But I would stress it's not a single dial. And whether Hayek thought it was a single dial, I think you can argue that either way.
So people in markets, they also observe quantities. They observe reaction speeds.
There's a lot of dimensions to prices other than just, oh, this newspaper cost $4. The terms on which it's advertised.
So markets work so well because people are solving this complex multidimensional problem. And the price really is not a sufficient statistic the way it is in Arrow-Debreu.
And I think at times Hayek understood that. And at other times he writes as if he doesn't understand it, but it's an important point.
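One way to see the dial metaphor, and its limits, is a minimal simulation of dispersed information being folded into a price (entirely illustrative: the signal noise, step size, and trader model are made up): the price ends up near a truth no single trader knows, yet none of the individual inputs can be read back out of it.

```python
import random

# Traders each observe one noisy private signal of an unknown value
# and nudge the market price toward their own estimate. The price
# comes to aggregate what no single trader knows, but the individual
# signals are unrecoverable from it afterward.

random.seed(0)
true_value = 0.62        # e.g. the probability of some event
price = 0.50             # opening quote

for _ in range(5000):
    signal = true_value + random.gauss(0, 0.2)   # one trader's noisy view
    price += 0.01 * (signal - price)             # a small trade moves the price

print(round(price, 2))   # ends near true_value, which no trader ever saw
```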
Somewhat related question. What does it tell us about the difficulty of preserving good institutions, good people, that the median age of a corporation is 18 years? And they don't get better over time, right? Decade after decade, there's very few corporations that continue improving in that way.
Well, I think some firms keep improving for a long time. So there are Japanese firms that date back to the 17th century, right? They must be better today or even in 1970 than they were way back when.
Like the leading four or five Danish firms, none of them are younger than the 1920s. So Maersk, you know, the firm that came up with Ozempic, the pharmaceutical firm, they must be much better than they were back then, right? They have to be.
So how that is possible to me is a puzzle, but I think in plenty of cases, it's true. Can we really say that the best firms in the world are ones that have been improving over time?
If you look at the biggest companies by market cap, it's not like what it takes to get there is hundreds of years of continual refinement. What does that tell us about the world? Well, not hundreds of years, but again, don't be overly biased by the US experience and the tech sector.
There's around the world plenty of firms that at least seem to get better as they get older. Certainly, their market cap goes up.
Some of that might just be a population effect. Maybe their productivity per some unit is in some ways going down.
But that's a very common case. And why the U.S.
is such an outlier is an interesting question, right? Israel clearly is an outlier. In a sense, they only have pretty young firms, right? And they've done very well in terms of growth.
Can it be explained by the fact that in these other countries, it's actually just harder to start a new company? Not necessarily the older companies are actually getting better? Possibly, but it does seem the older companies are often getting better, like in Denmark. China is pretty much entirely new firms because of communism.
Japan in particular seems to have a lot of very old firms. I don't know if they're getting better, but I don't think you can write that off as a possibility.
This is Hayek in "Competition as a Discovery Procedure." And it seems like he predicted nimbyism.
So he says, in a democratic society, it would be completely impossible using commands that could not be regarded as just to bring about those changes that are undoubtedly necessary, but the necessity of which could not be strictly demonstrated in a particular case. So it seems like he's kind of talking about what we today call nimbyism.
Oh, sure. And there was plenty of nimbyism in earlier times.
I mean, you look at the 19th century debates over restructuring Paris, Haussmann and putting in the broader boulevards and the like, that met with very strong opposition. It's a kind of miracle that it happened.
Yeah. Is this a thing that's inherent to the democratic system? Recently, I interviewed Dominic Cummings and obviously planning is a big issue in the UK.
It seems like every democratic country has this kind of problem. And most autocratic countries have it too.
Now, China is an exception. They will probably slide into some kind of nimbyism, even if they stay autocratic.
Just people resist change. Interest groups always matter.
Public opinion a la David Hume always matters. And it's easy to not do anything on a given day, right? And that just keeps on sliding into the future.
So, but I guess... India has had a lot of nimbyism.
It's fallen away greatly under Modi and especially what the state governments have done. But it can be very hard to build things in India still.
Although it is a democracy. I guess the China example, we'll see what happens there.
That's right. But it would be very surprising, because the Chinese government is highly responsive to public opinion on most, but not all, issues.
So why wouldn't they become more nimby, especially with the shrinking population? They're way overbuilt, right? So the pressure to build will be weak. And in cases where they ought to build, I would think quite soon they won't.
How much of economics is a study of the systems that human beings use to allocate scarce resources, and how much is just something you'd expect to be true of aliens, AIs? It's interesting when you read the history of economic thought, how often they make mention of human nature specifically, like Keynes is talking about. People have high discount rates, right? Yeah, what are your thoughts here? My former colleague, Gordon Tullock, wrote a very interesting book on the economics of ant societies and animal societies.
And very often they obey human-like principles, or more accurately, humans obey non-human animal-like principles. So I suspect it's fairly universal and depends less on, quote unquote, human nature than we sometimes like to

suggest. Maybe that is a bit of a knock on some behavioral economics.
The logic of the system,

Armen Alchian wrote on this, Gary Becker wrote on this. There were some debates on this in the early

1960s, and that the automatic principles of profit and loss and selection at a firm-wide level

really matters and is responsible for a lot of economics being true. I think that's correct.
Actually, that raises an interesting question. Within firms, the sort of input they're getting from the outside world, the ground truth data is profit, loss, bankruptcy.
It's like very condensed information. And from this, they had to make the determination of who to fire, who to hire, who to promote, what projects to pursue.
How do you make sense of how firms disaggregate this very condensed information? I would like to see a very good estimate of how much your productivity gains is just from selection and how much is from, well, smart humans figure out better ways of doing things. And there's some related pieces on this in the international trade literature.
So when you have freer trade, a shockingly high percentage of the productivity gains come from your worst firms being bankrupted by the free trade. And Alex Tabarrok has some MR posts on this.
I don't recall the exact numbers, but it was higher than almost anyone thought. And that to me suggests the Alchian and Becker mechanisms of evolution at the level of the firm, enterprise, or even sector.
They're just a lot more important than human ingenuity. And that's a pretty Hayekian point.
Hayek presumably read those pieces in the 60s. I don't think he ever commented on them.
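The selection mechanism described here, aggregate productivity rising because bad firms exit rather than because anyone gets smarter, can be sketched with invented numbers:

```python
# Selection vs. improvement: aggregate productivity rises purely
# because the least productive firms exit and their workers are
# reabsorbed by the survivors. No individual firm improves at all.
# All numbers are invented for illustration.

productivity = [2.0, 4.0, 6.0, 8.0, 10.0]   # output per worker, by firm
shares = [0.2] * 5                           # equal employment shares

agg_before = sum(p * s for p, s in zip(productivity, shares))

# Trade opens; the two worst firms go bankrupt, workers reallocate evenly.
survivors = productivity[2:]
new_shares = [1 / len(survivors)] * len(survivors)
agg_after = sum(p * s for p, s in zip(survivors, new_shares))

print(agg_before, agg_after)   # the entire gain is a selection effect
```

In this sketch aggregate productivity rises by a third even though every surviving firm is exactly as good as before, which is the flavor of result the trade literature attributes to exit.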
Interesting. Let's talk about Mill.
John Stuart Mill, not James Mill, but he was interesting too. So his arguments about the law force against women and how basically throughout history, the state of women in his society is not natural or the wisdom of the ages, but just the result of the fact that men are stronger and have codified that.
Can we apply that argument in today's society against children and the way we treat them? Yes, I think we should treat children much better. We've made quite a few steps in that direction.
It's interesting to think of Mill's argument as it relates to Hayek. So Mill is arguing you can see more than just the local information.
So keep in mind when Mill wrote, every society that he knew of, at least, treated women very poorly, oppressed women, because women were physically weaker or at a big disadvantage. If you think there's some matrilineal exceptions, Mill didn't know about them.
So it appeared universal. And Mill's chief argument is to say, you're making a big mistake if you overly aggregate information from this one observation, that behind it is a lot of structure, and a lot of the structure is contingent, and that if I, Mill, unpack the contingency for you, you will see behind the signals.
So Mill is much more rationalist than Hayek. It's one reason why Hayek hated Mill, but clearly on the issue of women, Mill was completely correct, that women can do much better, will do much better.
It's not clear what the end of this process will be. It will just continue for a long time.
Women achieving in excellent ways. And it's Mill's greatest work, I think.
It's one of the greatest pieces of social science. And it is anti-Hayekian.
It's anti-small-c conservatism.

His other book, On Liberty, is very Hayekian though, right? In the sense that free speech is needed because information is contained in many different people's minds. That's right.
And I think Mill integrated, sort of you could call it Hayek and anti-Hayek, better than Hayek ever did. That's why I think Mill is the greater thinker of the two.
But on the topic of children, what would Mill say specifically?

I guess he could have talked about it if he wanted to, but I don't know if he did. In today's world, we send them to school, they're there for eight hours a day, most of the time it's probably wasted. And we just use a lot of coercion on them that we don't need to.
How would he think about this issue? There's Mill's own upbringing, which was quite strict and by current standards oppressive, but apparently extremely effective

in making Mill smart. So I think Mill very much thought that kids should be induced to learn the

classics, but he also stressed they needed free play of the imagination in a way that he drew

from German and also British romanticism. And he wanted some kind of synthesis of the two.
But by current standards, Mill, I think, still would be seen as a meanie toward kids, but he was progressive by the standards of his own day. Do you buy the arguments about aristocratic tutoring for people like Mill? And there's many other cases like this, but since they were kids, they were taught by one-on-one tutors and that explains part of their greatness.
I believe in one-on-one tutors, but I don't know how much of those examples is selection, right? So I'm not sure how important it is. But just as a matter of fact, if I were a wealthy person and just had a new kid, I would absolutely invest in one-on-one tutors.
You talk in the book about how Mill is very concerned about the quality and the character development of the population. But when we think about the fact that somebody like him was elected to the parliament at the time, the greatest thinker who's alive is elected to government.
And it's hard to imagine that could be true with today's world. Does he have a point with regards to the quality of the population? Well, Mill, as with women, he thought a lot of improvement was possible.
And we shouldn't overly generalize from seeing all the dunces around us, so to speak. Maybe the jury is still out on that one.
But it's an encouraging belief, and I think it's more right than wrong. There's been a lot of moral progress since Mill's time, not in everything, but certainly how people treat children or how men treat their wives.
And even when you see negative reversals, Steven Pinker so far seems to be right on that one. But you do see places like

Iran, how women were treated seems to have been much better in the 1970s than it is today. So

there are definitely reversals. But on the specific reversal that somebody of Mill's quality probably wouldn't get elected to Congress in the US or the Parliament of the UK, how big a deal is that? But Mill's advice may get through the cracks due to all the local statesmen who wisely advise their representatives in the House, right? So I don't know how much that process is better or worse compared to, say, the 1960s.
I know plenty of smart people who think it's worse. I'm not convinced that's true.
Let's talk about Smith, Adam Smith. Yeah, okay.

One of the things I find really remarkable about him is he publishes in 1776, The Wealth of Nations. And basically around that time, Gibbon publishes The Decline and Fall of the Roman Empire.
Yep. So he publishes The Decline and Fall of the Roman Empire.
And one of his lines in there is, if you were asked to state a period of time when man's condition was at its best, it was from the death of Domitian to the accession of Commodus. And that's like 2,000 years before that, right? So there's basically been, at least it's plausible to somebody really smart that there's basically been no growth for 2,000 years.
And in that context, to be making the case for markets and mechanization and division of labor. I think it's even more impressive when you put it in the context that he has basically been seeing 0.5% or less growth.
I strongly agree. And this is, in a way, Smith being like Mill.
Smith is seeing the local information, a very small growth, and the world barely being better than the Roman Empire, and inferring from that with increasing returns, division of labor, how much is possible. So Smith is a bit more of a rationalist than Hayek makes him out to be.
Right. Now, I wonder if we use the same sort of extrapolative thinking that Smith uses.
We haven't seen that much growth yet, but if you apply these sorts of principles, this is what you would expect to see. What would he make of the potential AI economy where we see 2% growth a year now, but you have billions of potential more agents or something? Would he say, well, actually, you might have 10% growth because of this? You would need more economic principles to explain this? Or just adding that to our list of existing principles would imply big gains? It's hard to say what Smith would predict for AI.
My suspicion is that the notion of 10% growth

was simply not conceivable to him. So he wouldn't have predicted it because he never saw anything like it.
That to him, 3% growth would be a bit like 10% growth. It would just shock him and bowl him over.
But Smith does also emphasize different human bottlenecks and constraints of the law. So it's quite possible Smith would see those bottlenecks as mattering and checking AI growth and its speed.
But as a principle, given the change we saw pre-Industrial Revolution and after 1870, does it seem plausible to you that you could go from the current regime to a regime where you have 10% growth for decades on end? That does not seem plausible to me. But I would stress the point that with high rates of growth decades on end, the numbers cease to have meaning, because the numbers make the most sense when the economy is broadly similar.
Like, oh, everyone eats apples and each year there's 10% more apples at a roughly constant price.

As the basket changes, the numbers become meaningless.

It's not to deny there's a lot of growth, but you can think about it better by discarding the number.

And presumably AI will change the composition of various bundles quite a bit over time.
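The index-number problem behind this can be shown in a few lines (the prices and quantities are invented): measured "growth" between two years depends heavily on which year's prices you weight by once the composition of the basket changes.

```python
# Laspeyres vs. Paasche: when the consumption basket changes a lot,
# "how much growth happened" depends on which year's prices you use
# as weights. Illustrative numbers only.

p0 = {"apples": 1.0, "widgets": 10.0}    # year-0 prices
q0 = {"apples": 100, "widgets": 1}       # year-0 quantities
p1 = {"apples": 1.0, "widgets": 1.0}     # widgets got 10x cheaper...
q1 = {"apples": 100, "widgets": 500}     # ...so everyone buys them

def value(prices, quantities):
    return sum(prices[g] * quantities[g] for g in prices)

growth_old_weights = value(p0, q1) / value(p0, q0)   # Laspeyres-style
growth_new_weights = value(p1, q1) / value(p1, q0)   # Paasche-style

print(round(growth_old_weights, 1), round(growth_new_weights, 1))
```

With these numbers, the same change registers as roughly a 46-fold expansion at old prices but under 6-fold at new prices. As the basket diverges, the single growth number stops meaning much, which is the point about discarding it.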

So when you hear these estimates about what the GDP per capita was in the Roman Empire, do you just disregard that and think in terms of qualitative changes from that time? Depends what they're being compared to. So there's pieces in economic history that are looking at, say, 17th, 18th century Europe, comparing it to the Roman Empire.
Most of GDP is agriculture, which is pretty comparable, right? Especially in Europe. It's not wheat versus corn.
It's wheat and wheat. And I've seen estimates that, oh, say by 1730, some parts of Western Europe are clearly better off than the Roman Empire at its peak, but like within range.
Those are the best estimates I know. And I trust those.
They're not perfect, but I don't think there's an index number problem so much. And so when people say we're 50% richer than an average Roman at the peak of the empire, this kind of thinking doesn't make sense to you.
It doesn't make sense to me. And a simple way to show that: let's say you could buy from a Sears Roebuck catalog of today or from 1905, and you have $50,000 to spend.
Which catalog would you rather buy from? You have to think about it, right? Now, if you just look at changes in the CPI, it should be obvious you would prefer the catalog from 1905. Everything's so much cheaper.
That white shirt costs almost nothing. Right.
At the same time, you don't want that stuff. It's not mostly part of the modern bundle.
So even if you ended up preferring the earlier catalog, the fact that you have to think about it reflects the ambiguities. When you read the contemporaries of Smith, other economists who were writing at the time, were his arguments just clearly, given the evidence of the time, much better than everybody around? Or was it just that ex-post he was clearly right, but given the arguments at the time, it could have gone any

one of different ways? Well, there aren't that many economists at the time of Smith,

so it depends what you're counting. I mean, the two fellow Scots you could compare Smith to are

Sir James Steuart, who published a major work, I think, in 1767. On some matters, Steuart was ahead

of Smith, not most. Clearly, Smith was far greater, but Steuart was no slouch.
And the other point of comparison is David Hume, Smith's best friend, of course. Per page, you could argue Hume was better than Smith.
Certainly on monetary theory, Hume was better than Smith. Now, he's not a goat contender.
He just didn't do enough. But I wouldn't say Smith was ahead of Hume.

He had more, and more important, insights.

But Hume was pretty impressive.

Now, if you're talking about, oh, like the 18th century German cameralists, well, they were bad mercantilists.

But there's people, say, writing in Sweden in the 1760s, analyzing exchange rates, who

had better understandings of exchange

rates than Smith ever did. So it's not that he just dominated everyone.

Let me offer some other potential nominees that were not in the book for Goat, and I want your

opinions of them. Henry George, in terms of explaining how land is fundamentally different

from labor and capital when we're thinking about the economy.

Well, first, I'm not sure land is that fundamentally different from labor and capital. A lot of the value of land comes from improvements.
And what's an improvement can be quite subtle. It doesn't just have to be, you know, putting a plow to the land.
So I would put George in the top 25, very important thinker, but he's a bit of a one-note Johnny. His book on protectionism is still one of the best books on free trade, but he's circumscribed in a way, say, Smith and Mill were not.
Today, does it satisfy us? We see rents in big cities. His status is way up for this reason, because of the YIMBY-NIMBY debates, and I think he was undervalued. He's worth reading very carefully. A few years ago, we're recording here at Mercatus, we had like a 12-person, two-day session with Peter Thiel, just on reading Henry George. It's all we did. And people came away very impressed, I think. And if people are interested, they might enjoy the episode I did with Lars Doucet.

Oh, I don't know about this. He's a Georgist?

Oh, yeah. He's a really smart guy.
He wrote a book review of Henry George that won Scott Alexander's book review contest.

Oh, I know what this is now.

And then he just turned it into a whole book of his own, which is actually really good.

And I think there's something truly humane in George when you read him that can be a bit infectious, that's positive. And there was some insane turnout for his funeral, right? He was very popular at the time.
That's right. Yeah.
And that was deserved. Yeah.
I guess you already answered this question, but Ronald Coase, in terms of helping us think about firms and property rights and transaction costs. Well, even though I think the 1937 piece is wrong, it did create one of the most important genres.
He gets a lot of credit for that. He gets a lot of credit for the Coase theorem.
The FCC property rights piece is superb. The Lighthouse piece is very good.
Again, he's in the top 25. But in terms of his quantity having a quality of its own, it's just not quite enough.
There's no macro. But of course, you rate him very, very highly.
How about your former advisor, Thomas Schelling? He is a top-tier Nobel laureate, but I don't think he's a serious contender for greatest economist of all time. He gets the most credit for making game theory intuitive, empirical, and workable, and that's worth a lot.
Economics of self-command, he was a pioneer, but in a way, that's just going back to the Greeks and Smith. He's not a serious contender for GOAT, but a top-tier Nobel laureate for sure.
You have a fun quote in the book on Arrow where you say his work was Nobel Prize-winning important, but not important important. Well, some parts of it were important important, like how to price securities.
So I think I underrated Arrow a bit in the book. If you ask, like, what regrets do I have about the book? I say very, very nice things about Arrow, but I think I should have pushed him even more.
What would Arrow say about prediction markets? Well, he was really the pioneer of theoretically understanding how they work. Yeah.
So he was around until quite recently. I'm sure he had things to say about prediction markets, probably positive.
So one of the points you make in the book is economics at the time was really a way of carrying forward big ideas about the world. What discipline today is where that happens?

Well, internet writing, it's not a discipline, but it's a sphere, and plenty of it happens more than ever before. But it's segregated from what counts as original theorizing in the academic sense of that word.
Is that a good or bad segregation? I'm not sure. But it's really a very sharp, radical break from how things had been.
And it's why I don't think there'll be a new GOAT contender, probably not ever. Or if there is, it will be something AI-related.
Yeah, that sounds about right to me. But within the context of internet writing, obviously, there's many disciplines there, economics being a prominent one.
When you split it up, is there a discipline in terms of, I don't know, people writing in terms of computer science concepts or people writing in terms of economic concepts? Who today, do you think, is having ideas?

Somehow I feel the disciplines cease to matter, that really good internet writing is multidisciplinary. When I meet someone like a Scott Aaronson, who's doing computer science, AI-type internet writing on his blog, I have way more in common with him than with a typical research economist, say at Boston University. And it's not because I know enough about computer science, like I may or may not know a certain amount, but it's because our two enterprises are so similar.
Or Scott Alexander, he writes about mental illness also. That just feels so similar.
And we really have to rethink what the disciplines are. It may be that method of writing is the key differentiator for this particular sphere.
Not for everything. Scott Aaronson was my professor in college for a couple of semesters.
Oh, that's great. Yeah.
Yeah. That's where I decided I'm not going to go to grad school.
That's also great.

Because you just see people, like, two standard deviations above you, easily. You might as well choose a different game.

But his method of thinking and writing is infectious, like that of Scott Alexander and many of the rest of us.

Yeah.

So I think in the book you say you were raised as much by economic thought or the history of economic thought as you are by your graduate training.

More, much more.

It's not even close.

Today, people would say... I was talking to Basil Halperin, who's a young economist, and he said he was raised on Marginal Revolution in the same way that you were raised on the history of economic thought. Does this seem like a good trade? Are you happy that people today are raised on Scott Alexander and Marginal Revolution?

At the margin, I would like to see more people raised on Marginal Revolution. I don't just mean that in a selfish way.
But the internet writing mode of thinking, I would like to see more economists and research scientists raised on it. But the number may be higher than we think.
Like if I hadn't run Emergent Ventures, I wouldn't know about Basil per se. Maybe would not have met him.
And it's infectious. So it might always be a minority, but it will be the people most likely to have new ideas.
And it's a very powerful new mode of thought, which I'll call internet way of writing and thinking. And it's not sufficiently recognized as something like a new field or discipline, but that's what it is.
I wonder if you're doing enough of that when it comes to AI, where I think you have really interesting thoughts about GPT-5 level stuff, but somebody with your sort of polymathic understanding of different fields, if you just extrapolate out these trends, I feel like you might have a lot of interesting thoughts about when you think just in terms of what might be possible with something much further down the line. Well, I have a whole book with AI predictions, Average is Over, and I have about 30 Bloomberg columns and probably 30 or 40 marginal revolution posts.
I can just say I'll do more. But the rate at which ideas arrive at me is the binding constraint.
I'm not holding them back. Speaking of Basil, he had an interesting question.
Should society or government subsidize savings so that we're in effect having – it leads to basically a zero social discount rate? So people on average probably have – they're prioritizing their own lives. They have discount rates based on their own lives.
If we're long-term, should the government be subsidizing savings? I'll come close to saying yes. First, we tax savings right now, so we should stop taxing savings.
Absolutely. I think it's hard to come up with workable ways of subsidizing savings that don't give rich people a lot of free stuff in a way that's politically unacceptable and also unfair.
So I'm not sure we have a good way of subsidizing savings. But in principle, I would be for it if we could do it in the proper targeted manner.
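The "zero social discount rate" in the question is easy to make concrete. As a quick illustrative sketch (the numbers here are mine, not from the conversation), the standard present-value arithmetic shows how strongly the discount rate controls the weight placed on far-future benefits:

```python
# Present value of a benefit of 1 received t years from now,
# discounted at annual rate r: PV = 1 / (1 + r)**t.
def present_value(r: float, t: int) -> float:
    return 1.0 / (1.0 + r) ** t

# A benefit a century out counts one-for-one at a zero social discount
# rate, but shrinks drastically at private-market-like rates.
for r in (0.0, 0.02, 0.05):
    print(f"r = {r:.0%}: PV of 1 in 100 years = {present_value(r, 100):.4f}")
```

At r = 0 the future counts one-for-one; at 2% a benefit 100 years out is worth about 0.14 today; at 5%, under a penny on the dollar, which is why the choice of social discount rate dominates this kind of long-run policy arithmetic.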
Although you had a good argument against this in Stubborn Attachments, right? That over the long term, if economic growth is high enough, then the savings of the rich will just be dissipated to everybody below. Well, I'm not sure to whom it's dissipated.
It does get dissipated. The great fortunes of the past are mostly gone, but they may not go to people below.
And the idea of writing into a tax system subsidies on that scale, in essence, subsidies to wealth, not GDP, but wealth is, say, six to eight times GDP. I just think the practical problems are quite significant.
It's not an idea I'm pushing, but there may at margins be ways you can do it that, say, only benefit people who are poor, ways you can improve, like the workings of local credit unions through better either regulation, deregulation, that are a kind of de facto subsidy without having to subsidize all of the saved wealth. There's got to be a lot of ways you can do that, and we should look for that more.
Relatedly, I think a couple years ago, Paul Schmelzing had an interesting paper showing that if you look from 1311 to now, interest rates have been declining. There's been hundreds of years of interest rate declines.
What is the big picture explanation of this trend? I'm not sure we have one. You may know Cowen's third law.
All propositions about real interest rates are wrong. But simply lower risk, better information, higher buffers of wealth would be what you'd call the intuitive economistic explanations.
There's probably something to them. But how much of that trend do they actually explain as like a percent of the variance? I don't know.
Let's talk about anarchy. Anarchy, yes.
You've written papers about this. I hadn't read them the last time we talked, and they're really interesting.
So maybe you can restate your arguments as you answer this question. But how much of your arguments about how network industries lead to these cartel-like dynamics, how much of that can help explain what happened to social media, Web 2.0? I don't view that as such a cartel.
I think there's a cartel at one level, which is small but significant. This is maybe more true three, four years ago than today with Elon owning Twitter and other changes.
But if someone got kicked off social media platforms three, four years ago, they would tend to get kicked off all or most of them. It wasn't like a consciously collusive decision, but it's a bit like, oh, I know the guy who runs that platform and he's pretty smart.
And if he's worried, I should be worried. And that was very bad.
I don't think it was otherwise such a collusive equilibrium. Maybe on some dimensions of hiring, say software engineers, there was some collusion, not enough bidding, but it was mostly competing for attention.
So I think the real risk, protection agencies aside, of network-based collusion is through banking systems, where you have clearinghouses and payments networks. And to be part of it, the clearinghouse in the absence of legal constraint can indeed help everyone collude.
And if you don't go along with the collusion, you're kicked out of the payment system. That strikes me as a real issue.
Do your arguments against anarchy, do they apply at all to Web 3.0 crypto like stuff? Do I think it will evolve into collusion? I don't see why it would. I'm open to hearing the argument that it could, though.
What would that argument go like? Well, I guess we did see with crypto that you have in order to just have workable settlement, you need these centralized institutions. And from there, you can get kicked off those and the government is involved with those.
And you can maybe abstract the government away and say that they will need to collude in some sense in order to facilitate transactions. And the exchanges have ended up quite centralized, right? Yeah.
And that's an example of clearing houses and exchanges being the vulnerable node. But I don't know how much Web 3.0 is ever going to rely on that.
It seems you can create new crypto assets, more or less at will. There's the focality of getting them started.
But if there's a real problem with the pre-existing crypto assets, I would think you could overcome that. So I would expect something more like a to and fro, waves of centralization, decentralization, and natural checks embedded in the system.
That's my intuition, at least. Does your argument against anarchy prove too much in the sense that globally, different nations have anarchic relations with each other, and they can't enforce coercive monopoly on each other, but they can coordinate to punish bad actors in the way you would want protection agencies to do, right? Like we can sanction North Korea together or something.

I think that's a very good point and very good question, but I would rephrase my argument. You could say it's my argument against anarchy, and it is an argument against anarchy, but it's also an argument that says anarchy is everywhere. So within government, well, the feds, the state governments, all the different layers of federalism, there's a kind of anarchy.

There's not quite a final layer of adjudication, the way you might think. We pretend there is.
I'm not sure how strong it is. Internationally, of course, how much gets enforced by hegemon, how much is spontaneous order.
Even the different parts of the federal government, they're in a kind of anarchy with respect to each other. So you need a fair degree of collusion for things to work.
And you ought to accept that, but maybe in a Straussian way where you don't trumpet it too loudly. But the point that anarchy itself will evolve enough collusion to enable it to persist, if it persists at all, is my central point.
My point is like, well, anarchy isn't that different. Now, given we've put a lot of social political capital into our current institutions, I don't see why you would press the anarchy button.
But if I'm North Korea and I can press the anarchy button for North Korea, I get that it might just evolve into Haiti, but I probably would press the anarchy button for North Korea if at least someone would come in and control the loose nukes. Yeah.
This is related to one of those classic arguments against the anarchy that under anarchy, anything is allowed, so the government is allowed. Therefore, we're in a state of anarchy in some sense.
In a funny way, that argument's correct. We would re-evolve something like government, and Haiti has done this, but in very bad ways where it's got gangs and killings.
It doesn't have to be that bad. There's medieval Iceland, medieval Ireland.
They had various forms of anarchy, clearly limited in their destructiveness by low population, ineffective weapons. But they had a kind of stability.
You can't just dismiss them. And you can debate how governmental were they, but the ambiguity of those debates is part of the point, that every system has a lot of anarchy, and anarchies have a fair degree of collusion if they survive.
Oh, actually, so I want to go back to much earlier in the conversation where you're saying, listen, it seems like intelligence is a net good. So just that being your heuristic, you should call forth the AI.
Well, not uncritically. You need more argument, but just as a starting point.
Yeah. It's like if more intelligence isn't going to help you, you have some really big problems anyway.
But I don't know if you still have the view that we have like an 800-year timeline for human civilization. But that sort of timeline implies that intelligence actually is going to be the – because the reason we have an 800-year timeline presumably is like some product of intelligence, right?

My worry is that energy becomes too cheap and people at very low cost can destroy things rather easily.

So say if a nuclear – if destroying a city with a nuclear weapon costs $50,000, what would the world look like? I'm just not sure. It might be more stable than we think, but I'm greatly worried and I could readily imagine it falling apart.
Yeah, but I guess the bigger point I'm making is that in this case, the reason the nuke got so cheap was because of intelligence. Now, that doesn't mean we should stop intelligence, but if that's like the end result of intelligence over hundreds of years, that doesn't seem like intelligence is always a net good.
Well, we're doing better than the other great apes, I would say, even though we face these really big risks. And in the meantime, we did incredible things.
So that's a gamble I would take. But I believe we should view it more self-consciously as a sort of gamble.
And it's too late to turn back. The fundamental choice was one of decentralization.
And that may have happened hundreds of millions or billions of years ago. And once you opt for decentralization, intelligence is going to have advantages.
And you're not going to be able to turn the clock back on it. So you're walking this tightrope, and by goodness, you'd better do a good job.
I mean, we should frame our broader history more like that. And it has implications for how you think about x-risk. Again, I think of the x-risk people, a bit of them, it's like, well, I've been living in Berkeley a long time, and it's really not that different. My life's a bit better, and we can't risk all of this. But that's not how you should view broader history.
I feel like, for the x-risk person, even they don't think we're 100% guaranteed to go out by 800 years or something.
No, I don't think we're guaranteed at all. It's up to us.
I just think the risk, not that everyone dies. I think that's quite low, but that we retreat to some kind of pretty chaotic form of like medieval Balkans existence with a much lower population.
That seems to me quite a high risk with or without AI. It's probably the default setting.
Given that you think that's the default setting, why is that not a big part of your, when you're thinking about how new technologies are coming about? Why not consciously think in terms of, is this getting us to the outcome where we avoid this sort of pre-industrial state that would result from the $50,000 nukes? Well, if you think the risk is cheap energy more than AI per se, admittedly, AI could speed the path to cheap energy. It seems very hard to control.
The strategy that's worked best so far is to have relatively benevolent nations become hegemons and establish dominance. So it does influence me.
I want the US, UK, some other subset of nations to establish dominance in AI. It may not work forever, but in a decentralized world, it sure beats the alternative.
So a lot of the AI types, they're too rationalist and they don't start with the premise that we chose a decentralized world a very, very long time ago, even way before humans. And I think you made an interesting point when you were talking about Keynes in the book where you said one of his faults was that he assumed that people like him would always be in charge.
That's right. And I do see that also in the alignment discourse.
Like alignment is, you know, if it's just handing over the government and just assuming the government does what you'd expect it to do. And I worry about this from my own point of view.
So even if you think U.S. is pretty benevolent today, which is a highly contested and mixed proposition, and I'm an American citizen, pretty patriotic, but I'm fully aware of the long history of my government in killing, enslaving, doing other terrible things to people.
And then you have to rethink that over a long period of time, maybe the worst time period that affects the final outcome, even if the average is pretty good. And then if power corrupts and if government even indirectly controls AI systems, so the US government could become worse because it's a leader in AI, right?

But again, I've got to still take that over China or Russia or wherever else it might be.

I just don't really understand when people talk about national security.

I've never seen the AI doomers say anything that made sense. And I recall those early days, remember, China issued that edict where they said, we're only going to put out AIs that are safe and they can't criticize the CCP.
How many super smart people, and I mean super smart, like Zvi, just jump on that and say, see, China's not going to compete with us. We can shut AI down.
They just seem to have zero understanding of some properties of decentralized worlds. Or Eliezer's tweet, was it from yesterday? I didn't think it was a joke, but oh, there's a problem that AI can read all the legal code and threaten us with all these penalties.
It's like he has no idea how screwed up the legal system is. It would just mean courtroom waits of like 70 or 700 years.
It wouldn't become a thing people are afraid of. It would be a social problem in some way.
What's your sense of how the government reacts when the labs are doing, regardless of how they should react, how they will react when the labs are doing like, I don't know, $10 billion training runs? And if under the premise that these are powerful models, not human level per se, but just they can do all kinds of crazy stuff, how do you think the government's going to, are they going to nationalize the labs or staying in Washington? What's your sense? I think our national security people are amongst the smartest people in our government. They're mostly well-intentioned in a good way.
They're paying careful attention to many things. But what will be the political will to do what they don't control? And my guess is until there's sort of an SBF-like incident, which might even not be significant, but a headlines incident, which SBF was, even if it doesn't affect the future evolution of crypto, which I guess is my view, it won't.
Until there's that, we won't do much of anything. And then we'll have an SBF-like incident and we'll overreact.
That seems a very common pattern in American history. And the fact that it's AI, the stakes might be high or whatever, I doubt if it will change the recurrence of that pattern.
How would Robert Nozick think about different AI utopias? Well, I think he did think about different AI utopias, right? So I believe, whether he wrote about it or talked about it, the notion of humans much smarter than they are, or the notion of aliens coming down who are in some way morally, intellectually way beyond us, he did write about that. And he was worried about how they would treat us.
So he was sensitive to what you would call AI risk, viewed a bit more broadly very early on. What was his take? Well, Nozick is not a thinker of takes.
He was a thinker of speculations and multiple possibilities, which I liked about him. He was worried about it.
This I know. And I talked to him about it, but I couldn't boil it down to a simple take.
It made him a vegetarian, I should add. Wait, that made him? Oh, because we want to be treating the entities that are below us the way we'd want the AIs to treat us? The way aliens from outer space might treat us, we are like that to animals.
It may not be a perfect analogy, but it's still an interesting point. And therefore, we should be vegetarians.
That was his argument. At least he felt he should be.
I wonder if we should honor past generations more, or at least respect their wishes more. If we think of the alignment problem, it's similar to how we react to our previous generations.
Do we want the AIs to treat us as we treat people from thousands of years ago?

Yeah, it's a good question.

And I've never met anyone who's consistent with how they view wishes of the dead.

Yeah.

I don't think there is a consistent, philosophically grounded point of view on that one.

I guess the sort of Thomas Paine view, you don't regard them at all.

Is that not self-consistent?

It's consistent, but I've never met anyone who actually lives according to it. Oh, and what's an example of them contradicting themselves? Well, say, you know, their spouse were to die and the spouse gave them instructions.
Sure. They would put weight on those instructions.
Somewhere out there, there's probably someone who wouldn't, but I've never met such a person. And how about the Burke view, that you take them very seriously? Why is that not self-consistent? The birth view? What do you mean? The Burke view. Oh, well, it's time inconsistent to take those preferences seriously. And Burke himself understood that.
He was a very deep thinker. So, well, you take them seriously now, but as time passes, other ancestors come along.
They have somewhat different views. You have to keep on changing course.
What you should do now, should it be what the ancestors behind us want or your best estimate of what the 30 or 40 years of ancestors to come will want once they have become ancestors? So it's time inconsistent. Again, there's not going to be a strictly philosophical resolution.
There will be practical attempts to find something sustainable. And that which survives will be that which we do.
And then we'll somewhat rationalize it ex post. Yeah, yeah.
There's an interesting book about the ancient, ancient Greeks. What was it called? I forgot the name, but it talks about the hearths that they have for their families, where the dead become gods.
But then over time, if you keep this hearth going for hundreds of years, there's like thousands of ancestors whose names you don't even remember, right? Who are you praying to? And then it's like the Arrow impossibility theorem for all the gods. What do they all want me to do? And you can't even ask them.
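The Arrow analogy here is the classic Condorcet cycle. As a hypothetical illustration (the ancestors and their rankings below are invented), a few lines of Python show how pairwise majority preferences among the dead can cycle, so there is no consistent answer to "what do they all want me to do?":

```python
# Three "ancestors," each with a complete ranking over three filial duties.
# The names and rankings are invented; the cyclic structure is the classic
# Condorcet example underlying Arrow's impossibility theorem.
rankings = {
    "ancestor_1": ["honor_hearth", "tend_graves", "keep_rites"],
    "ancestor_2": ["tend_graves", "keep_rites", "honor_hearth"],
    "ancestor_3": ["keep_rites", "honor_hearth", "tend_graves"],
}

def majority_prefers(a: str, b: str) -> bool:
    """True if a strict majority of ancestors ranks duty a above duty b."""
    votes = sum(r.index(a) < r.index(b) for r in rankings.values())
    return votes > len(rankings) / 2

# Pairwise majorities form a cycle, so no single duty beats all the others:
print(majority_prefers("honor_hearth", "tend_graves"))  # True
print(majority_prefers("tend_graves", "keep_rites"))    # True
print(majority_prefers("keep_rites", "honor_hearth"))   # True
```

Each duty loses some pairwise majority vote, so aggregating the ancestors' wishes by majority rule yields no stable "will of the dead."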
Yeah. Okay, we were talking before we started recording about Argentina and the reforms they're trying there.
And they're trying to dollarize because the dollar is more stable than their currency. But this raises the question of why is the dollar so stable? So we're also a democracy, right? But like the dollar seems pretty well managed.
What is the larger explanation of why monetary policy seems well managed in the U.S.? Well, U.S. voters hate inflation, mostly for good reasons, and we have enough wealth that we can pay our bills without having to inflate very much.
And 2% has been stable now for quite a while. Now, it's an interesting question which I cannot answer, and I have looked into this and I have asked smart people from Argentina.
Why does Argentina in particular have recurring waves of hyperinflation? Is there something about the structure of their interest groups that inevitably, recurringly leads them to demand too much? I suppose. But there's plenty of poor, badly run countries that don't have hyperinflation.
African countries historically have not had high rates of hyperinflation, haven't had high rates of inflation. Why is that? Well, maybe they don't capture enough through seigniorage for some reason.
Currency holdings aren't large enough. There's some kind of financial repression.
I don't know. But it's very hard to explain why some of these countries, but not others, go crazy with the printing press.
And this is maybe a broader question about different institutions in the government, I don't understand enough to evaluate their like object level decisions. But if you look at the Supreme Court or the Federal Reserve or something, just from a distance, it seems like they're really well run, competent organizations with highly technocratic, you know, nonpartisan people running them.
They're not nonpartisan, but they're still well run. Yeah.
And what's the theory of why these institutions in particular are so much better run? Is it just that they're one step back from direct elections? Is it that they have traditions of knowledge within them? How do we think about this? I think both of those, I don't think the elections point is sufficient because there's plenty of unelected bodies that are totally corrupt around the world. Most of them are, perhaps.
Some sense of American civic virtue that gets communicated. And then the incentives are such, say you're on the Fed for a while, what you can do afterward can be rewarding, but you want a reputation for having done a good job.
So your sense of morality and your private self-interest coincide. And that's pretty strong.
And we're still in that loop. I don't really see signs of that loop breaking.
It's also striking to me how many times I'll read an interesting article or paper and the person who wrote it, it's like the former head of the Federal Reserve in New York or something. It just seems like that's a strong indication of these institutions that...
The standards are very high. And if you speak with any of those people who've been on Fed boards, ask them questions, they're super smart, super involved, curious, really, for the most part, do want the best thing for their country.
Going back to these economists, at the end, you talk about how you're kind of disappointed in the turn that economics has taken. Maybe I just want to...
I'm not surprised, right? It's division of labor. Adam Smith, who said it would make people a bit feeble-minded and incurious, was completely correct.
Wait, Adam Smith said what would make people incurious? Division of labor. Oh, I see, right.
Yeah. Not stupid.
You know, current economic researchers probably have never been smarter, but they're way less broad and less curious. Patrick Collison put it in an interesting way where he said, in the past, maybe thinkers were more interested in delving into the biggest questions.
But if they couldn't do it rigorously, in a tractable way, they would make the tradeoff in favor of the big question. And today, we make the opposite tradeoff.
Does that seem like a fair comparison? I think that's correct. And I would add that, say, in the time of Smith, there was nothing you could do rigorously.
So there was no other option. Well, oh, I'm going to specialize in memorizing all the grain prices and run some great econometrics on that, and that'll be rigorous.
It's really William Stanley Jevons who introduced this notion to the Anglo world, that there's something else you can do that's rigorous.
It was not yet rigorous, but he opened the door and showed people the alternative. Of the Jevons paradox? I would say his work in statistics, originally on the value of money, but his statistical work on coal also had some rigor, so you're not wrong to cite that.
Jevons just showed that rigorous statistical work and economics could be the same thing. And that was his greater innovation than just marginalism.
So he's an underrated figure. Maybe he should be in the book in a way, but it had some unfortunate secondary consequences.
Too many people crowd into specialization. Crowd is a funny word to use because they're each sitting in their separate nodes, but it's a kind of crowding.

Is there some sort of Hayekian solution here, where in markets the effect of having this sort of decentralized process is that the sum is greater than the parts, whereas in academic disciplines the sum is just a bunch of different statistical aggregates? There's no grand theory that comes together as a result of all this micro work. Is there some Hayekian solution here?

Well, yes, you and I are the Hayekian solution, that as specialists proliferate, we can be quote-unquote parasitic on them and take what they do and turn it into interesting larger bundles that they haven't dreamt of and make some kind of living doing that.
And we're much smaller in number, but I'm not sure how numerous we should be. And there's a bunch of us, right? You're in a separate category, Tyler.
I'm running a podcast here. I run a podcast.
We're not in a separate category. We're exactly in the same category is my point.
And what do you see as the future of the kind of thinking you do? Do you see yourself as the last of the literary economists, or is there a future of this kind of – is it just going to be the Slate Star Codexes? Are they going to take care of it, or this sort of lineage of thinking?

Well, the next me won't be like me. In that sense, I'm the last, but I don't think it will disappear.
It will take new forms. It may have a lot more to do with AI, and I don't think it's going to go away.
There's just a demand for it. There's a real demand for our products.
We have a lot of readers, listeners, people interested, whatever, and there'll be ways to monetize that. The challenge might be competing against AI, and it doesn't have to be that AI does it better than you or I do, though it might, but simply that people prefer to read what the AIs generate for 10 or 20 years.
And it's harder to get an audience because playing with the AIs is a lot of fun. So that will be a real challenge.
I think some of us will be up to it. You'll be faced with it more than I will be, but it's going to change a lot.
Yeah. Okay.
One of the final things I want to do is I want to go into political philosophy a little bit. Okay.
Not that we haven't been doing it already, but yes. Okay.
So I want to ask you about sort of certain potential weaknesses of the democratic capitalist model that we live in. And in terms of both, in terms of whether you think they're object level right, and second, regardless of how right they are, how persuasive and how powerful a force they will be against our system of government and functioning.
Okay. Okay.
So there's a libertarian critique that basically democracy is sort of a random walk with a drift toward socialism. And there's also a ratchet effect where government programs don't go away.
And so it just ends up toward socialism at the end. It ends up with having government that is too large.
Yeah. But I don't see the evidence that it's the road to serfdom.
France and Sweden have had pretty big governments, way too large in my opinion, but they haven't threatened to turn autocratic or totalitarian. Certainly not.
And you've seen reforms in many of those countries. Sweden moved away from government approaching 70% of GDP, and now it's quite manageable.
Government there should be smaller yet. I don't think the trend is that negative.
It's more of a problem with regulation and the administrative state. But we've shown an ability to create new sectors like big parts of tech.
They're not unregulated. Laws apply to them, but they're way less regulated.
And it's a kind of race. That race doesn't look too bad to me at the moment.
Like we could lose it. But so far, so good.
So the critique should be taken seriously, but it's yet to be validated. How about the egalitarian critique from the left that you can't have the inequality the market creates with the political and moral equality that humans deserve and demand? They just say that.
But what's the evidence? The US has a high degree of income inequality; so does Brazil, a much less well-functioning society.
Brazil continues. On average, it will probably grow 1% or 2%.
That's not a great record. But does Brazil have to go up in a puff of smoke?
I don't see it. And how about the Nietzschean critique? In The End of History, Fukuyama says this is more powerful.
This is the one he's more worried about, more so than the leftist critique. Over time, basically what you end up with is the last man and you can't defend the civilization.
You know the story. It's a lot of words.
I mean, is he short the market? I've asked Fukuyama this. He's not – this is a long time ago.
But he wasn't short the market then. Again, it's a real issue.
it seems to me the problems of today, for the most part, are more manageable than the problems of any previous era. We still might all go poof, return to a medieval Balkan-style existence in a millennium or whatever.
But it's a fight. And we're totally in the fight.
And we have a lot of resources and talent. So like, let's do it.
Okay. I don't see why that particular worry, it's a lot of words.
And I like to get very concrete, like even if you're not short the market, if that were the main relevant worry, where would that show up in asset prices as it got worse? It's a very concrete question. I think it's very useful to ask.
And when people don't have a clear answer, I get worried. Where does your prediction that hundreds of years down the line we'll have the $50,000 nukes show up in the asset prices? I think at some point, VIX, an index of volatility, will go up.
Probably not soon. Nuclear proliferation has not gone crazy, which is wonderful.
But I think at some point, it's hard to imagine it not getting out of control. Last I read, VIX is surprisingly low and stable.
That's right. I think 2024 is on the path to be a pretty good year.
Yeah. Or do you think the market is just wrong in terms of thinking about both geopolitical risks from Israel or whatever? No, I don't think the market's wrong at all.
I think that war will converge. I'm not saying the humanitarian outcome is a good one, but in terms of the global economy, I think markets are thinking rationally about it, though the rational forecast, of course, is often wrong.
What's your sense on the scaling stuff when you look at the arguments in terms of what's coming? How do you react to that? Well, your piece on that was great. I don't feel I have the expertise to judge that as a technical matter.
It does seem to me, intuitively, it would be weird on the technical side if scaling just stopped working. But on the knowledge side, I think people underestimate possible barriers.
And what I have in mind is that quite a bit of reality, the universe, might in some very fundamental way simply not be legible, and that there's no easy and fruitful way to just, quote-unquote, apply more intelligence to the problem. Like, oh, you want to integrate general relativity and quantum mechanics.
It may just be we've hit the frontier, and there's not a final layer of, oh, here's how it fits together. So there's no way to train an AI or other thing to make it smarter to solve that.
And maybe a lot of the world is like that. And that to me, people are not taking seriously enough.
So I'm not sure what the net returns will be to bigger and better and smarter AI. That seems possible for P versus NP type of reasons.
It's just harder to make further discoveries. But I feel like we have pretty good estimates in terms of the declining researcher productivity because of low-hanging fruit being gone, in this sort of sense of we're reaching the frontier.
And whatever percent it is a year, if you can just keep the AI population growing faster than that, if you just want to be crude about it, that seems enough to, if not get to the ultimate physical synthesis, at least get much further than where human civilization would get in the same span of time. That seems very plausible.
I think we'll get further. I expect big productivity gains.
As a side note, I'm less convinced by the declining researcher productivity argument than I used to be. So the best way to measure productivity for an economist is wages.
And wages of researchers haven't gone down. Period.
In fact, they've gone up. Now, they may not be producing new ideas.
You might be paying them to be functionaries or to manage PR or to just manage other researchers. But I think that's a worry that we have a lot more researchers with generally rising researcher wages, and that hasn't boosted productivity growth.
China, India, South Korea brought into the world economy, scientific talent. It's better than if we hadn't done it, but it hasn't in absolute terms boosted productivity growth.
And maybe that's a worrisome sign. On the metric of researcher wages, it could just be a fact that even the marginally less useful improvements are worth the extra cost. Think of a company: Google is probably paying its engineers a lot more than it was paying in the early days, even though they're doing less now, because changing a pixel on the new Google page is going to affect billions of users. A similar thing could be happening in the economy, right?
That might hold for Google researchers, but take people in pharma, biomedicine.
There's a lot of private sector financed research or indirectly financed by buying up smaller companies. And it only makes sense if you get something out of it that really works, like a good vaccine or good medication, Ozempic, super profitable.
So wages for biomedical researchers in general haven't gone down. Now, finally, it's paying off, but I'm not sure AI will be as revolutionary as the other AI optimists believe.
I do think it will raise productivity growth in ways which are visible. To what extent, in the conventional growth story, do you think in terms of population size? So you just increase the population size and you get much more research at the other end.
To what extent does it make sense to think about, well, if you have these billions of AI copies, we can think of that as a proxy for how much progress they could produce. Is that not a sensible way to think about it? At some point, having billions of copies probably won't matter.
It will matter much more how good is the best thing we have and how well integrated is it into our other systems which have bottlenecks of their own. The principles governing the growth of that are much harder to discern.
It's probably a much slower growth than just juicing up, oh, we've got a lot of these things and they're trained on more and more GPUs. But precisely because the top seems to matter so much is why we might expect bigger gains, right? So if you think about Jews in the 20th century, 2% of the population or less, and 20% of the Nobel Prizes, it does seem like you can have a much bigger impact if you're on the very tail, if you just have a few hundred John von Neumann copies.
Maybe that's a good analogy, that the impact of AI will be like in the 20th century, the impact of Jews, which would be excellent, right? But it's not extraordinary. It's not a science fiction novel.
It is. I mean, you read the early 20th century stuff as you have, and it's like a slow takeoff right there: you know, going from V-2 rockets to the moon in a couple of decades.
It's kind of a crazy pace of change. Yeah, that's what I think it will be like again.
Great stagnation is over. We'll go back to those earlier rates of change, transform a lot of the world, mostly a big positive, a lot of chaos, disrupted institutions along the way.
That's my prediction. But no one writes a science fiction novel about the 20th century.
It feels a bit ordinary still. Yeah.
Even though it wasn't. I forget the name of the philosopher you asked this to, but the feminist philosopher, you asked the question.
Amia Srinivasan? Yes. You asked the question, what would have to be different for you to be a social conservative? Right.
What would have to be different for you to not be a doomer per se, but just one of these people who like, this is the main thing to be thinking about during this period of history or something like that? Well, I think it is one of the main things we should be thinking about. But I would say if I thought international cooperation were very possible, I would at least possibly have very different views than I do now.
or if I thought no other country could make progress on AI, those seem unlikely to me, but they're not logically impossible. So the fundamental premise where I differ from a lot of the doomers is my understanding of a decentralized world and its principles being primary.
Their understanding is some kind of comparison, like here's the little people, and here's big monster and the big monster gets bigger. And even if the big monster does a lot of good things, it's just getting bigger and here are the little people.
That's a possible framework. But if you start with decentralization and competition and, well, how are we going to manage this? In some ways, my perspective might be more pessimistic, but you can't just think you can wake up in the morning and legislate safety.
You look at the history of relative safety having come from hegemons, and you hope your hegemon stays good enough, which is a deeply fraught proposition. I recognize that.
What's the next book?
I'm already writing it. Part of it is on Jevons, but the title is The Marginal Revolution. Not about the blog, but about the actual marginal revolution.
But it's maybe a monograph, like 40,000 words. But I don't think book length should matter anymore. I want to be more radical on that.

I think 40,000 words is perfect because it actually fits in context, so when you use GPT-4.
Now, context may be bigger by then. Yeah.
But I want to have it in GPT in some way or whatever has replaced it. Okay.
Those are all the questions I had, Tyler. This is a lot of fun.
And keep up the great work. Delighted you're at it. Thank you. Thank you.
Yeah, thanks for coming on the podcast a third time now. A lot of fun. Okay, bye everyone.
Hey everybody, I hope you enjoyed that episode. As always, the most helpful thing you can do is to share the podcast: send it to people you think might enjoy it, put it in Twitter, your group chats, etc. It helps spread the word.
I appreciate you listening. I'll see you next time.
Cheers.