Tyler Cowen - the #1 bottleneck to AI progress is humans

January 09, 2025 59m

I interviewed Tyler Cowen at the Progress Conference 2024. As always, I had a blast. This is my fourth interview with him – and yet I’m always hearing new stuff.

We talked about why he thinks AI won't drive explosive economic growth, the real bottlenecks on world progress, him now writing for AIs instead of humans, and the difficult relationship between being cultured and fostering growth – among many other things in the full episode.

Thanks to the Roots of Progress Institute (with special thanks to Jason Crawford and Heike Larson) for such a wonderful conference, and to FreeThink for the videography.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.

Sponsors

I’m grateful to Tyler for volunteering to say a few words about Jane Street. It's the first time that a guest has participated in the sponsorship. I hope you can see why Tyler and I think so highly of Jane Street. To learn more about their open roles, go to janestreet.com/dwarkesh.

Timestamps

(00:00:00) Economic Growth and AI

(00:14:57) Founder Mode and increasing variance

(00:29:31) Effective Altruism and Progress Studies

(00:33:05) What AI changes for Tyler

(00:44:57) The slow diffusion of innovation

(00:49:53) Stalin's library

(00:52:19) DC vs SF vs EU



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Full Transcript

Tyler, welcome. Dwarkesh, great to be chatting with you.
Why won't we have explosive economic growth, 20% plus, because of AI? It's very hard to get explosive economic growth for any reason, AI or not. One problem is that some parts of your economy grow very rapidly, and then you get a cost disease in the other parts of your economy that, for instance, can't use AI very well.
Look at the U.S. economy.
These numbers are guesses, but government consumption is, what, 18 percent? Health care is almost 20 percent? I'm guessing education is 6 to 7 percent? The non-profit sector, I'm not sure of the number, but you add it all up, that's half of the economy right there. How well are they going to use AI? Is failure to use AI going to cause them to just immediately disappear and be replaced? No, that will take, say, 30 years.
So you'll have some sectors of the economy, less regulated, where it happens very quickly. But that only gets you a modest boost in growth rates, not anything like, oh, the whole economy grows 40% a year. That's it in a nutshell. The mechanism behind cost disease is that there's a limited supply of laborers, and if there's one high-productivity sector, then wages everywhere have to go up, so your barber also has to earn twice the wages or something.
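The cost-disease mechanism described here can be made concrete with a toy two-sector calculation. This is a minimal sketch with made-up productivity numbers (none of these figures come from the conversation):

```python
# Toy illustration of the Baumol cost-disease mechanism: productivity
# doubles in one sector, competition for labor pushes the economy-wide
# wage up, so the relative price of the stagnant sector's output rises
# even though nothing changed there. All numbers are invented.
wage = 1.0                      # common economy-wide wage
tech_output_per_worker = 10.0   # productive sector
barber_output_per_worker = 2.0  # stagnant sector (haircuts per hour)

def unit_cost(output_per_worker, wage):
    # labor cost per unit of output
    return wage / output_per_worker

before = unit_cost(barber_output_per_worker, wage)

# productivity doubles in tech, and wages equalize across sectors
tech_output_per_worker *= 2
wage *= 2

after = unit_cost(barber_output_per_worker, wage)
print(f"Haircut unit cost rises {after / before:.0f}x")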
With AI you can just have every barber shop with 1,000 times the workers, every restaurant 1,000 times the workers, not just Google. So why would the cost disease mechanism still work here? Cost disease is more general than that.
Let's say you have a bunch of factors of production, say five of them. Now all of a sudden we get a lot more of one of them, intelligence; that's already been happening, to be clear, right? Well, that just means the other constraints in your system become a lot more binding, that the marginal importance of those goes up, and the marginal value of more and more IQ or intelligence goes down.
So that also is self-limiting on growth, and the cost disease is just one particular instantiation of that more general problem, the one we illustrate with talk about barbers and string quartets and the like. If you were talking to a farmer in 2000 BC and you told him that growth rates would go up 10x, 100x, that you'd have 2% economic growth after the Industrial Revolution, and he started talking about bottlenecks, what would you say to him in retrospect? He and I would agree.
I hope. I think I would tell him, hey, it's going to take a long time.
And he'd say, hmm, I don't see it happening yet. I think it's going to take a long time.
And we'd shake hands and walk off into the sunset. And then I'd eat some of his rice or wheat or whatever, and that would be awesome.
But the idea that you can have a rapid acceleration in growth rates and that bottlenecks don't just eat it away, I mean, you could agree with that, right? I don't know what the word "could" means. I would say this.
You look at market data, say real interest rates, stock prices. Right now, everything looks so normal, startlingly normal, even apart from AI.
So what you'd call prediction markets are not forecasting super rapid growth anytime soon. If you look at what experts on economic growth write, we had Chad Jones here yesterday. He's not predicting super rapid growth, though he thinks AI might well accelerate rates of growth.
So the experts and the markets agree. Who am I to say different from the experts and the markets? You're an expert.
Yeah, but I'm with the other experts. In his talk yesterday, Chad Jones said that the main variable, the main input into his model for growth, is just population. If you have a doubling, an order-of-magnitude increase, in the population, you plug that number in, in his model, you get explosive economic growth. I don't agree with his model.
Why not buy the models? His model is far too much a one-factor model, right? Population. I don't think it's very predictive.
We've had big increases in effective world population in terms of purchasing power. A lot of different areas have not become more innovative until the last, say, four years.
Most of them became less innovative. So it's really about the quality of your best people or institutions, as you and Patrick were discussing last night.
And there it's unclear what's happened, but it's also fragile. There's the perspective of the economist, but also that of the anthropologist, the sociologist.
They all matter, but I think the more you stack different pluralistic perspectives, the harder it is to see that there's any simple lever you can push on, intelligence or not, that's going to give you breakaway economic growth. I mean, what you just said, where you're bottlenecked by your best people, seems to contradict what you were saying in your initial answer: that even if you boost the best parts, you're going to be bottlenecked by the restaurants and whatever.
You're bottlenecked. You're one of our best people, right? You're frustrated by all kinds of things.
I think I'm going to be making a lot more podcasts after AGI. Okay, good.
I'll listen. I'll be bottlenecked by time.
Just marketing. Here's a simple way to put it.
Most of sub-Saharan Africa still does not have reliable clean water. The intelligence required for that is not scarce.
We cannot so readily do it. We are more in that position than we might like to think, but along other variables.
And taking advantage of the intelligence from strong AI is one of those. So about a year ago, your co-writer on Marginal Revolution, Alex Tabarrok, had a post about the extreme scarcity of high-IQ workers.
And so if the labor force in the United States is 164 million people, if one in a thousand of them are geniuses, you know, you have 164,000 geniuses. That's why you have to do semiconductors in Taiwan, because that's where they're putting their small number of geniuses.
We're putting ours in finance and tech. If you look at that framework, I mean, come on, we have a thousand times more of those kinds of people, and at the end of the day, the bottlenecks are going to eat all that away? Or if you ask any one of these people, if you had a thousand times more of your best colleague, your best coworker, your best co-founder, the bottlenecks are going to eat all that away, your organization isn't going to go any faster?
I didn't agree with that post. If you look at labor market data, the returns to IQ as it translates into wages, they're amazingly low. They're pretty insignificant.
And people who are very successful, they're very smart, but they're people who have, say, eight or nine areas where they're like, on a scale of one to 10, they're a nine. Like they have one area where they're just like an 11 and a half on a scale of 1 to 10.
And then on everything else, they're an 8 to a 9 and have a lot of determination. And that's what leads to incredible success.
And IQ is one of those things, but it's not actually that important. It's the bundle, and the bundles are scarce, and then the bundles interacting with the rest of the world.
Just try going to a mid-tier state university and sit down with the committee designed to develop a plan for using artificial intelligence in the curriculum. Then come back to me and tell me how that went, and then we'll talk about bottlenecks. They will write a report. The report will sound like GPT-4, and we'll have the report. The report will not be bottlenecked, I promise you.
These other traits, look, the AIs are, if it's conscientiousness, if it's pliability, whatever, the AIs will be even more conscientious. They'll work 24-7 and they'll like, if you need to be deferential to the FDA, they'll write the best report the FDA has ever seen and they'll get things going along.
With these other traits, they're not going to be bottlenecked by them, right? They'll be smart and they'll be conscientious. That I strongly believe.
Look, I think they will boost the rate of economic growth by something like half a percentage point a year. Over 30, 40 years, that's an enormous difference.
It will transform the entire world. But in any given year, we won't so much notice it.
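The arithmetic behind "half a percentage point a year is an enormous difference over 30 or 40 years" is easy to check. A minimal sketch, assuming a hypothetical 2% baseline growth rate (my assumption, not a figure from the conversation):

```python
# How much difference does an extra half percentage point of annual
# growth make over 40 years? Baseline 2%/yr is an illustrative
# assumption; the 0.5pp boost is the figure Cowen cites.
baseline, boosted, years = 0.02, 0.025, 40

base_level = (1 + baseline) ** years   # growth multiple at 2.0%
boost_level = (1 + boosted) ** years   # growth multiple at 2.5%

print(f"40-year growth at 2.0%: {base_level:.2f}x")
print(f"40-year growth at 2.5%: {boost_level:.2f}x")
print(f"Economy ends up {boost_level / base_level - 1:.0%} larger")
```

At these assumed rates the economy ends up roughly 22% larger after 40 years, a gap you would barely notice in any single year.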
And a lot of it is something like a drug that might have taken 20 years. Now it will come in 10 years.
But at the end of it all is still our system of clinical trials and regulation. And if everything that took 20 years takes 10 years, over time, that's an immense difference.
But you don't quite feel it as so revolutionary for a long time. So the whole vibe of this progress studies thing is, look, we've got all these low-hanging fruits or medium-hanging fruits that if we fix our institutions, if we made these changes to regulations to institutions, we could rapidly boost the rate of economic growth.
And you're saying, okay, so we can fix the NIH and get increases in economic growth, but we add a billion extra people, 10 billion extra people, the smartest people, the most conscientious people, and that makes barely an iota of difference to economic growth. Isn't there a contradiction between these two perspectives on how much the rate of economic growth can increase? There's diminishing marginal returns to most of these factors.
So a simple one is how it interacts with regulation, law, and the government. Another huge one is energy usage.
How good is our country in particular at expanding energy supply? I've seen a few encouraging signs lately with nuclear power. That's great.
Most places won't do it. And even those reports, exactly how many years it will take, I know what the press releases say, we'll see, you know, could be 10 years or more.
And that will just be a smidgen of what we'll need to implement the kind of vision you're describing. So yeah, they're going to be bottlenecks all along the way, the whole way.
And it's going to be a tough slog, like the printing press, like electricity. The people who study diffusion of new technologies never think there will be rapid takeoff.
So my view is kind of like I'm always siding with the experts. So economists, social scientists, most of them are blind and asleep to the promise of strong AI.
They're just out to lunch. I think they're wrong.
I trust the AI experts. But when you talk about, say, diffusion of new technologies, the people who do AI are basically totally wrong.
The people who've studied that issue, I trust the experts. And if you put together the two views or in each area you trust the experts, then you get my view, which is amazing in the long run, will take a long time, tough slog, all these bottlenecks in the short run.
And the fact that there's like a billion of your GPT whatevers, which I'm all in love with, I promise you, it's gonna take a while. What would the experts say if you said, look, we're gonna have, forget about AI, because I feel like when people hear AI, they think of GPT-4, not the humans, not the things that are gonna be as smart as humans.
So what would the experts say if you said tomorrow the world population, the labor force, is going to double? What impact would that have? Well, what's the variable I'm trying to predict? If you mean energy usage, that's going to go up, right? Over time, it's probably going to double. Growth rate? I'm not sure it'd be a noticeable difference.
Doubling the world population? Yeah, I'm not sure. I don't think the Romer model has been validated by the data.
And I don't agree with the Chad Jones model, much as I love him as an economist. I don't think it's that predictive.
I mean, look at artistic production in Renaissance Florence. There's, what, 60,000 people in the city, the surrounding countryside.
But it's that so many things went right at the top level that it was so amazing in terms of still value added today. And the numbers model doesn't predict very well.
The world economy today is some 100 trillion something. If the world population was one-tenth of what it is now, if we only had 1 billion people, 100 million people, you think we could have the world economy at this level with our level of technology? No, the delta's a killer, right? This is one thing we learned from macro.
The delta and the levels really interact. So shrinking can kill you.
Just like companies, nonprofits, if they shrink too much, often they just blow up and disappear. They implode.
But that doesn't mean that growing them gets you 3x, 4x, whatever, proportional to how they grow. It's oddly asymmetric.
It's very hard to internalize emotionally that intuition in your understanding of the real world, but I think we need to. What are the specific bottlenecks? Humans, here they are.
Bottleneck, bottleneck. Hi, good to see you.
And some of you are terrified. You're going to be even bigger bottlenecks.
That's fine. It's part of free speech.
But my goodness, once it starts changing what the world looks like, there will be much more opposition, not necessarily from what I call doomster grounds, but just people like, hey, I see this has benefits, but I grew up, trained my kids to live in some other kind of world. I don't want this.
And that's going to be a massive fight. I really have no prediction as to how it's going to go, but I promise you that will be a bottleneck.
But you can see it even historically. You don't have to go from the farmers to the Industrial Revolution's 10x. You can just look at actual cases in history where we have had 10% rates of economic growth.
You go to China after Deng Xiaoping, they have decades of 10% economic growth. And yes, that's partly because you can do some sort of catch-up.
But the idea that you can't replicate that with AI, that it's infeasible? Where were the bottlenecks when Deng Xiaoping took over? They're in a mess now.
I'm not sure how it's going to go for them. They're just a middle-income country.
They struggled to tie per capita income with Mexico. I think they're a little ahead of Mexico now.
They're the least successful Chinese society in part because of their scale. Their scale is one of their big problems.
There's this fear that if they democratize and try to become a normal country, that the median voter won't protect the interests of the elites. So I think they're a great example of how hard it is for them to scale because they're the poorest group of Chinese people on the planet.
I mean, not the challenges now, but the fact that for decades they did have 10% economic growth. In some years, 15%.
Well, starting from a per capita income of like $200 per head. And now they're Mexico. Their ancestors were going to be like as poor as the Chinese, you know, like 30 years ago. I'm very impressed by the Industrial Revolution. You could argue, for all the progress studies people here, it's maybe the most important event in human history. The typical rate of economic growth during that period was about one and a half percent.
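That 1.5% figure compounds the same way. A quick sketch of the arithmetic:

```python
# How a modest-sounding 1.5% annual growth rate (the Industrial
# Revolution figure cited above) compounds over long horizons.
# Pure arithmetic; no data beyond the 1.5% rate.
rate = 0.015
levels = {years: (1 + rate) ** years for years in (50, 100, 200)}
for years, level in levels.items():
    print(f"After {years} years: {level:.1f}x the starting income")
```

Roughly 2.1x in 50 years and about 4.4x per century, which is how a modest annual rate becomes arguably the most important event in human history.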
And the future is about compounding and sticking with it. And, you know, seeing things pay off in the long run, just human beings are not going to change that much.
And I don't think that property of our world will change that much, even with a lot more IQ and conscientiousness. I interviewed you like nine months ago, and I was asking you about AI then, and I think your attitude was like, eh.
And I think now, I don't know, has your attitude changed since we talked about nine months ago? You know, I don't remember what I thought in what month, but I would say on the whole, I see more potential in AI than I did a year ago, and I think it has made progress more quickly than I had been expecting. And I was pretty bullish on it back then.
The O1 model to me is very impressive. And I think further extensions in that direction will make a big, big difference.
And the rate at which they come is hard to say, but it's something we have and we just have to make it better. You showed me your document of different questions that you came up with for o1, for economic reasoning. I don't think it was for GPT-4. Okay, yeah. But what percentage of them did o1 get right? Because I don't think I got a single one of those right. Those questions were too, you know, they were too easy. They were for GPT-4, and I've, like, abandoned those questions. You know, 100 questions of economics, how well does a human do on them? They're hard, but it's, like, pointless.
So I would not be shocked if somebody's AI model in less than three years, you know, beat human experts on a regular basis. Let's put it that way.
Did that update you in any way, that now you've retired these questions because they were too easy for these models? And initially, they are hard questions objectively, right? They're just easy for o1. I feel like Kasparov the first time he met Deep Blue. You know, there were two matches, and the first one Kasparov won.
And I lived through that first match. I feel like I'm sort of in the midst of the first match right now, but I also remember the second match.

And in the final game, Kasparov made that bonehead error in the Caro-Kann Defense. That too was a human bottleneck. And he lost the whole match. So we'll see what the rate of change is.

Yesterday, Patrick was talking about how important it is for the founders of different institutions to hang around and be the ones in charge. I've heard you talk about, like, you know, the Beatles were great because the Beatles were running the Beatles. Why do you think it's so important for that to be the case?
I think courage is a very scarce input in a lot of decisions. And founders, they have courage to begin with, but they also need less courage to see through a big change in what the company will do.

So Facebook, now Meta, has made quite a few big changes in its history. So Mark had a lot of courage to begin with.
But if Mark Zuckerberg says, we're going to do this, we're going to do that, it's pretty hard for everyone else to say no in a good way. I really like that.
So it economizes on courage, having a founder, and you're selecting for courage. Those would be two reasons.
How does that explain the Beatles' success? Well, the Beatles are an interesting example. I mean, they broke up in 1970, right? Rolling Stones are still going.
That tells you something. But the Beatles created much greater value, and the Beatles are the group we still all talk about much more, even though the Rolling Stones are still with us.
So they were always unstable. There's like two periods of the Beatles.
Early Beatles, John is the leader, but then Paul works at it, and John becomes a heroin addict, and Paul gets better, better, better, and ultimately there's no core. There's not a stable equilibrium.
The Beatles split up, but that creative tension for those core seven to eight years was just unbelievable, and it's four founders. Ringo, not quite a founder, but basically a founder, because Pete Best was crummy and they got rid of him right away. It's one of the most amazing stories in the world. I like studying these amazing productivity stories: Johann Sebastian Bach, Magnus Carlsen, Steph Curry, the Beatles. I think they're worth a lot of study. They're atypical. You can't just say, okay, oh, I'm going to be like the Beatles.
Like, you're going to fail. The Beatles did that.
But nonetheless, I think it's a good place to look for getting ideas and seeing risks. Hello, everyone.
This is Tyler Cowen, and I would like to personally thank Jane Street for sponsoring this podcast episode with Dwarkesh Patel. I've been visiting Jane Street for some number of years.
They're renowned for their brainy, challenging environment and also for their ability to spot and recruit talent. Those are some of the reasons why, for me, those are the trips and the visits I look forward to the most.
I would just say this. If it is an appropriate option for you, please do consider working there.
I've always had a blast during my visits, learned a lot, and I recall one time when I gave a talk, we all went out to dinner, and then quite late, well, people didn't go back home, but they all went back to the Jane Street office to play chess, Bughouse, and other games. They're better at these games than you might think, so please update your other expectations accordingly.
Thank you again. What did you think of Patrick's observation of the competency crisis? I see it differently from Patrick, and he and I have discussed this.
So I think there's basically increasing variance in the distribution. So young people at the top are doing much better, and they're far more impressive than they were in earlier times.
And if you look at areas where we measure performance of the young, chess is a simple example. We perfectly measure performance.
Very young people are just better and better at chess. That's proven.
Even like NBA basketball, you have very young people doing things that they would not have been doing, say, 30 years ago. And a lot of that is mental and training and application and not being a knucklehead.
So the top of the distribution getting much better. You see this also in science, internet writing.
The very bottom of the distribution, well, youth crime has been falling since the 90s. So the very bottom of the distribution also is getting better.
I think there's some thick middle, above the very bottom and extending a bit above the median, that's clearly getting worse. And because they're getting worse, there are a lot of anecdotal examples of them getting worse, like students wanting more time to take the test, or having, you know, flimsy excuses, or mental health problems with the young, or whatever. There's a lot more of that because of that thick band of people getting worse.
And that's a great concern.

But I see the very bottom and a big chunk of the top of the distribution as just much better. And I think it's pretty proven by numbers that that's the case. So I would say this: increasing variance, with a weird mix of where the gains and declines are showing up. And I've said this to Patrick, and I'm going to say it to him again, and I hope I can convince him.

It seems concerning, then; the compositional implication is that the average goes down if you look at PISA scores or something. The median goes down.
You know, a lot of tests, they've pushed more people into taking the test, PISA scores in particular. So I suspect those scores adjusted for that are roughly constant, which is still not great.
I agree. And I think there's some decline.
Some of it is pandemic and we're recovering a bit slowly, getting back to human bottlenecks. But I think a lot of the talk of declining test scores is somewhat overblown.
At most, there's a very modest decline, I would say. If the top is getting better, what do you make of the anecdotal data he was talking about yesterday where the Stanford kids come up to him and say, you know, all my friends, they're stupid.
You can't hire anybody from Stanford anymore. That should be the cream of the crop, right? There's plenty of data on the earnings of Stanford kids.
If there were a betting market in, you know, what's the future trend? I'm long. How long I should be, I really don't know.
But I visit Stanford, not every year, but, you know, regularly. And there's selection in who it is I meet.
But yeah, we're talking about selection, and they're very impressive. And Emergent Ventures has funded a lot of people from Stanford. As far as I can tell, as a group they're doing very well, so that is of no concern to me. If you're worried about the Stanford kids, something seems off in the level of salience and focus in the argument, because they're overall doing great. And that they have high standards, that's good too. You know, Paul McCartney thought John Lennon was a crummy guitar player, and John thought a lot of Paul's songs were crap. In a way they're right, in a way they're wrong, but it's a sign of what high standards the Beatles had.
Yeah, I mean, you'd hope. How old are you, by the way? 24. Okay, now go back however many years. Was there a 24-year-old like you doing the equivalent of podcasting? It's just clearly better now than it was back then.
And you were doing this a few years ago. So it's just obvious to me.
The young peaks are doing better. And you're proof.
Wasn't Churchill, by the time he was 24, an international correspondent, Cuba, India, and was, I think, the highest paid journalist in the world by the time he was 24? I don't know. I don't know.
I mean, what was he paid, and how good was his journalism? I just don't know. I don't think it's that impressive a job to be an international journalist.
Like, what does it pay people now? He did some good things later on, but most of his early life, he's a failure. And then ask the Irish, getting back to Patrick, ask the Irish and people from India what they think of younger Churchill, and you'll get an earful. Like, his real great achievement, I don't know how old he was exactly, but it's quite late in his life. And until then, he's a destructive failure. There was no one on Twitter to tell him, hey, Winston, you need to rethink this whole Irish thing. And today there would be.

Sam, Sam will do it, right? Sam will tweet at Winston Churchill, got to rethink the Irish thing. And Sam is persuasive. If you read his aphorisms, I think he would have actually been pretty good on Twitter.
Maybe, but again, you know, like what does the equilibrium look like when everything changes? But clearly he was an impressive guy, especially given how much he drank. Okay, so even if you don't buy the Stanford kids, if you don't buy the young kids, the other trend he was talking about, where if you look at the leaders in government, whatever you think of Trump, Biden, we're not talking about Jefferson and Hamilton anymore, right? How do you explain that trend? Well, Jefferson and Hamilton, they're founders, right? And they were pretty young at the time.
You can do great things when you're founding a country in a way that just cannot be replicated later on. Putting aside the current year, which I think is weird for a number of reasons, but I think mostly we have had impressive candidates, and most of the U.S.
bureaucracy in Washington I think is pretty impressive. Generals, national security people, top people in agencies, people at treasury, people at the Fed, and I interact with these groups like pretty often.
Overall, they're impressive and I've seen no signs they're getting worse. Now, if you want to say the two candidates this year, again, there's a lot, something we're not going to talk about, but there is a lot you could say on the negative side, yes.
But like Obama, Romney, whichever one you like, I think like, gee, these are two guys who should be running for president, and that was not long ago. So then there's a bunch of candidates running who are good.
What goes systematically wrong in the selection process is the two who are selected are not even as good as the average of all the candidates. You mean this? And I'm not talking about America in particular.
If the theory is just like noise, it seems like it skews one way. Well, the Democrats had this funny path with Biden, and Kamala didn't get through the electoral process in the normal way.
So that just means you get weirdness, whatever you think of her as a candidate. Trump, whom I do not want to win, but I think he is extraordinarily impressive in some way, which along a bunch of dimensions exceeds a lot of earlier candidates. I just don't like the overall package. But I would not point to him as an example of low talent. I think he's a supreme talent, but harnessed to some bad ends.

If you look at the early 20th century, some of the worst things that happened to progress were just these psychopathic leaders. What happened? Why did we have so many just awful, awful leaders in the early 20th century? Well, give me a country and a name and a time period, and I'll try to answer it.
You mean Woodrow Wilson? He was one of them in particular. He was from the university.
That's what was wrong with him, right? And just think of what school it was. Who? Sorry? Woodrow Wilson.
Yeah. One of our two or three worst presidents on civil rights.
World War I, he screwed up. The peace process, he screwed up.
Indirectly, he led to World War II. Reintroduced segregation of civil service in some key regards.
And just seemed he was a nasty guy and should have been out of office sooner, given his health and state of mind. So he was terrible.
But he was sort of, on paper, a great candidate. Hoover, on paper, was a great candidate and was an extremely impressive guy.
I think he made one very bad set of decisions relating to deflation and letting nominal GDP fall. But my goodness, there's a reason they called it the Hoover Institution after Hoover.
But the Hitlers, Stalins, and Maos, was there something that was going on that explains why that was just a crummy time for world leaders? I don't think I have a good macro explanation of that whole trend, but I would say a few things. That's right after the period where the world is changing the most.
And I think when you get big new technologies, and this is relevant for AI, you get a lot of new arms races. And sometimes the bad people win those arms races.
So at least for quite a while, you had Soviet Russia and Nazi Germany winning some arms races. And they're not democratic systems.
Later you have China with Mao being not a democratic system, and then you have a mix of bad luck. Like, Stalin and Mao were just draws from the urn.
You could have gotten less crazy people than what you got. And I agree with Hayek, the worst get to the top under autocracy, but that they're that bad? That was just some bad luck too.
There's other things you could say, but I think we had a highly disoriented civilization, you see it in aesthetics approaching beginnings of World War I, art and music radically changing, people feel very disoriented, there's a lot up for grabs, imperialism, colonialism start to be factors, just there wasn't like a stable world order, and then you had some bad luck tossed into that. And all of a sudden, these super destructive weapon systems compared to what we had had.
And it was awful. I'm not pretending that's some kind of full explanation, but that would be like a partial start.
You compared our current period to 17th century England, where you have a lot of new ideas, things go topsy-turvy. What's your theory of why things go topsy-turvy at the same time when these eras come about? What causes this volatility? I don't think I have a general theory.
If you want to talk about 17th century England, so they have the scientific revolution. You have the rise of England as a true global power.
Navy becomes much more important. The Atlantic trade route, because of the new world, becomes much more important.
Places like the Middle East, India, China, that were earlier, you know, Persia had major roles. They're crumbling partially for reasons of their own, and that's going to help the relative power of the more advanced European nations.
England has a lot of competition from, you know, the Dutch Republic, France, happening at the same time that for the first time in human history that I know of, we have sustained economic growth, according to Greg Clark, starting in the 1620s of about 1% a year. And that is compounding again, slow numbers, but compounding.
And England is the place that gets the compounding at 1%, starting in the 1620s. And somehow they go crazy, Civil War, kill the king, all these radical ideas, libertarianism comes out of that, which I really like, John Milton, John Locke, also this brutal conquest of the new world, like very good and very bad coming together, and I think it should be seen as these set of processes where very good and very bad come together, and we might be in for a dose of that again now, soon.
Seems like a simple question, but basically, how do you make sure you get the good things and not the crazy civil wars? You can't make sure. I mean, you try at the margin to nudge toward the better set of things, but it's possible that all the technical advances that recently have been unleashed, now that the great stagnation is over, which of course include AI, will mean another crazy period. It's quite possible. I think the chance of that is reasonably high. What's your most underrated cult? Most underrated cult? Progress studies. I think you called peak EA right before SBF fell. That's right. I was at an EA meeting and I said, you know, hey, everyone, this is as good as it gets.
Enjoy this moment. It's all basically going to fall apart.
You're still going to have some really significant influence, but you won't feel like you have continued to exist as a movement. That's what I said.
And they were shocked. They thought I was insane.
But I think I was mostly right. And what did you see? Was the exuberance too high? Did you see SBF's balance sheet? What did you see? Well, I was surprised that SBF was insolvent.
I thought it was a high-risk venture that had no regulatory defense and would end up being worth zero. But I didn't think he was actually playing funny games with the money.
I just have a long history of seeing movements in my lifetime from the 1960s onwards, including libertarianism, and there are common patterns that happen to them all. We're here in Berkeley, my goodness.
Free speech movement? Where's free speech in Berkeley today? Like how'd that, you know, work out in the final analysis? So it's a very common pattern. And just to think, wow, the common pattern is going to repeat itself.
And then you see some intuitive signs and you're just like, yeah, that's going to happen. And the private benefits of belonging to EA, like they were very real in terms of the people you could hang out with or, like, the sex you could have, but they didn't seem that concretely crystallized to me in institutions the way they are, like, in Goldman Sachs or legal partnerships. So that struck me as very fragile, and I thought that at the time as well.
Sorry, I'm not sure I understood. What were the intuitive signs? Well, not seeing, like, the very clear crystallized permanent incentives to keep on being a part of the institutions. A bit of excess enthusiasm from some people, even where they might have been correct in their views. Some cult-like tendencies. The rise of it being so rapid that it was this uneasy balance of secular and semi-religious elements that tends to flip one way or the other or just dissolve.
So I saw all those things, and I just thought, like, the two or three best ideas from this are going to prove incredibly important still, and from this day onwards. I don't give up that belief at all. But just as a movement, I thought it was going to collapse.
When did we hit peak progress studies?
You know, when Patrick and I wrote the piece on progress and progress studies, he and I thought about this, talked about it. I can't speak for him, but my view at least was that it would never be such a formal thing, or like controlled or managed or directed by a small group of people, or like trademarked. It would be people doing things in a very decentralized way that would reflect a general change of ethos and vibe.
So I hope it has in many ways like a gentler but more enduring trajectory. And I think so far I'm seeing that.
Like I think in a lot of countries, science policy will be much better because of progress studies. That's not proven yet.
You see some signs of that. You wouldn't say it's really flipped, but a lot of reforms.
You're in an area where, like, no one else has any idea, much less a better idea or a good idea. And some modestly small number of people with some talent will work on it and get like a third to half of what they want.
And that will have a huge impact. And like, if that's all it is, I'm thrilled.
And I think it will be more than that. I asked Patrick yesterday, you know, how do you think about progress studies differently now that you know AI is a thing that's happening? Yeah.
What's the answer for you? I don't think about it very differently. But again, if you buy my view about, like, slow takeoff, why should it be that different? Well, you have more degrees of freedom.
So if you have more degrees of freedom, all your choices, decisions, issues, problems are more complex. So you're in more need of like some kind of guidance.
So all inputs other than the AI, like, rise in marginal value. And since I'm an input other than the AI, I hope that means I rise in marginal value, but I need to do different things.
So I think of myself over time as less a producer of content and more like a connector, a people person, developing networks, in a way where if there somehow had been no, like, transformers and LLMs, I would have stayed a bit more a producer of content. When I was preparing to interview you, I asked Claude to take on your persona, and compared to other people I tried this with, it actually works really well with you.
Because I've written a lot on the internet. Yeah, that's why.
This is my immortality, right? That's right. So I've heard you say in the past, you know, you don't expect to be remembered in the future. At the time, I don't know if you were considering that, because of your volumes of text, you're going to have an especially salient persona in future models. How does that change your estimation of your intellectual contribution going forward?

I do think about this. And the last book I wrote, you know, it's called GOAT, who's the greatest economist of all time. I'm happy if humans read it, but mostly I wrote it for the AIs. I wanted them to know I appreciate them. And my next book, I'm writing even more for the AIs. Again, human readers are welcome. It will be free. But sort of, oh, who reviews it? Like, oh, is TLS going to pick it up? Like, it doesn't matter anymore. The AIs will trawl it and know I've done this, and that will shape how they see me, I hope in a very salient and important way. And as far as I can tell, no one else is doing this. No one is, like, writing or recording for the AIs very much. But if you believe even, like, a modest version of this progress, and I'm modest in what I believe relative to you and many of you, like, you should be doing this. You're an idiot if you're not writing for the AIs. They're a big part of your audience, and their purchasing power, we'll see, but over time it will accumulate, and they're going to hold a lot of crypto. We're not going to give them bank accounts, at least not at first.
What part of your persona will be least captured by the AIs if they're only going by your writing?
I think I should ask that as a question to you. What's your answer?
I don't think AIs are that funny yet. They're better on humor than many people allege, but I don't use them for humor.
It's interesting that you learn so much about a person when you're interviewing them for a job, or you for Emergent Ventures. You can read their application, but just in the first 10 minutes, their vibe.
Three minutes, but yes. Yes.
And so whatever's going on there that's so informative, the AIs won't have just from the writing. Not at first, but I think I've heard of projects.
This is secondhand. I'm not sure how true it is.
But that interviews are being recorded by companies that do a lot of interviews. And these will be fed into AIs and coded in particular ways.
And then people, in essence, will be tracked through either their progress in the company or a LinkedIn profile. And we're going to learn something about those intangibles at some rate.
I'm not sure how that will go. But I don't view it as something we can never learn about.
Do you actually have a conscious theory of what's going on when you get on a call with somebody and three minutes later, you're like, you're not getting the grant? What happens? Well, often there's like one question the person can't answer. So if it's someone say applying with a non-profit idea, plenty of people have good ideas for non-profits and I see these all the time.
But when you ask them the question, how is it you think about building out your donor base? It's remarkable to me how many people have no idea how to answer that. And without that, you don't have anything.
So it depends on the area, but that would be an example of an area where I ask that question pretty quickly, and a significant percentage can't answer it, and I'm still willing to say, well, come back to me when you have a good case. Oddly, none of those people have ever come back to me that I can think of, but I think over time some will.
And that's like a very concrete thing. But there's other intangibles.
Just when you see what the person thinks and talks about too much. So like if someone wants to get an award only for their immigration status, that to me is a dangerous sign.
Even though at the same time, usually you're looking for people who want to come to the U.S., whether they can do it or not. And there's just a lot of different signals you pick up, like people somehow have the wrong priorities, or they're referring to the wrong status markers.
And it comes through more than you would think. If you had the transcript of the call but you couldn't see the video: you would say a no in the case where you could see the video, but you might say yes if you only see the transcript. What happens in those cases?
Having only the transcript would be worth much, much less, I would say, if that's what you're asking. It would be maybe 25 percent of the value. And what's going on with the other 75 percent, we don't know. But I think you can become much better at figuring out that 75 percent, partly just with practice.
Yesterday Patrick was talking about concentrations of talent that he sees in the history of science, with these labs that have six, seven Nobel Prizes. And he was also talking about, you know, how the second employee at Stripe was Greg Brockman.
He wasn't visible to other parts of the startup ecosystem in the same way. What's your theory of what's going on? Why are these clusters so limited? What's actually being inherited over and transmitted here? Well, Patrick was being too modest.
I thought his answer there was quite wrong, but he sort of knows better. He was able to hire Greg Brockman because he's Patrick.
It's very simple. He wasn't going to come out and just say that, and he may even deny it a bit to himself.
But if you're Patrick and John, you're going to attract some Greg Brockmans. And if you're not, it's just way, way harder, because the Greg Brockmans are pretty good at spotting who are the Patricks and Johns.
So in a way that's just pushing it back a step, but at least it's answering part of the question in a way that Patrick didn't because he was modest and humble. It seems like that makes the clusters less valuable then because if Greg Brockman is just Greg Brockman and Greg chose Patrick and John because they're Patrick and John and Patrick and John chose Greg because he's Greg.
It wasn't that they made each other great. It was just like talent sees talent, right? Well, they make each other much better, just like Patrick and John made each other much better and still do.
But you're getting back to my favorite human bottlenecks. Thank you.
I'm fully on board with what you're saying. To get those, like, how many Beatles are there? It's amazing how much stuff doesn't really last.
And it's just super scarce achievement at the very highest levels. And that's this extreme human bottleneck.
And AI, even a pretty strong AI, remains to be seen how much it helps us with that. I'm guessing ever since you wrote the Progress Studies article, you got a lot of applications for Emergent Ventures from people who want to do some progress studies thing.
On the margins, do you wish you got fewer of those proposals or more of them? Or do you just wish they were unrelated? I don't know. To date, a lot of them have been quite good, and many of them are people who are here.
There's a danger that as the thing becomes more popular, you know, at the margin, they become much worse, and I guess I'm expecting that. So maybe mentally, I'm raising my own bar on those.
And maybe over time, I find it more attractive. If the person is interested in, say, like the Industrial Revolution, if they're interested in Progress Studies, capital P, capital S, like over time I'm growing more skeptical of that.
Not that I think there's anything intrinsically bad about it, like I'm at a Progress Studies conference with you, but still, when you think about selection and adverse selection, I think you've got to be very careful and keep on raising the bar there. And it's still probably good if those people do something in capital P, capital S Progress Studies, but it's not necessarily good for Emergent Ventures to just keep on funding the number.

If you buy your picture of AI where it increases growth rates by half a percentage point, what does your portfolio look like?
I can tell you what my portfolio is. It's a lot of diversified mutual funds with no trading. Basically, pretty heavily US weighted and nothing in it that would surprise you. Now my wife works for the SEC, so we're not allowed to do most things. Like even to buy most individual stocks, you may not be allowed to do it. Certainly not allowed derivatives or shorting anything. But if somehow that restriction were not there, I don't think it would really matter. So buy and hold, diversify, hold on tight, and make sure you have some cheap hobbies and are a good cook.

Why aren't you more leveraged if you think the growth rate's going to go up, even slightly?
Well, I think I also have this view, maybe a lot of the equity premium is in the past, that people, especially in this part of the world, are very good at internalizing value, and it will be held and earned in private markets and by VCs rather than like public pension funds.
Why give it to them? I think Silicon Valley has figured this out. Sand Hill Road has figured it out.
So what one can do with public equities is unclear. What private deals I can get on with my like really tiny sum of wealth, like I would say is pretty clear.
So I'm left with that. And like money for me is not what's scarce.
Time is scarce. And I do have some very cheap hobbies.
And I feel I'm in very good shape in that regard. That being said, I think you could get pretty good deal flow.
You wouldn't do a portfolio. I don't know.
You can only focus on so many things. So if I have good deal flow in Emergent Ventures, which I'm not paid to do, say I had a billion dollars from whatever, I wouldn't have any better way of spending that billion dollars than buying myself a job doing Emergent Ventures or whatever.
So I'm sort of already where I would be if I could buy the thing for a billion dollars. So I'm just not that focused on it.
And I think it's good that you limit your areas of focus. And if some people, it's just money.
Like, I think that's great. I don't begrudge them that at all.
I think it's socially valuable. Let's have more of it.
Bring it on. But it's just not me.
When I started my career, it was really unknown that an economist could really earn anything at all. Like there were no tech jobs with billionaires.
Finance was a low-paying field. Like when I started choosing a career, it was not a thing.
There wasn't this fancy Goldman Sachs. It was a slow, boring thing. Programmers were weird people in basements, like, maybe, who knows what, you know, that bad stuff. And then, like, an economist, you would earn back then maybe forty thousand dollars a year. Like two people, Milton Friedman and Paul Samuelson, had outside income, and you would have no expectation that you would ever earn more than that. And I went into this with all of that.
Like, relative to that, I feel so wealthy. Just like, oh, you can sell some books, or you can give a talk.
I don't know. I just feel like I am a billionaire now.
And if anything, I want to become what I've called an information trillionaire. I'm not going to make that, but I think it's a good aspiration to have.
Just collect more information and be an information trillionaire. Like Dana Gioia has that same goal.
He and I have talked about this. I think that's a very worthy goal.
Was there a second field that you were considering going into other than economics? It was either economics or philosophy. And I saw back then, this would be like the late 1970s, it was much harder to get a job as a philosopher, though not impossible the way it sort of is now, and they were paid less and just had fewer opportunities. So I thought, well, I'll do economics. But I think in a way I've always done both.
Okay, I really want to go back to this diffusion thing we were talking about at the beginning with the economic growth. Yeah. Because I feel like, what am I not understanding? I hear the word diffusion, I hear the word bottlenecks, but I just don't have anything concrete in my head when I hear that.
What are the people who are thinking about AI missing here when they just plug in these things into their models? I'm not sure I'm the one to diagnose, but I would say when I'm in the Bay Area, like the people here to me are the smartest people I've ever met on average. Most ambitious, dynamic and smartest, like by a clear grand slam compared to New York City or London or anywhere.
That's awesome and I love it. But I think a side result of that is that people here overvalue intelligence and their models of the world are built on intelligence mattering much, much more than it really does.
Now, people in Washington don't have that problem. We have another problem, and that needs to be corrected, too.
But I just think if you could root that out of your minds, it would be pretty easy to glide into this expert consensus view that tech diffusion is pretty universally pretty slow, and that's not going to change. No one's built a real model to show why it should change, other than sort of hyperventilating blog posts about how everything's going to change right away.
The model is that you can have AIs make more AIs, right? That you can have increasing returns.
Ricardo knew this, right? He didn't call it AI, but Malthus, Ricardo, I would say. But they understood the pessimism intrinsic in diminishing returns in a way that people in the Bay Area do not, and it's better for them that they don't know it. But if you're just trying to inject truth into their veins rather than ambition, diminishing returns is a very important idea.
In what sense was that pessimism correct? Because we do have 7 billion people and we have a lot more ideas as a result, we have a lot more industries. Yeah, I said they were too pessimistic, but they understood something about the logic of diffusion, where if they could see AI today, I don't think they would be so blown away by it.
Oh, you know, I read Malthus. Ricardo would say, Malthus and I used to send letters back and forth.
We talked about diminishing returns. This will be nice.
It'll extend the frontier, but it's not going to solve all our problems. One concern you could have about progress in general is if you look at the famous Adam Smith example, you've got that pin maker,

and the specialization obviously has all these efficiencies, but the pin maker is just like he's

doing this one thing. Whereas if you're in the ancestral environment, you get to basically

negotiate with every single part of what it takes to keep you alive and every other person in your tribe. Does individuality, is that like lost with more specialization, more progress? Well, Smith thought it would be.
I think compared to his time, we have much more individuality, most of all in the Bay Area. That's a good thing.
I worry, you know, the future with AI, that a kind of demoralization will set in in some areas. I think there'll be full employment pretty much forever.
That doesn't worry me. But what we will be left doing, what exactly it will be and how happy it will make us.
Again, I don't have pessimistic expectations. I just see it as a big change.
I don't feel I have a good prediction. And if you don't have a good prediction, you should be a bit wary and just like, oh, okay, we're going to see.
But some words of caution are merited.
When you're learning about a new field, the vibe I get from you when you're doing a podcast is like you're picking up the long tail. You talk to interesting people or you read the book that nobody else would have considered. How often do you just have to, like, read the main textbook, versus you can just look at the esoteric thing? How do you balance that trade-off?
Well, I haven't interviewed that many scientists. Like, Ed Boyden would be one, Richard Prum the ornithologist from Yale. Those are very hard preps. I think those are two excellent episodes, but I'm limited in how many I can do by my own ability to prepare. I like doing historians the most, because the prep is a lot of work, but it's easy, fun work for me.
And I know I always learn something. So now I'm prepping for Stephen Kotkin, who's an expert on Stalin and Soviet Russia.

And that's been a blast. I've been doing that for, like, four months, reading dozens of books. And it's very automatic, where if you try to figure out what Ed Boyden is doing with the light shining into the brain, it's like, oh my goodness, do I understand this at all? Or am I like the guy who thinks the demand curve slopes upward?
So it just means I'm only going to do a smallish number of scientists, and that's a shame, but maybe AI can fill in for us there. You recommended a book to me, Stalin's Library, which talks about the different things, the different books that Stalin read, and the fact that he was kind of a smart, well-read guy.
And the book also mentioned, I think, in the early chapters that, look, he never, in all his annotations, if you look through all his books, there's never anything that even hints that he doubted Marxism. That's right.
There's a lot of other evidence that that's the correct portrait. What's going, like, smart guy who's read all this literature, all these different things, never even questions Marxism.
What's going on there? What do you think? I think the culture he came out of had a lot of dogmatism to begin with. And I mean both Leninism, which is extremely dogmatic.
You know, Lenin was his mentor, like Patrick's thing about the Nobel laureates. It happens in insidious ways, too.
So, you know, Lenin is the mentor of Stalin. Soviet culture, communist culture, and then Georgian culture, which appealing and fun-loving and wine-drinking and dance-heavy as it is, there's something about it that's a little, you know, you pound the fist down and you tell people over the table how things are.
He had all those stacked vertically, and then we got this bad genetic luck of the draw on Stalin, and it turned out obviously pretty terrible. And then do you buy Hayek's explanation that the reason he rose to the top is just because the most atrocious people win in autocracies? What is that explanation missing? I think what Hayek said is subtler than that.
And I wouldn't say it's Hayek's explanation. I would say Hayek pinpointed one factor.
There are quite a few autocracies in the world today where the worst people have not risen to the top. UAE would be, I think, the most obvious example.
I've been there. As far as I can tell, they're doing a great job running the country.
There are things they do that are nasty and unpleasant. I would be delighted if they could evolve into a democracy.
But the worst people are not running UAE. This I'm quite sure of.
So it's a tendency. There are other forces, but culture really matters. Hayek is writing about a very specific place and time. I would say it really surprised me. There are these family-based Gulf monarchies, with very large clannish interest groups of thousands of people, that have proven more stable and more meritocratic than I ever would have dreamed, say, in 1980. And I know I don't understand it, but I just see it in the data. It's not just UAE. There's a bunch of countries over there that have outperformed my expectations, and they all have this broadly common system.
Let me ask you a question. When you go around the world, because I know you go outside the Bay Area and the East Coast as well, and you talk about progress studies-related ideas, what's the biggest difference in how they're received versus the audience here?
Well, the audience here is so, so different. Yeah. Like, you're the outlier place of America, and then where I normally am, outside of Washington, DC, that's like the other outlier place. And in a way we're opposite outliers. I think that's healthy for me, both where I live and that I come here a lot and that I travel a lot. But you all are so, like, out there in what you believe.
I'm not sure where to start. You all, you know, you come pretty close to thinking in terms of infinities on the creative side and the destructive side.
And no one in Washington thinks in terms of infinities. They think at the margin.
And like overall, I think they're much wiser than the people here. But I also know if everyone or even more people thought like the DC people, like our world would end, we wouldn't have growth.
They're terrible. People in the EU are super wise.
Like you have a meal with like some sort of French person who works in Brussels. It's very impressive.
They're cultured. They have wonderful taste.
They understand all these different countries. They know something about Chinese porcelain.
And if like you lived in a world ruled by them, the growth rate would be negative 1%. So there's some way in which all these things have to balance.
I think the U.S. has done a marvelous job at that, and we need to preserve that. What I see happening: the UK used to do a great job at it, but somehow the balance is out of whack, and you have too many non-growth-oriented people in the cultural mix.
The way you describe this French person you're having dinner with. Which I've had.
We have dinners, yeah. And the food is good, too.
I don't know, it kind of reminds me of you, in the sense that you're also well-cultured and you know all these different esoteric things. I don't know, what's the biggest difference between you and these French people you have dinner with? I don't think I'm well-cultured, that would be one difference.
There are many differences. First, I'm an American.
I'm a regional thinker. I'm from New Jersey.
So I'm essentially a barbarian, not a cultured person. I have a veneer of culture that comes from having collected a lot of information.
So I'll know more about culture than a lot of people. And that can be mistaken for being well-cultured, but it's really quite different.
It's like a barbarian's approach to culture. It's like a very autistic approach to being cultured, and should be seen as such. So I feel the French person is very foreign from me, and there's something about America they might find strange or repellent, and I'm just so used to it. I see intellectually how many areas we fall flat on or are destructive, but it doesn't bother me that much, because I'm so used to it.
What is most misunderstood about autism? Well, if you look at the formal definition, it's all about deficits that people have, right? Now, if you define it that way, like, no one here is autistic.
If you define it some other way, which maybe we haven't pinned down yet, like a third of you here are autistic. I don't insist on owning the definition.
I think it's a bad word. It's like libertarian.
I would gladly give it away. But there is like some coherent definition where a third of you here probably would qualify.
And this other definition where none of you would. And it's like kids in mental homes banging their head against the wall.
So I don't know, it seems that whole issue needs this huge reboot. These DC people, you know, one frustration that tech people have is that they have very little influence, it seems, in Washington compared to how big that industry is.
And industries that are much smaller will have much greater sway in Washington. Why is tech so bad at having influence in Washington? Well, I think you're getting a lot more influence than maybe you realize quickly through national security reasons.
So the feds have not stopped the development of AI, whatever you think they should or should not do. It's basically proceeded.
And national security as a lobby, they don't care about tech per se, but it is meant that on a whole bunch of things in the future, you will get your way a bit more than you might be expecting. But a key problem you have is so much of it is in one area, and it's also an area where there's a dominant political party.
Even within that political party, there's in many parts of California a dominant faction. And you compare yourself to like the community bankers who are in like so many American counties, have connections to every single person in the House of Representatives.
Your issues, in a way, are not very partisan. The distortions you cause through your privileges are invisible to America.
It's not like Facebook, where some Jonathan Haidt has written some best-selling book complaining about what it is you do. There's not a best-selling book complaining about the community banks.
And they are like ruthless and powerful and get their way. And I'm not going to tangle with them.
And you all here are so far from that, in part because you're dynamic and you're clustered. Final question.
So I think based on yesterday's session, it seems like Patrick's main misgiving with progress is that you look at the younger cohort. There's something that's not going great with them.
And it seems to, you know, you would hope that over time progress means that people are getting better and better over time. If you buy his view of what's happening with young people.
What's your main misgiving about progress? The thing where you're like, ah, if I look at the time series data, I'm not sure I like where this trend is going. Well, our main concern always should be war.
And I don't have any grand theory of what causes war or if such a theory ever is possible. But I do observe in history that when new technologies come along, they are turned into instruments of war.
And some terrible things happen. You saw this in 17th century England.
You saw this with electricity and the machine gun. Nuclear weapons is a story in process.
And I'm not sure that's ever going away. So my main concern with progress is progress and war interact.
And it can be in good ways, like the world, a la Stephen Pinker, has had relative peace. That's fraying at the edges in the data.
The numbers are now moving the wrong way, but it's still way better than most past time periods. And we'll have to see where that goes. But there might be a ratchet effect where wars become more destructive. And even if they're more rare, when they come, each one's a real doozy. And whether we really are or ever can be ready for that, I'm just not sure. And thank you very much, Dwarkesh.
This will be the second session we have to end on a pessimistic note.
No, the optimistic note is that we're here. Human agency matters. If we were all sitting around in year 1000, we never could have imagined the world being anything like this, even in a much poorer country. And, you know, it's up to us to take this extraordinary and valuable heritage and do some more with it. And that's why we're here. So I say let's give it a go.
Great note to end on. Thanks, Tyler.
I'm very grateful to the Roots of Progress Institute for hosting this progress conference, at which I got a chance to chat with Tyler and ask him a few fun questions. Jason, Heike, and the whole team did a wonderful job organizing it, and it was a blast. And Freethink Media did a great job with the videography, as you can see. If you enjoyed this episode, please subscribe, please like, please share it, and send it to your friends who you think might enjoy it. And otherwise, I guess I'll see you on the next one. All right, cheers.