Are We Building AI for Progress or Power? — ft. Daron Acemoglu

Ed Elson and Scott Galloway are joined by Nobel Prize–winning economist and MIT Professor Daron Acemoglu to discuss the economic consequences of AI. He breaks down his research on why nations fail, shares his biggest concerns about America’s future, and offers advice for the next generation of scholars.

Subscribe to the Prof G Markets newsletter

Order "The Algebra of Wealth" out now

Subscribe to No Mercy / No Malice

Follow Prof G Markets on Instagram

Follow Scott on Instagram

Follow Ed on Instagram and X
Learn more about your ad choices. Visit podcastchoices.com/adchoices


Runtime: 55m

Transcript

Support for the show comes from Anthropic, the team behind Claude.

When you're analyzing market trends or trying to understand what's really driving economic shifts, you need more than surface-level takes.

Meet Claude, the AI thinking partner that works through complexity with you.

Whether you're dissecting earnings reports or exploring the ripple effect of policy changes, Claude helps you dig deeper into the analysis that matters.

Try Claude for free at claude.ai/profgmarkets and see why the world's best problem solvers choose Claude as their thinking partner.

To remind you that 60% of sales on Amazon come from independent sellers, here's Scott from String Joy. Hey, y'all, we make guitar strings right here in Nashville, Tennessee.

Scott grows his business through Amazon. They pick up, store, and deliver his products all across the country.
I love how musicians everywhere can rock out with our guitar strings.

One, two, three, four.

Rock on, Scott. Shop small business like mine on Amazon.

Support for this show comes from strawberry.me. Be honest, are you happy with your job? Or are you stuck in one you've outgrown? Or never wanted in the first place?

Sure, you can probably list the reasons for staying, but are they actually just excuses for not leaving? Let a career coach from strawberry.me help you get unstuck.

Discover the benefits of having a dedicated career coach in your corner. Go to strawberry.me/unstuck to claim a special offer.
Today's number: 43.

That's the percentage decrease in peanut allergies over the last decade. Ed.
True story, I'm pretty sure I had a nut allergy when I was a kid. My parents thought I was trying to avoid church.

Listen to me. Markets are bigger than us.
What you have here is a structural change in the wealth distribution. Cash is trash.
Stocks look pretty attractive. Something's going to break.

Forget about it. Oh my God.
That's pretty bad, right? I can see it coming a mile away

in all transparency. Brings up an interesting question, Scott.
Do you have any allergies? No, I literally have none.
As a matter of fact, I'm so sick of

waiters, who are, I think, legally mandated in the UK — I don't know if they are in the US — to ask you, does anyone have any allergies? Is that right? Yeah, they have to ask you, do you have any allergies?

I'm like, bad service?

I've had it with the service in London. You are so spoiled in New York.
The service is so good. It's true.
What are some of your horror stories? Nothing horror.

I mean, you know, I don't, it takes them more than three minutes to get me my second makers and ginger. Those are my horror stories.

My worst days are better than most people's best days, but it's not.

I go to this place called, since Chiltern burned down, I got to go to this place called Kensington Roof Gardens, which has way too many fucking dudes and it's way too crowded. Beautiful place, though.

I was there over the summer. It's unbelievable.
Yeah, it's great for the 11 days a year.

You can actually go into the roof garden, but otherwise we're all crammed inside wishing Chiltern was still open.

And there's, I'm not exaggerating, there's 700 people lining up at the bar, 695 of them men.

And you'd think, and there's like two bartenders sitting there like slicing limes very methodically and elegantly.

And Jesus Christ, dude, just start like literally spraying beers at us and we'll open our mouths. That's the thing that's freaked me out about the members' club.

You're a member of Chez Margaux, because now that we're paying for it. I'm not a member of that club.
You're not? I keep seeing you there. Yeah, I get invited by members.

Well, anyways, the thing about it that freaked me out — I'm like, it's so young here.
And then I realized it's not; the reason I think it's so young is, you know what it has?

Most of these clubs have guys in their 40s and women in their 20s and 30s. And what Chez Margaux has is a bunch of dudes in their 20s, which makes it feel like high school.

I've got to agree with you. Surely, surely, surely — you're not very happy about that.
I'm fine with that. I like it.
They all come up to me: I love your content, man.
Hey, what's Ed like?

And I'm literally like, were you student body president or the mascot or something? Everybody knows you.

And it totally bums me out because I like to get a little fucked up and talk to everyone at the bar, throw an unlit cigarette in my mouth, put on sunglasses and just be a cliche of me.

And then when people come up and say they know you, I feel as if I have to act semi-respectable. I'm like, oh, hey, hi, yeah.
Oh, yeah. No, I don't know what to do.

What do you think is going to be the next interest rate cut?

Good.

Good. They're all so funny.
This is the trouble. You shouldn't have been a professor.
You clearly, you should have been like an actor or like a pop star or something. That's what you really want.

100%. I'm just not that talented, but I agree with you.
No, I think you are. You're just, you're talented in the thing.
You've got a good brain. You've got a good economic brain.

And really, what you want to be doing is like, you've got the Bezos thing.

You want to be like Justin Bieber or something. You had me till the Justin Bieber part.

I would have gone for Ryan Reynolds or George.

George Clooney. He seems smart and very politically oriented.
But the thing I hate about your friends — that's a good way to start a sentence — is they're so annoyingly earnest.

Professor Galloway, it's so nice to meet you. I know Ed.
He's such a good person, isn't he? Fuck off.

Anyways, they're so...

Anyway, it's great to meet your friends out. I've got to learn who these friends are. I haven't been there. You're Princeton buddies.
I haven't been there. You're Princeton buddies.

What is it, like a reading club or a drinking club? Like, you guys have Pimm's Cups or something and play cricket. Eating, yeah.
Is that what it is? Eating club, that's right.

So instead of fraternities, they have eating clubs? They actually have both. Oh, really? Yeah, that is what it's called.
It's called an eating club.

But you were in an eating club, and you would get into any of them because of that faux British accent, which, by the way, everyone is figuring out is total bullshit. They're on to me.

You're from Nashville.

All right, and enough of this. Get to the headlines.
One of our douchiest. Okay, here is our conversation with Daron Acemoglu, Nobel Prize-winning economist, New York Times best-selling author.

This is not an easy segue. This is not an easy segue.

Oh, God. Nobel Prize-winning economist, New York Times best-selling author and professor of economics at MIT.
Professor Acemoglu, very good to have you on the show. Thanks, Ed.

Great to be with you and Scott.

So, we want to start with AI for obvious reasons.

That's all we can really talk about at the moment. We've been seeing a lot of

bullish sentiment in AI. We've also been seeing a lot of circular investments happening, a lot of people saying that it's a bubble.

Last year, you were asked to answer on a scale of one to 10, how much of an impact AI is going to have on the world. And your answer at the time was negative 6.

So

we want to unpack this, starting with that hot take.

What are your views on AI at this point? And do you still have a negative six rating on AI right now? Let's break that into

three pieces. One is,

is AI a transformative technology with great capabilities? Yes. That's why it's not minus one or plus one.
It's minus six.

Second, how quickly will this technology reach fruition? And there, I think the answer is it depends on how quickly it's pushed.

So I think for a positive development path, I think we need a more deliberative approach.

We are rushing into AI in a way that I think makes applications using AI less likely to develop because we are just doing it too quickly.

And we also don't have a roadmap of what it is that we really want from AI, while everybody recognizes that this is a technology that's going to have a tremendous number of side effects, foreseen and unforeseen consequences.

So all of these things, plus, very importantly for me, the focus on automation and AGI while there are better things to do with AI, I think tip the scales towards negative.

So it's minus six, minus five, minus seven, take your pick. But I'm very worried about the direction of AI, where it's much more concentrated.

Who uses, who controls information, and what we do with it. Yeah, could you break down what that negative impact would actually look like?

There's the aspect of the concentration of power, and I'm sure that could have many implications. We're also worried about it.

There is the implication of automation and the idea that this would replace labor, it would replace people's jobs. Say more about how it goes from negative one to negative six in your view.

What is that destructive impact that you're so worried about? In the production domain, if we use AI

mostly for automation,

I think not only would we be missing some of the really transformative uses of it,

but we would create much smaller productivity gains than expected, and we would also create various social outcomes that are negative related to

job loss for certain groups,

lack of employment opportunities for certain groups, wage stagnation or declines like we've experienced in the 1990s.

All of these are on the negative part related to the production process. But I'm also very worried about

the fact that AI is first and foremost a communication technology.

And as a communication technology, it changes political and social dynamics.

And when it centralizes information in the hands of a few companies,

it can have a variety of very negative effects on democracy, on dissent, on

diversity, variation in opinion, all sorts of things that we are really not prepared for.

What would you say to someone who would say that you are being a Luddite? Or, I mean, we've seen transformative technologies in the past, whether it be oil or the electric grid or the internet.

And it seems as though when these transformative technologies come along, there is a lot of concern about what it will do to our economy and how it will negatively impact our economy.

But many people would say, well, eventually it works out and we shouldn't be so worried about technology. What do you say to those people?

Well, there are really two theories that people could have about

long-run effects of technologies in general.

A first one is that

in the long run, things will work out by themselves. Let it rip.

And

just the dynamics of things, for example in the labor market or via democratic processes, sometimes semi-democratic processes, mean that we'll just find the right way of dealing with things.

The second one is that, no, it's a deliberative set of choices that we have to make in order to

make sure that the long-run effects are better than the short-run ones. And I think if you look at several transformative technologies, indeed they did have fairly negative short-run effects.

The beginning of the British Industrial Revolution is associated with, depending on how you measure it, 70, 80, 90 years of real wage declines or stagnation and

huge increases in inequality. The transition out of agriculture similarly at first created quite a lot of social and economic hardship.
But in both cases, later adaptation worked out much better.

But I would say, even though decisions were made without a roadmap, there were specific decisions about how to use technology, how to change the organization of production, and also political decisions that were quite important.

And so it's not an automatic process. So I definitely, I'm not an AI pessimist.
And so I don't know what your definition of Luddite is, but I'm not an AI pessimist.

But I do not believe that we're going to get the best out of AI or even the second best out of AI if we just say, oh, let's not worry about all the disruptions. Somehow things are going to work out.

So it feels as if a lot of the concern or discussion around AI has moved towards who can secure

reliable, large amounts of electrons or energy.

One, do you think this is an attempt to paint a future where the demand is going to be unlimited, and a bit of a head fake?

Or do you see the same sort of power constraints that these guys are seeing? And what do you see as kind of the downstream impact of that? I do not believe that power or GPU capacity is the main

limiting factor for the kind of AI that I have in mind.

Because

if we're going to make AI really serve our needs,

I think it needs to have much more human-complementary domain-specific

expertise. It has to be able to be an aid to electricians, to accountants, to journalists, to academics.

And for that, high quality domain specific information data is going to be the real scarce resource.

Whereas right now, the energy demand is mostly from foundation models that are very impressive in some ways, but also very, very expensive to run and have not reliably reached that context-specific domain-relevant expertise.

So I think we have to find a way of getting the best out of foundation models, but combine them with domain-specific models.

What I've also seen is that a lot of the different LLMs are hitting sort of a technical parity

by most kind of, I don't know, hardcore metrics. Do you think there's a scenario — we had Robert Armstrong from the FT on several times, and he said that there are certain technologies where no one company

or small set of companies is able to ring fence

stakeholder and therefore shareholder value. The airlines, the PCs, vaccines were huge innovations, but you didn't have a small number of companies garnering trillions of dollars in market cap.

Do you think this might qualify as one of those industries where, even if it ends up having a huge impact on society, we're overestimating the ability of a small number of companies to capture a ton of shareholder value?

I would be very worried about an industry that is so concentrated. On the other hand, it's a cutthroat industry.

And

we don't understand what sort of industrial organization of AI is going to emerge. So it is almost certain that

whoever is making the foundation-model advances is not going to be the same company that also does all of the applications. So you're going to form an AI stack.

So, once you form that AI stack, where are the risks going to reside? And which part, which layer of the stack is going to get most of the returns? I think that remains to be seen.

It's going to depend on where the real bottleneck is in terms of doing useful things and whether the foundation models are close substitutes for each other when it comes to serving as the first layer of that stack.

I think those are really interesting questions. And it is made more interesting by the fact that

industries that look very competitive at some point later on can be very non-competitive because early competition is about being the one that controls things later on. That's why it's so vicious.

Because everybody thinks that they're going to get the prize and the prize is dominate the industry.

So I don't know whether the competition that you're seeing right now is going to repeat itself in 10 years' time or whether we're going to go to a winner takes all sort of structure at the foundation layer.

Yeah, in your book, Power and Progress, you basically take us through history and the history of technological progress.

And you make the point that technology has not necessarily been as beneficial as we think of it because of how it has been distributed throughout societies, which seems extremely relevant to what we're likely about to see with AI.

Take us through that history.

What is your reasoning for that? How does that play out?

Yeah, I think there are essentially several reasons why the full potential of a suite of technologies may not be realized or may not be realized quickly. One is monopoly.

If one or a few companies dominate everything, and they use that in order to extract all the rents but also to slow down innovation — that's one recipe for many good things not happening.

So, today, I think the digital world or communication world would be very different if AT&T had remained the sole monopoly. So the breakup there probably opened the field for more shake-up.

The second is, you know,

whether that technology is working with labor or is sort of just replacing labor and worse, like during many parts of human history, it's becoming a tool for repressing labor.

Those don't work out great. I mean, slavery was not a very efficient system.
It wasn't just bad for the coerced people; it also wasn't actually generating economic dynamism.

At the time of the Civil War, the U.S.

South was falling further and further behind, while the number of patents, innovations, industrial production, and all sorts of other things were advancing rapidly in other parts of the United States.

So I think all of these things we have to sort of take into account.

Is AI going to be a monitoring technology where workers become more and more powerless because there's so much data being collected about them? That's another concern that we don't often talk about.

So there are many issues here that are intersecting and we see parallels for each one of them in history. None of those are perfect parallels.

We've never been confronted with a technology that's so widespread in its potential applications. But we've had other technologies that are quite transformative as well.

So how do we harness AI in a good way? I mean, what does regulation look like, in your view, such that

it is a net benefit to society versus a negative six?

First of all, I think we need to be

thinking about what it is that we want from AI.

Of course, not everybody's going to agree on that, but at least that conversation needs to be had in a more open, constructive way.

And second, we also need to be clear about what we mean by regulation. I think what most people mean by regulation is a reactive kind of regulation.

Something happens, AI companies do something, and then we are worried about certain additional harms, and then we regulate that. I think instead what we want is something more proactive.

Let's think about where it is that AI can do most

good

and ask ourselves whether it is going in that direction and what are the impediments for it not to go in that direction and see whether we can do certain things to facilitate that.

So if we had not done that kind of facilitation, we would not have the internet, because government support there was quite important. We would not have

renewable technologies that are now, at least in certain applications,

cost competitive with fossil fuels.

So it's not like we have a

sort of a law that says the market, or in particular, a few companies that are steering technology are necessarily going to choose the right paradigm.

I think the market is excellent in doing certain things,

but

market participants are also locked in a particular type of business model often.

They are going after a particular kind of prize, and it is possible to step back and say, well, is there another prize that we should be focusing on?

And that's what I'm arguing: that human complementary AI, where we try to augment human capabilities, expand human capabilities, could have real benefit.

And that's not the direction in which we're going.

We'll be right back after the break. And if you're enjoying the show so far, be sure to give Prof G Markets a follow wherever you get your podcasts.

Support for the show comes from Anthropic, the team behind Claude.

When analyzing complex market movements or policy implications, the difference between surface-level commentary and real insight comes down to asking better questions about the data.

Claude is the AI for minds that don't stop at good enough. It's a collaborator that actually understands your entire workflow and thinks with you, not for you.

Whether you're strategizing your next business move or diving deep into economic analysis, Claude extends your thinking to tackle the problems that matter.

For finance professionals exploring earnings reports or economic policy ripple effects, Claude works through complexity rather than rushing to conclusions.

It helps spot connections across multiple sources, challenge assumptions, and develop insights that go beyond the obvious.

Try Claude for free at claude.ai/profgmarkets and see why the world's best problem solvers choose Claude as their thinking partner.

Support for the show comes from Workday, the to-do list of a small business leader. Close the books, get your people paid, and bring on new hires.

Look, running a small or mid-sized business can be exciting, but it can also be chaotic. That's where Workday comes in.
Workday Go makes simplifying your business a whole lot simpler.

Imagine this, the important aspects of your company, HR and Finance, all on one AI platform. No more juggling multiple systems, no more worrying about growing too fast.

Just the full power of Workday helping small to mid-sized businesses like yours run more smoothly. And Workday Go activates quickly.
You can be up and running in 30 to 60 business days.

So, simplify your business. Go for growth.
Go with Workday Go. Visit workday.com/go to learn more.

Adobe Acrobat Studio, so brand new. Show me all the things PDFs can do.
Do your work with ease and speed. PDF spaces is all you need.
Do hours of research in an instant.

With key insights from an AI assistant. Pick a template with a click.
Now your prezzo looks super slick. Close that deal, yeah, you won.
Do that, doing that, did that, done.

Now you can do that, do that with Acrobat. Now you can do that, do that with the all-new Acrobat.
It's time to do your best work with the all-new Adobe Acrobat Studio.

We're back with Prof G Markets. Do you think the U.S.
is going to be able to maintain, I mean, other than DeepSeek,

it's just very difficult to think of another AI player of almost any importance globally outside of the U.S.

Do you think that the U.S. is going to be able to maintain that type of lead in the AI ecosystem? China has an engineering advantage.

They have a huge number of engineers. They're generally well selected, meaning that more

talented, quantitatively sort of skilled people enter into engineering because it's a very prestigious thing, and they have exams that are relatively unbiased and engineers are highly regarded.

So when it comes to pure engineering things, I think China could have an advantage. On the other hand, the top-down system is hugely inefficient.

There are so many places where inefficiencies build up. People are afraid of taking initiative.

There is no decentralized sort of process. There, the U.S.
has an advantage. How that will shake out at the end would really depend.
DeepSeek is an engineering marvel.

They didn't come up with any of the new methods. The new methods that DeepSeek was using — some of them were invented by Google and OpenAI.

Many of them were invented 20 years before by machine learning scientists. So they just took them, but they combined them quite well. How many more times can they do that?

That's going to be one of the questions. And then, of course, it's not just US and China.
Can other countries catch up? Europe is behind, clearly.

But I don't think there is a law that Europe has to be behind.

There are many talented AI scientists in Europe. They just happen to be all working in Silicon Valley.
I think this is a good segue into your 2012 book, Why Nations Fail.

We're discussing how America could fall behind in the AI race.

And it's interesting.

You're kind of highlighting that on the one hand, there are some benefits to this top-down communist structure where you can set the agenda and the tone for the nation, the tone being we love engineers and we love people who build AI.

On the other hand, it can be a problem when you don't have the competitive forces of capitalism at work.

And it seems as though this is going to be sort of the defining question of our time, which one works. 100%.

I think that's very, very well put. Which

brings me to the question: why do nations fail? I mean, you've done Nobel Prize-winning research on this, the role of institutions.

Why do nations fail? It's mostly institutions. There are other factors, but many of these other factors, such as civil wars, are institutional as well.
And the role of institutions,

both formal rules

but also informal arrangements and norms, they become much more important when we're dealing with sectors that are forward-looking, innovative, require small players to scale up.

All of those are things that the U.S. was doing quite well.

You know, the United States

had an ecosystem of startups that

were extremely confident. You know, people, when they opened one — I mean, I knew many of them, I know many of them.

Very few people think, well, if I launch a startup and if I'm successful, will I be shut down by courts? Will I be crushed by my competitors?

Will I be able to get any contracts when my competitors are favored by the government? Nobody thought about that.

Nobody thought about that, partly because I think people were on the optimistic side, but largely because U.S. institutions had a pretty good track record of not doing that.

I think we're no longer sure.

There are favored companies and not favored companies. Courts are much less impartial.

Scaling up may become much harder when there is more uncertainty. So I think there are a bunch of issues where the institutional advantage that the US economy had is more of a question mark today.

And the problem is that when you mess up certain things, you pay the price right away.

If you mess up institutions, especially as they pertain to innovation, you don't pay the price right away, because the impact is not going to be felt for another five, ten years.

So if there are some fundamental negative effects from Trump's attack on

independent judiciary,

the costs of that will be realized not in 2027 or 2028, but probably in the 2030s. What is like the counterfactual to an institution-led society?

Like when we talk about it's the societies that have had strong institutions, those are the ones that have worked out, and that's what your research has explained to us. What is the alternative?

What is a society that is not led by institutions actually look like? What enters the void?

How does that lead to a less prosperous path? Well, I mean, I would think that every society has forms of institutions, but let's change your question to state-led versus not state-led. Okay.

So Soviet Union was state-led,

but too much state, not enough market, not enough decentralization, and that was horrible. But Somalia,

where clans ruled for several decades after the collapse of the state, is the other extreme. There were no state institutions that were functional.
There's no third-party enforcement.

There are anarchic ways in which even small problems would escalate. Neither of these two is great.

Probably Somalia is worse than the Soviet Union for economic activity, although the Soviet Union probably makes up for it by killing people more effectively.

So I think that happy medium where there is enough decentralization of especially economic activity, but also other things like communication, dissent, etc.

But there are state institutions that can be leveraged for doing good things, such as supporting innovation, providing public services, defense.

I think that happy medium is hard to maintain, but many societies did maintain it for several decades or longer, as the U.S. did.
Yeah, if I could sort of summarize my takeaway from your research — on this question of why institutions are even good for us — it's something along the lines of: they prevent extreme concentration of power into the hands of someone who may not know what they're doing.

That to me, is the defining difference as to why there are societies that work out and there are societies that don't, as

proven with the Soviet Union — you can argue maybe even today with Russia, where the institutions have been kind of taken over by Putin — and any other failed society.
Is that right?

Absolutely. And if you look at the data, you see that, for example,

economic performance under dictatorships

is not just worse than democracies, but it's also more variable.

And that reflects exactly what you're articulating, which is sometimes you're going to end up with a complete idiot as your dictator, and that's a real disaster.

But even very smart people can be very dangerous because their incentives are not aligned with the rest of society. So Stalin

didn't do a huge amount of damage because he was an idiot.

He was definitely no genius, but he did know some of the things he was doing in terms of killing people and creating an enormously powerful secret service, secret police. So,

somebody who has the wrong incentives and the wrong motivations, even when they are talented, could do a lot of damage. So a democratic system, via checks and balances, civil society mobilization, the ability to change and kick out politicians when they don't do what they're supposed to do, I think creates a lot of pathways for not falling victim to that.

Given all of that,

what do you make of what's happening in America today? What do you make of Trump's administration thus far? What do you think, what kinds of impacts do you think it will have on the economy?

I think institutions

are really the secret sauce for the United States.

The U.S. is one of the most innovative economies in the world, and that comes because people are fairly confident that they can do new things and succeed.

The American advantage in finance is also about institutions.

You know,

during the financial, global financial crisis, which was initiated, you know, largely speaking in the United States, what did foreign investors do? They put more of their money in the U.S. Why?

Because during a crisis, they

believe

U.S. assets, equities, corporate debt, government debt, are just much more reliable, much more liquid than the alternatives.
That's also institutional. You don't want to be subject to Chinese courts.

You want to be subject to American courts. So all of those

require a degree of independence in the judiciary and predictability and impartiality in the broader institutional rules.

Trump's agenda, which is not unique to Trump, but is an extreme version, is to build a much more executive presidency, meaning the president

has far greater power than what has been the norm, and the other branches of government and the agencies are not as powerful.

I think that comes with a serious risk that those institutional balances are going to be disrupted. And we're already seeing that in terms of

corruption, in terms of people in the administration enriching themselves.

But more importantly, I think a lot of uncertainty, for example, in the area of tariffs, what's going to happen next month to tariffs. That's the kind of uncertainty that strong institutions avoid.

If that happens, that secret sauce that has been so valuable to the American economy will start disappearing. We've been surprised at how well the economy, or at least the markets, have done, because

we see the same issues you do, a lack of rule of law, lack of competition, regulatory capture.

But meanwhile, it looks as if the American economy, and there are some warning signs, but we've been shocked at how well it continues to grind on. A, are you surprised? And B, do you think

that there's just a lag or

that we're not seeing kind of

the real issues here. Where is the state of the economy right now relative to what your perceptions would have been about it, given some of the concerns you've raised? All of the above.

So I think some of it is that there are lags.

Some of it is that AI optimism is masking

the problems.

Part of the reason why the economy and the stock market are booming is because there's a huge amount of AI investment.

Part of it is that, you know, if nothing else changed and you just did a tax cut that favored capital,

that would lead to stock market valuations increasing. And then the stock market is, of course, the incumbents.

So if there are changes in the economy such that startups start having a harder time, that may not be great for the economy, but it's not going to be as bad for the incumbents that will be protected from the startups.

So, there are a number of layers here that I think are intersecting.

But some of it, I think, is just that with tariffs, for example, we haven't seen their full effects on prices, we haven't seen their full effects on supply chains because everything is changing so rapidly.

I'm surprised. What I'm surprised by is, I'll tell you, Scott,

is that people haven't been spooked as much by uncertainty. So the belief among economists, macroeconomists, was that uncertainty spooks investment.
That hasn't happened.

We've had a lot of uncertainty, and that hasn't really translated into people saying, well, let me hold back on my investments because I don't know what the future is going to look like.

So that may be because of AI, it may be because of other things. I don't know.
When you look at America right now,

what are your top concerns?

I mean, you mentioned that institutions are the secret sauce of America.

And it appears that institutions are under attack in some form or another, whether that's the BLS or whether it's the Federal Reserve.

What are your major concerns for America right now?

There are several layers of institutions that worry me, starting from the top, not in terms of

importance.

I think I would have to think a little bit harder to give you importance weights. But first of all, our ability to control corruption,

self-enrichment, enrichment of friends and family, those have become much weaker.

Independence of bedrock

institutions, judicial branches, that's much weakened. Like FBI,

like it or not,

and there were many things not to like about it, but it had an ethos of independence and not being political. That's gone.

There is

a network of information-provision institutions, from the Government Accountability Office and the Office of Management and Budget to BLS, BEA, and the Census, and those are being weakened.

So our ability to track the economy is going to be much weaker. And then, you know, very fundamentally,

because

politics is becoming more conflictual and polarized, there are concerns that

we're going to be much less successful in the future in keeping politicians accountable.

We'll be right back, and for even more markets content, sign up for our newsletter at profgmarkets.com/subscribe.

All right, remember: the machine knows if you're lying. First statement: Carvana will give you a real offer on your car all online.
False. True, actually.
You can sell your car in minutes. False?

That's gotta be true again. Carvana will pick up your car from your door, or you can drop it off at one of their car vending machines.
Sounds too good to be true, so true. Finally, caught on.

Nice job. Honesty isn't just their policy, it's their entire model.
Sell your car today to Carvana.
Pickup fees may apply. Support for this show comes from Shopify.

Creating a successful business means you have to be on top of a lot of things. You have to have a product that people want to buy, focused brand, and savvy marketing.

But an often overlooked element in all of this is actually the business behind the business. The one that makes selling things easy.
For lots of companies, that business is Shopify.

According to their data, Shopify can help you boost conversions by up to 50% with their Shop Pay feature.

That basically means fewer people abandoning their online shopping carts and more people going through with a sale.

If you want to grow your business, your commerce platform should be built to sell wherever your customers are. Online, in store, in their feed, and everywhere in between.

Businesses that sell, sell more with Shopify. Upgrade your business and get the same checkout Mattel uses.
Sign up for your $1 per month trial period at shopify.com/voxbusiness, all lowercase.

Go to shopify.com/voxbusiness to upgrade your selling today.

These days, every business leader is under pressure to save money, but you can't crush the competition just by cutting costs. To win, you need to spend smarter and move faster.
You need Brex.

Brex is the intelligent finance platform that breaks the trade-off between control and speed with smart corporate cards, high-yield banking, and AI-powered expense management.

Join the 30,000 companies that spend smarter and move faster with Brex.

Learn more at brex.com/grow.

We're back with Prof G Markets. When we talk, it seems like these discussions are always about the U.S.
versus China. I'm living in London right now, and

you're originally from Turkey.

Do you see any other nations or academic institutions that you're very hopeful on?

If you were to make a bet on an economy right now, other than the US or China, which economies would you make a bet on? Well, I think the problem is, Scott, that

in many areas,

especially in AI, scale matters.

You know, Swiss academic institutions are doing great relative to their scale, but Switzerland as a country is never going to be a rival by itself

to the United States or to China, just in terms of resources,

both human and financial.

So you really need the European

academic system to come together. And it has some great strengths and it has many weaknesses.
It is not integrated, even at its best. The American system was much, much more integrated.

You know, institutions in California and Cambridge, you know, thousands of miles apart, they're still much more integrated than

those in different parts of Europe were.

And creating a hub like Silicon Valley — or other places; at some point it was in Boston — that's very, very important. And you need that kind of concentration of energy.

I don't think that it's impossible for Europe to achieve that. And there are

several people who are thinking about that. For example, Mario Draghi, of European Central Bank fame,

sort of recently came out with a report asking for much more investment in AI and digital technologies to sort of kick-start something at the European level.

You know, what will succeed — and when it's government-directed, there are many risks, especially when you have the national bureaucracies overlaid with European Commission bureaucracy.

So I wouldn't bet that it's going to work out anytime soon, but I wouldn't want to bet against it either.

Also, you know, I think the world would be much healthier if we had a true multipolar world in terms of research, in terms of new ideas, not just US versus China, but US, China, Europe, but also something from the developing countries.

There's tremendous energy in India,

in other parts of the world, about new tech,

new ideas, new entrepreneurship, new risk-taking. I think the world would be just much richer if those countries also had their intellectual fingerprints on these advances.

We think a lot about the well-being of young men on this program.

And something that — I won't speak for everyone — has me suitably freaked out is a collision between AI and synthetic relationships that could potentially further sequester and isolate young men from relationships with their parents, diminish their desire to develop romantic relationships, and establish friendships.

Are we catastrophizing here? I see this as a disaster with no guardrails. And I can't tell if it's just as I get older, Professor, I'm getting more angry and depressed, or if I see something here.

What are your thoughts about character AI, synthetic relationships?

I'm very worried, but not just for men. I mean, look, for social media, actually, the effect has been larger on women.

I think many of these technologies that completely transform social relations

have so many unforeseen consequences.

You know, despite many mistakes and misdeeds that, you know, Facebook, Meta, et cetera, Instagram did, they didn't set out to create a mental health crisis. That was a side effect.

Now, with some of these things like character AI, they're actually intending to create completely artificial bubbles. So yes, I would definitely be worried about that.

But the only thing I have as a small

source of sort of silver lining is that

they're not going to happen overnight.

So these models are still going to be very clunky. So we still have some time.
And that's why it's so important to ask the questions, what is it that we want from AI?

Do we want, you know, character AI and all of these synthetic relations and all of this isolation? How fast do we want automation? Or is there something we can do with this very promising technology? What field or what specific application do you feel holds the most promise for AI?

I think by its nature, AI could be revolutionary for many fields.

Certainly, the process of science is already benefiting. We can do many more things.

I think a lot of occupations that involve interaction with the real world in a problem-solving manner can really benefit from AI.

So that's like electricians, plumbers, blue-collar workers, because they're engaged in a series of problem-solving tasks.

And AI-based, context-specific, reliable information could be a game changer there. A lot of other non-scientific creative occupations could get a boost as well.

So there are actually a lot of different things that we could do. Healthcare and education are becoming a bigger and bigger share of national income everywhere.

And those are the two sectors where we've had very little productivity gains. So macroeconomic productivity gains are held back because of these services.

If we can do anything with AI to kickstart faster productivity growth in these sectors, that would be a game changer. You've been described as the Wilt Chamberlain of economics.

You know, you are sort of a powerhouse academic,

powerhouse economist. You've written books, you've taught at MIT, you've won a Nobel Prize.
The list goes on.

I'd love to, on the subject of institutions, I'd love to get your views on just the future of academic institutions right now. They've also been under attack in a variety of ways.

How do you feel about academia as a field? Do you feel optimistic, pessimistic? What does the future of academics actually look like?

An important part of the institutional fabric of this society is also to provide foundational inputs into innovation via the academic educational process, and that's also in danger.

Look, I think there were many things that were wrong with academia before Trump,

but the current attack on the autonomy of universities cannot be justified by

those

problems.

And I think it risks being more encompassing than those problems, whether you're going to call them DEI or whatever they are, ever could have been.

Because now funding and government control are all levers

that a centralized authority can exercise over universities and academics.

And whenever that has happened in other countries — for example, in my country of birth, in Turkey, or in Hungary — you see the costs have been quite extreme. Yeah, how does that play out?

How do those costs materialize? Well, first of all, I think

very important

projects that are forward-looking don't get funded. You know, today it's very difficult, for example, to get funding for mRNA vaccines.

And those were going to potentially play an important role in fighting cancer and other sort of diseases for which we could

develop new vaccine-based approaches.

And that's just the tip of the iceberg.

But even more importantly, I think when the autonomy of universities starts being threatened, people become more timid in their risk-taking and that risk-taking there involves going against established ways or things that powerful actors believe.

So I mentioned earlier on, in China, the top-down structure is

really very costly. I think one of the costs you can see very clearly is in academia.

In Chinese academia, despite the fact that you have very talented people selected into it and you have high degree of engineering or other expertise, it's much more political.

So nobody wants to rock the boat, which means people are not really exploring more controversial topics, whether it is in social sciences or in physical sciences.

And that environment, once it sets in, is very difficult to reverse. One of the great things about American academia was that sort of

very risk-taking attitude, exploring things that are at the edges, you know, that could become much harder.

Just observationally or anecdotally, from my perspective, I feel like academia and pursuing a career in academia has very recently gotten kind of a bad rap.

It's almost as if academia, the expert class, there's this stuffy expert class that is either captured by wokeness or by, you know, they're trapped in the ivory tower or whatever it is.

And I see this among my friends, many of whom were going to pursue careers in academia and then decided,

no, this world rewards entrepreneurs. This world rewards people who start companies and work for big tech.
I'm wondering if you've seen that yourself.

Have you seen that there is sort of a lesser interest in pursuing these kinds of careers? I have, absolutely. And I think

some of it is real and some of it is a perception. So indeed, I think

there is the right kind of technocracy and the wrong kind of technocracy.

As the world becomes more complex and there are more

areas in which expertise, deep knowledge matters, you need technocracy, but you need technocracy to be accountable and to be sort of in conversation with the rest of the community.

So in some sense, yes, the wrong kind of technocracy has emerged to some degree in many countries where

experts, whether they are in universities or the bureaucracy, sometimes think they know best.

But I think some of what you're describing is also an exaggerated feeling. In fact, some of the things that those experts were saying are true — for example, on vaccines or climate change.

It's just that they did not communicate it very well, and the whole thing became unnecessarily politicized. And it's not just in academia.

Look, for example, mRNA vaccines were not spearheaded by academia. They were spearheaded by startups and companies that thought they could really break the mold by doing something very different.

So it's that whole ecosystem that's being threatened right now.

It almost feels as if our culture is just so dead set on finding out who are the experts, who are the people who are telling us how it is, and can we find a way to discredit them and tell them that they're wrong.

And in a lot of ways, it seems ridiculous to say, oh, it's the university professors that are telling us. It's like, actually, these are not the people who are in control right now.

You're being told a whole set of things.

They've never been. I mean,

they may be accused of being arrogant, that's for sure.

But that arrogance was never matched with real power.

Exactly.

Yeah, I wish more people could understand that.

My final question. Do you have any advice for young academics, for people who are interested in pursuing a career in academia, who are maybe

asking those questions that we just discussed as someone who's had so much success in the field? What would your advice be?

You know, I get very frustrated when, you know, people say, oh, you know, you should work on this topic because this topic has,

you know, potential and there might be demand here. No, I think the real secret sauce in academia is that you should work on whatever you're passionate about.
And the best academic research comes when

it's really sort of your own passions that you are following.

And that's even more important when academia is under attack because some of the great research can be done even when budgets are slashed, because you're just committed to it, and it's not about the biggest lab, it's not about the best measurement instruments, but it's about doggedly pursuing something that you feel is right and you can prove it.

Daron Acemoglu is an Institute Professor at MIT and co-director of the university's Stone Center on Inequality and Shaping the Future of Work.

He is also the author of six books, including the New York Times best-selling Why Nations Fail and Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity.

He was awarded the Nobel Prize in Economic Sciences in 2024 for his studies of how institutions are formed and affect prosperity.

Professor Acemoglu, we really appreciate your time. And I think you're the first Nobel Prize-winning guest we've had on the podcast.
So it's a win for us, too. My pleasure, Edmund.

Thank you very much, Scott. It's great, and I'm very happy to have been able to join you guys.
It's our pleasure. Congrats again on your good work.

This episode was produced by Claire Miller and Alison Weiss and engineered by Benjamin Spencer. Our research team is Dan Shallon, Isabella Kinsell, Kristen O'Donoghue, and Mia Silverio.

Drew Burroughs is our technical director and Catherine Dillon is our executive producer.

Thank you for listening to Prof G Markets from Prof G Media. If you liked what you heard, give us a follow and join us for a fresh take on markets on Monday.


When will AI finally make work easier? How about today?

Say hello to Gemini Enterprise from Google Cloud, a simple, easy-to-use platform letting any business tap the best of Google AI.

Retailers are already using AI agents to help customers reschedule deliveries all on their own. Bankers are automating millions of customer requests so they can focus on more personal service.

And nurses are getting automated reports, freeing them up for patient care. It's a new way to work.
Learn more about Gemini Enterprise at cloud.google.com.

Support for this show comes from Salesforce. Today, every team has more work to do than resources available.
But digital labor is here to help.

AgentForce, the powerful AI from Salesforce, provides a limitless workforce of AI agents for every department.

Built into your existing workflows and your trusted customer data, AgentForce can analyze, decide, and execute tasks autonomously, letting you and your employees save time and money to focus on the bigger picture, like moving your business forward.

AgentForce, what AI was meant to be. Learn more at salesforce.com/agentforce.

Nobody knows your customers better than your team, so give them the power to make standout content with Adobe Express.

Brand kits make following design rules a breeze, and Adobe quality templates make it easy to create pro-looking flyers, social posts, presentations, and more.

You don't have to be a designer to edit campaigns, resize ads, and translate content. Anyone can in a click.
And collaboration tools put feedback right where you need it.

See how you can turn your team into a content machine with Adobe Express, the quick and easy app to create on-brand content. Learn more at adobe.com/express/business.