Dylan Patel & Jon (Asianometry) – How the Semiconductor Industry Actually Works

October 02, 2024 2h 9m

A bonanza on the semiconductor industry and hardware scaling to AGI by the end of the decade.

Dylan Patel runs Semianalysis, the leading publication and research firm on AI hardware. Jon Y runs Asianometry, the world’s best YouTube channel on semiconductors and business history.

* What Xi would do if he became scaling pilled

* $1T+ in datacenter buildout by end of decade

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Sponsors:

* Jane Street is looking to hire their next generation of leaders. Their deep learning team is looking for FPGA programmers, CUDA programmers, and ML researchers. To learn more about their full time roles, internship, tech podcast, and upcoming Kaggle competition, go here.

* This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.

If you’re interested in advertising on the podcast, check out this page.

Timestamps

00:00:00 – Xi's path to AGI

00:04:20 – Liang Mong Song

00:08:25 – How semiconductors get better

00:11:16 – China can centralize compute

00:18:50 – Export controls & sanctions

00:32:51 – Huawei's intense culture

00:38:51 – Why the semiconductor industry is so stratified

00:40:58 – N2 should not exist

00:45:53 – Taiwan invasion hypothetical

00:49:21 – Mind-boggling complexity of semiconductors

00:59:13 – Chip architecture design

01:04:36 – Architectures lead to different AI models? China vs. US

01:10:12 – Being head of compute at an AI lab

01:16:24 – Scaling costs and power demand

01:37:05 – Are we financing an AI bubble?

01:50:20 – Starting Asianometry and SemiAnalysis

02:06:10 – Opportunities in the semiconductor stack



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Listen and Follow Along

Full Transcript

Today I'm chatting with Dylan Patel, who runs SemiAnalysis, and Jon, who runs the Asianometry YouTube channel.

Does he have a last name?

No, I do not. No, I'm just kidding.
I'm John Y.

What's right, is it?

I'm John Y.

Wait, why is it only one letter?

Because Y is the best letter.

Why is your face covered?

Why not?

No, seriously, why is it covered?

Because I'm afraid of watching myself get older and fatter over the years. But seriously, it's like anonymity, right? Anonymity. Okay. Yeah.

By the way, did you know what Dylan's middle name is? Actually, no, I don't know. He told me, but... What's my father's name? I'm not going to say it, but I remember. You can say it. It's fine.
Sanjay? Yes. What's his middle name? Sanjay? That's right. Wow.
So I'm Dwarakash Sanjay Patel. He's Dylan Sanjay Patel.
It's like literally my white name. Wow.
It's unfortunate my parents decided between my older brother and me to give me a white name. It could have been Dwarakash Sanjay.
You know how amazing it would have been if we had the same name?

Like butterfly effect and all.

We probably would have all turned out the same way.

Maybe it would have been even closer.

We would have met each other sooner, you know?

Who else is named Dwarkesh Sanjay?

Don't answer that question, John. That's bad for AI safety.
I would basically be contacting every foreigner. I would be contacting every Chinese national with family back home and saying, I want information.
I want to know your recipes. I want to know, I want contacts.
What kind, like AI lab foreigners or hardware foreigners? Honeypotting OpenAI? I would basically... this is totally off topic, but I was doing a video about Yugoslavia's nuclear program.
Nuclear weapons program. It started with absolutely nothing.
One guy from Paris showed up, and who knows what he had done before. He knew a little bit about making atomic weapons. And he was like, okay, well, I need help. And the state secret police was like, we will get you everything.
And for a span of four years, they basically drew up a list. What do you need? What do you want? What are you going to do? What is it going to be for? And the state police just got everything.
If I was running a country and I needed catch up on that, that's the sort of thing that I would be doing. So, okay, let's talk about the espionage.
So, what is the most valuable piece of, if you could have this blueprint, like this one megabyte of information, do you want it from TSMC? Do you want it from NVIDIA? Do you want it from OpenAI? What is the first thing you would try to steal? I mean, I guess you have to stack every layer, right? And I think the beautiful thing about AI is because it's growing so freaking fast, every layer is being stressed to some incredible degree. Of course, China has been hacking ASML for over five years, and ASML is kind of like, oh, it's fine.
The Dutch government's really pissed off, but it's fine, right? I think they already have those files, right, in my view. It's just a very difficult thing to build, right? I think the same applies for, like, fab recipes, right? They can poach Taiwanese nationals – it's not that difficult, right? Because TSMC employees do not make absurd amounts of money.
You can just poach them and give them a much better life. And they have, right? A lot of SMIC's employees are TSMC, you know, Taiwanese nationals, right? A lot of the really good ones, high up ones especially, right? And then you go up like the next layers of the stack and it's like, I think, yeah, of course there's tons of model secrets.
But then like, you know, how many of those model secrets do you not already have and just haven't deployed or implemented or organized, right? That's the one thing I would say: China just hasn't – they clearly are still not scale-pilled, in my view. So these people, I don't know, if you could hire them, it would probably be worth a lot to you, right? Because you're building a fab that's worth tens of billions of dollars.
And this talent is like, they know a lot of shit. How often do they get poached? Do they get poached by like foreign adversaries or do they just get poached by other companies within the same industry, but in the same country? And then, yeah, well, like, why doesn't that like sort of drive up their wages? I think it's because it's very compartmentalized.
And I think, like, back in the 2000s, before SMIC got big, it was actually much more kind of open, more flat. I think after that, there was like – after the Liang Mong Song episode and after all the Samsung issues and after SMIC's rise, when they literally saw – I think you should tell that story, actually.
The TSMC guy that went to Samsung and SMIC and all that. I think you should tell that story.
There are two stories. There's a guy, he ran a semiconductor company in Taiwan called Worldwide Semiconductor.
And this guy, Richard Chang, was very religious. I mean, all the TSMC people are pretty religious.
But like, he in particular was very fervent and he wanted to bring religion to China. So after he sold his company to TSMC – a huge coup for TSMC – he worked there for about eight or nine months and he was like, all right, I'll go to China.
Because back then, the relations between China and Taiwan were much different. And so he goes over there.
Shanghai says, we'll give you a bunch of money. And then Richard Chang basically recruits a whole bunch of people.
It's like a conga line of Taiwanese talent. They just get on the plane, they're flying over.
And generally, that's actually a lot of like acceleration points within China's semiconductor industry. It's from talent flowing from Taiwan.
And then the second thing was like Liang Mong Song. Liang Mong Song was a, is a nut.
I've not met him, but I've met people who work with him, and they say he is a nut.
He is probably on the spectrum and he does not care about people. He does not care about business.
He does not care about anything. He wants to take it to the limit.
The only thing. That's the only thing he cares about.
He worked at TSMC. Literal genius.
300 patents or whatever. 285.
Goes, works all the way to like the top, top tier. And then one day he decides he loses out on some sort of power game within TSMC and gets demoted.
And he was like head of R&D, right, or something? He was like one of the top R&D. He was like second or third place.
And it was for the head of R&D position, basically. Correct.
For the head of R&D position. He's like, I can't deal with this.
And he goes to Samsung and he steals a whole bunch of talent from TSMC. Literally, again, conga line goes and just emails people, say, we will pay.
At some point, some of these people were getting paid more than the Samsung chairman. Which is not really comparable, but you know what I mean.
The Samsung chairman is usually part of the family that owns Samsung, correct? Him, okay. Yeah, so it's kind of relevant.
So he goes over there and he's like, we will make Samsung into this monster. Forget everything, forget all of the stuff you've been trying to do, the incremental stuff. Toss that out.
We are going to the leading edge, and that is it. They go to the leading edge.
The guys like— They win Apple's business. They win Apple's business.
They win it back from TSMC. Did they win it back from TSMC? They had a portion of the— They had a big portion of it.
And then TSMC, Morris Tang, is like, at this time, was running the company. And he's like, I'm not letting this happen.
Because that guy, toxic to work for as well, but also goddamn brilliant. And also, like, very good at motivating people.
He's like, we will work literally day or night. He sets up what is called the Nightingale Army. They split off a bunch of people and they say: you are working the R&D night shift. There is no rest at the TSMC fab. As you go in, there will be a day shift going out. In Taiwan they say that as you work like that, you're sacrificing your liver, so they called it the liver buster.

So they basically did this Nightingale Army for, like, a year or two years.

They finished FinFET.

They basically just blow away Samsung.

And at the same time, they sue Liang Mong Song directly for stealing trade secrets.

Samsung basically separates from Liang Mong Song, and Liang Mong Song goes to SMIC.

And so Samsung, like, at one point was better than TSMC.

And then, yeah, he goes to SMIC and SMIC is now better than, well, or not better, but they caught up rapidly as well after. Very rapid.
That guy's a genius. That guy's a genius.
I mean, I don't even know what to say about him. He's like 78 and he's like beyond brilliant, does not care about people.
What does research to make the next process node look like? Is it just a matter of, like, 100 researchers go in, they do the next N plus one, then the next morning the next 100 researchers go in? It's experiments. They have a recipe.
Every recipe, a TSMC recipe, is the culmination of long, long years of research, right? It's highly secret. And the idea is that you look at one particular part of it and you say, run an experiment.
Is it better or not? Kind of a thing like that. It's a multivariable problem: every single tool, sequentially processing the whole thing, and you turn the knobs up and down on every single tool.
You can increase the pressure on this one specific deposition tool. And what are you trying to measure? Does it increase yield? It's yield, it's performance, it's power.
It's not just better or worse. It's a multivariable search space.
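To make the knob-turning framing concrete, here is a minimal sketch of a one-knob-at-a-time recipe search, assuming a made-up three-knob recipe and a fake measurement function (every name, range, and the objective below are illustrative assumptions, not an actual fab flow):

```python
import random

# Hypothetical knobs on a few sequential tools in a process recipe.
# The names, ranges, and the fake "fab response" below are illustrative only.
recipe = {"dep_pressure_torr": 2.0, "etch_temp_c": 60.0, "litho_dose_mj": 30.0}
step = {"dep_pressure_torr": 0.1, "etch_temp_c": 2.0, "litho_dose_mj": 0.5}

def run_lot(r):
    """Stand-in for processing a test lot and measuring it.

    A real fab measures yield, performance, and power with metrology; here we
    fake a noisy multivariable response so the loop has something to optimize
    (higher score = better)."""
    return (
        -((r["dep_pressure_torr"] - 2.3) ** 2)
        - 0.01 * (r["etch_temp_c"] - 72.0) ** 2
        - 0.05 * (r["litho_dose_mj"] - 33.0) ** 2
        + random.gauss(0, 0.02)  # metrology noise
    )

# Turn one knob at a time; keep the change only if the measured lot improves.
# Real development uses designed experiments and statistics, but the spirit is
# the same: many small sequential experiments over a huge search space.
best = run_lot(recipe)
for _ in range(200):
    knob = random.choice(list(recipe))
    trial = dict(recipe)
    trial[knob] += random.choice([-1, 1]) * step[knob]
    score = run_lot(trial)
    if score > best:
        recipe, best = trial, score

print(recipe, round(best, 3))
```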
And what do these people know such that they can do this? Is it that they understand the chemistry and physics? So it's a lot of intuition, but yeah, it's PhDs in chemistry, PhDs in physics, PhDs in, uh, EE. Brilliant, genius people.
And they all just, and they don't even know about like the end chip a lot of times. It's like, oh, I am an etch engineer.
And all I focus on is how hydrogen fluoride etches this, right? And that's all I know.

And, like, if I do it at different pressures, if I do it at different temperatures, if I do it with a slightly different recipe of chemicals, it changes everything.

I remember, like, someone told me this when I was speaking.

Like, how did America lose the ability to do this sort of thing, like etching with hydrofluoric acid and all of that? He told me basically it's very apprentice-based – master, apprentice. You know, in Star Wars, the Sith – there's only ever a master and an apprentice, right? Master, apprentice, master, apprentice. It used to be that there is a master, there's an apprentice, and they pass on this secret knowledge.
This guy knows nothing but etch, nothing but etch. Over time, the apprentices stopped coming.
And then in the end, the apprentices moved to Taiwan. And that's the same way it's still run.
Like you have the NTU and NTHU, Tsinghua University, National Tsinghua University. There's a bunch of masters.
They teach apprentices and they just pass this secret knowledge down. Who are the most AGI-pilled people in the supply chain? Is there anybody? I got to have my phone call with Colette right now.
Okay, go for it. Sorry, sorry.
Could we mention that the podcast and NVIDIA is calling Dylan to update him on the earnings call? Well, it's not exactly that. Go for it, go for it.
Dylan is back from his call with Jensen Huang. It was not with Jensen, Jesus.
What did they tell you, huh? What did they tell you about next year's earnings? No, it was just color around like a Hopper Blackwell and like margins. It's like quite boring stuff.
I'm sure. For most people, I think it's interesting though.
I guess we could start talking about NVIDIA. You know what? No, I think we should go back to China.
There's like a lot of points there. All right.
We covered the chips themselves. How do they get, like, the 10 gigawatt data center up? What else do they need? I think there is a true question of how decentralized do you go versus centralized, right? If you look in the US, as far as labs and such – OpenAI, XAI, Anthropic, then Microsoft having their own effort despite having their partner, and then Meta, and you go down the list – there's quite a decentralization of efforts, plus all the interesting startups that are out there doing stuff. Today in China it is also still quite decentralized, right? It's not like, Alibaba, Baidu, you are the champions, right? You have DeepSeek – like, who the hell are you, does the government even support you? – doing amazing stuff, right? If you are Xi Jinping and scale-pilled, you must now centralize the compute resources, right? Because you have sanctions on how many NVIDIA GPUs you can get in.
Now, they're still north of a million a year, right? Even post-October last year sanctions. We still have more than a million H20s and other Hopper GPUs getting in through other means, but legally like the H20s.
And then on top of that, you have your domestic chips, right? But that's less than a million chips. So then when you look at it, it's like, oh, well, we're still talking about a million chips.
The scale of data centers people are training on today slash over the next six months is 100,000 GPUs, right? OpenAI, XAI, right? These are like quite well documented and others. But in China, they have no individual system of that scale yet, right? So then the question is like, how do we get there? You know, no company has had the centralization push to have a cluster that large and train on it yet, at least publicly like well known.
And the best models seem to be from a company that has got like 10,000 GPUs, right? Or 16,000 GPUs, right? So it's not quite as centralized as the US companies are. And the US companies are quite decentralized.
If you're Xi Jinping and you're scale-pilled, do you just say XYZ company is now in charge and every GPU goes to one place? And then you don't have the same issues as the US, right? In the US, we have a big problem with like being able to build big enough data centers, being able to build substations and transformers and all this that are large enough in a dense area. China has no issue with that at all because their supply chain adds like as much power as like half of Europe every year, right? Like, or some absurd statistics, right? So they're building transformer substations, they're building new power plants constantly.
So they have no problem with like getting power density. And you go look at like Bitcoin mining, right? Around the Three Gorges Dam, at one point, at least there was like 10 gigawatts of like Bitcoin mining estimated, right? Which, you know, we're talking about, you know, gigawatt data centers are coming over, you know, 26, 27 in the, or 26-ish in the US, or 27, right? You know, sort of, this is an absurd scale relatively, right? We don't have gigawatt data centers, you know, ready, but like China could just build it in six months, I think, around the Three Gorges Dam or many other places, right? Because they have the ability to do the substations, they have the power generation capabilities.
Everything can be like done like a flip of a switch, but they haven't done it yet. And then they can centralize the chips like crazy, right? Now, oh, a million chips that NVIDIA is shipping in Q3 and Q4, the H20, let's just put them all in this one data center.
They just haven't had that centralization effort.

Well, you can argue that the more you centralize it, the more you start building this monstrous thing within the industry, the more you start getting attention on it. And then suddenly, you know, lo and behold, you have a little worm in there where you're doing your big training run. Oh, this GPU, off. Oh, this GPU. Oh, no.

Oh, no. I don't know if it's like that easy to hack.
Is that a Chinese accent, by the way? Just to be clear, John is East Asian. He's Chinese.
I am of East Asian descent. Half Taiwanese, half Chinese.
Right. That is right.
I don't know if that's as simple as that to like... Because training systems are like water-gated? Firewalled? What is it called? Not firewalled.
I don't know. There's a word for that where they're not like.
Air gapped. Air gapped.
I think they're Chinese walled. You're going through like all the four elements of an avatar.
They're not. Earth protected.
Get water. Fire.
If you're Xi Jinping and you're scale-pilled. Fuck the airbenders.
Fuck the firebenders, you know. We got the avatar, right? Like, you have to build the avatar.
Okay. I think that's possible.
The question is, like, does that slow down your research? Do you, like, crush, like, cracked people like Deep Seek who are, like, clearly, like, not being, you know, influenced by the government and put some, like, idiot, you know, idiot bureaucrat at the top. Suddenly he's all thinking about like, you know, all these politics and he's trying to deal with all these different things.
Suddenly you have a single point of failure and that's a, that's, that's bad. But I mean, in the, in the, on the flip side, right? Like there is like obviously immense gains from being centralized because of the scaling loss.

Right. And then the flip side is compute efficiency is obviously going to be hurt, because you can't experiment and have different people lead and try their own efforts as much if you're more centralized.
So it's like there is a balancing act there. The fact that they can centralize – I didn't think about this, but that is actually important, because even if America as a whole is getting millions of GPUs a year, the fact that any one company is only getting hundreds of thousands or less means that there's no one player who can do a training run as big in America as if China as a whole decides to do one together.
The 10 gigawatts you mentioned near the Three Gorges Dam – how widespread is it? Is it a state? Is it, like, one wire? I think between not just the dam itself but also all of the coal, some nuclear reactors there I believe as well, and renewables like solar and wind – between all of that in that region, there is an absurd amount of concentrated power that could be built out. I'm not saying it's like one button, but it's like, hey, within an X mile radius, right? That's the more correct way to frame it, and that's how the labs are also framing it in the US. Like, if they started right now, how long does it take to build the biggest AI data center in the world? Actually, the other thing is, could we even notice it? I don't think so, because the amount of factories that are being spun up, the amount of other construction, manufacturing, et cetera, that's being built – a gigawatt is actually like a drop in the bucket, right? Like a gigawatt is not a lot of power.
10 gigawatts is not an absurd amount of power, right? It's okay, yes, it's like hundreds of thousands of homes, right? Yeah, millions of people, but it's like, you got 1.4 billion people, you got like most of the world's like extremely energy intensive, like refining and like, you know, rare earth refining and all these manufacturing industries are here. It would be very easy to hide it.
Really? It would be very easy to just, like, shut down – I think the largest aluminum mill in the world is there, and it's, like, north of 5 gigawatts alone. It's like, oh, could we tell if they stopped making aluminum there and instead started making AI there? Like, I don't know if we could tell, right? Because they could also just easily spin up, like, 10 other aluminum mills, make up for the production, and be fine, right? So, like, there's many ways for them to hide compute as well.
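For scale, the gigawatts-to-homes comparison is simple arithmetic; a hedged sketch, where the average household draw is an assumed figure rather than one from the conversation:

```python
# Back-of-envelope: how many average homes' worth of power is a gigawatt?
# Assumes roughly 1.2 kW of average continuous draw per US household
# (an illustrative figure, not one from the episode).
avg_home_kw = 1.2

for datacenter_gw in (1, 5, 10):
    homes = datacenter_gw * 1_000_000 / avg_home_kw  # 1 GW = 1,000,000 kW
    print(f"{datacenter_gw} GW ~= {homes:,.0f} homes of average draw")

# ~830,000 homes per GW, ~8.3 million at 10 GW -- a lot, yet still a modest
# slice of a grid serving 1.4 billion people plus heavy industry on the
# scale of a ~5 GW aluminum smelter.
```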
To the extent that you could just take out a five gigawatt aluminum refining center and like build a giant data center there, then I guess the way to control Chinese AI has to be the chips because like everything else, so like how do you like, just like walk me through how many chips do they have now? How many will they have in the future? What will the, like, how many, is that in comparison to US and the rest of the world? Yeah, so in the world, I mean, the world we live in is they are not restricted at all in like the physical infrastructure side of things in terms of power, data centers, et cetera, because their supply chain is built for that, right? And it's pretty easy to pivot that. Whereas the U.S.
adds so little power each year, and Europe loses power every year. The Western sort of industry for power is non-existent in comparison, right? But on the flip side is, quote unquote, Western, including Taiwan, chip manufacturing is way, way, way, way, way larger than China's, especially on leading edge where China theoretically has, you know, depending on the way you look at it, either zero or a very small percentage share, right? And so there you have, you have wafer, you have, you have equipment, wafer manufacturing, and then you have advanced packaging capacity, right? And where the U.S.
can control China, right? So advanced packaging capacity is kind of shot, because the largest advanced packaging company in the world was Hong Kong headquartered. They just moved to Singapore, but that's effectively in a realm where the US can't sanction it, right? The majority of these other companies are in similar places, right? So advanced packaging capacity is very hard to restrict, right? Advanced packaging is useful for stacking memory, stacking chips with CoWoS, right? Things like that.
And then the step down is wafer fabrication. There is immense capability to restrict China there.
And despite the U.S. making some sanctions, China in the most recent quarters was like 48% of ASML's revenue.
And like 45% of applied materials, and you just go down the list. So it's like, obviously it's not being controlled that effectively, but it could be on the equipment side of things.
The chip side of things is actually being controlled quite effectively, I think, right? Like, yes, there is like shipping GPUs through Singapore and Malaysia and other countries in Asia to China. But, you know, the amount you can smuggle is quite small.
And then the sanctions have limited the chip performance to a point where it's like, this is actually kind of fair, but there is a problem with how everything is restricted. Because you want to be able to restrict China from building their own domestic chip manufacturing industry that is better than what we ship them.
You want to prevent them from having chips that are better than what we have. And then you want to prevent them from having AIs better.
The ultimate goal being, you know, and if you read the restrictions, like very clear, it's about AI. Even in 2022, which is amazing, like at least the commerce department was kind of AI pilled.
It was like, you want to restrict them to having AIs worse than ours, right? So starting on the right end, it's like, okay, well, if you want to restrict them from having better AIs than us, you have to restrict chips, okay? If you want to restrict them from having chips, you have to let them have at least some level of chip that is better than what they can build internally. But currently, the restrictions are flipped the other way, right? They can build better chips in China than the chips NVIDIA or AMD or Intel are allowed to sell to China.
And so there's sort of a problem there, in that the equipment that is shipped can be used to build chips that are better than what the Western companies can actually ship them. John, Dylan seems to think the export controls are kind of a failure.
Do you agree with him? That is a very interesting question because I think it's like... Why, thank you.
Like, what do you... Dwarkesh, you're so good.
Yeah, Dwarkesh, you're the best. I think failure is a tough word to say, because it's like, what are we trying to achieve, right? Like, they're talking about AI, right? Yeah.
When you do sanctions like that, you need like such a deep knowledge of the technologies. You know, just taking lithography, right? If your goal is to restrict China from building chips and you just boil it down to like, hey, lithography is 30% of making a chip or 25%.
Cool, let's sanction lithography. Okay, where do we draw the line? Okay, let me ask.
Let me ask. Let me figure out where the line is.
And if I'm a bureaucrat, if I'm a lawyer at the commerce department or what have you, well, obviously I'm going to go talk to ASML, and ASML is going to tell me this is the line, because they know – and because they're really looking at what's going to cost them the most money.
Right. And then they constantly say like, if you restrict us, then China will have their own industry.
Right. And, and the way I like to look at it is like chip manufacturing is like, like 3D chess or like, you know, a massive jigsaw puzzle in that if you take away one piece, China can be like, oh, that's the piece.
Let's put it in. Right.
And currently these export restrictions – year by year they keep updating them, ever since like 2018 or so, 2019, right? When Trump started, and now Biden's accelerated them. They haven't just taken a bat to the table and broken it, right? It's like, let's take one jigsaw piece out, walk away.
Oh shit. Let's take two more out.
Oh shit. Right? Instead, you either have to go kind of full bat to the freaking table, or chill out, right? And, like, you know, let them do whatever they want.
Because the alternative is everything is focused on this thing and they make that. And then now when you take out another two pieces, like, well, I have my domestic industry for this.
I can also now make a domestic industry for these. Like you go deeper into the tech tree or what have you.
It's an art, right? In the sense that there are technologies out there that can compensate. The belief that lithography is a linchpin within the system – it's not exactly true, right? At some point, if you keep pulling a thread, other things will start developing to kind of close that loop.
And like, I think it's, it's, it is, that's why I say it's an art, right? I don't think you can stop Chinese semiconductor industry, the semiconductor industry from progressing. I think that's basically impossible.
So the question is, the Chinese government believes in the primacy of semiconductor manufacturing. They used, they've believed it for a long time, but now they really believe it, right? To some extent, the sanctions have made China believe in the importance of the semiconductor industry more than anything else.
So from an AI perspective, what's the point of export controls then? Because even if like, if they're going to be able to get these, like if you're like concerned about AI, and they're going to be able to build... Well, they're not centralized though, right? So that's the big question is, are they centralized? And then also, you know, there's the belief.
I don't really, I'm not sure if I really believe it, but like, you know, prior podcasts, there have been people who talked about nationalization, right? In which case, okay, now you're talking about- Why are you referring to this ambiguously? Well, I think there's a couple- My opponent. I love my opponent, you know? No, but I think there have been a couple where people have talked about nationalization, right? But like if you have nationalization, then all of a sudden you aggregate all the flops.
It's like, no, there's no fucking way, right? China can be centralized enough to compete with each individual U.S. lab.
They could have just as many flops in '25 and '26 if they decided they were scale-pilled, right? Just from foreign chips, for an individual model. In 2026, they can train a 1e27 – like, they can release a 1e27 model by 2026.
Yeah, and then have a 1e28 model in the works, right? They totally could, just with foreign chips supplied, right? It's just a question of centralization. Then the question is, do you get as much innovation and compute efficiency wins developed when you centralize? Or does Anthropic and OpenAI and XAI and Google all developing things, with secrets kind of shifting a little bit in between each other, end up being a better outcome in the long term versus, like, nationalization in the US, right? If that's possible, and what happens there. But China could absolutely have it in '26, '27 if they just have the desire to.

And that's just from foreign chips, right? And then domestic chips are the other question, right? 600,000 of the Ascend 910B, which is roughly like 400 teraflops or so. You know, so if they put them all in one cluster, they could have a bigger model than any of the labs next year, right? I have no clue where all the Ascend 910Bs are going, right? But I mean, well, there's rumors that they are being divvied up between the major players – Alibaba, ByteDance, Baidu, et cetera.
And next year, more than a million. And it's possible that they actually do have, you know, a 1e30 model before the US, because data centers are not as big of an issue.
10 gigawatt data center is going to be, I don't think anyone is even trying to build that today in the U.S., like even out to 27, 28, really they're focusing on like linking many data centers together. So there's a possibility that like, hey, come 2028, 2029, China can have more flops delivered to a single model, even ignoring sort of, even once the centralization question is solved, right? Because that's clearly not happening today, uh, for either party.
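For a rough sense of how cluster size maps to figures like 1e27, here is a hedged back-of-envelope sketch; the chip count, per-chip throughput, utilization, and run length are illustrative assumptions, not numbers from the conversation:

```python
# Total training compute ~= chips * per-chip FLOP/s * utilization * seconds.
# All four inputs below are illustrative assumptions, not episode figures.
chips = 100_000            # hypothetical cluster size
flops_per_chip = 1e15      # ~1 PFLOP/s dense BF16, roughly H100-class (assumed)
utilization = 0.4          # assumed model FLOPs utilization
days = 90                  # assumed length of the training run

total_flops = chips * flops_per_chip * utilization * days * 86_400
print(f"{total_flops:.1e} FLOPs")  # ~3e26

# Getting to ~1e27 needs some multiple of this: more chips, faster chips,
# better utilization, or a longer run -- which is why 100k-GPU-class
# clusters are the relevant unit for these estimates.
```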
Um, and I would bet if AI is like as important as, you know, you and I believe that they will centralize sooner than the West does. Yeah.
So there is a possibility, right? Yeah. It seems like a big question then is how much SMIC could increase production – like, how many more wafers could they make, and how many of those wafers could be dedicated to the 910? Because I assume there's other things they want to do with these semiconductors.
So there's two parts there, right? The way the US has sanctioned SMIC is really stupid, in that they've sanctioned a specific site rather than the entire company. And so, therefore, SMIC is still buying a ton of tools that can be used for their 7 nanometer and their, call it, 5.5 or 6 nanometer process for the 910C, which releases later this year, right? They can build as much of that as long as it's not in Shanghai, right? And Shanghai has anywhere from 45 to 50 high-end immersion lithography tools, is what's believed by intelligence as well as many other folks.
That roughly gives them as much as 60,000 wafers a month of 7 nanometer, but they also make their 14 nanometer in that fab. Right.
Um, and so the belief is that they actually only have about like 25 to 35,000 of seven nanometer capacity, um, wafers a month, right? Yeah. Doing the math, right.
Depending on the chip die size and all these things – because they probably also use chiplets and stuff, so they can get away with using fewer leading edge wafers. But then their yields are bad.
You can roughly say something like 50 to 80 good chips per wafer with their bad yield, right? With their bad yield. Why do they have bad yield? Because it's hard, right? Everyone knows the number, right? It's like 1,000 steps, and even if you're 99% for each – like 98% or 99% – in the end you'll still get something like a 40% yield overall. Interesting.
I think even if it's six sigma of perfection and you have your 10,000-plus steps, the yield is still dog shit by the end, right? Like, yeah.
That is a scientific measure. Dog shit percent.
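The intuition is just compounding: overall die yield is roughly the per-step yield raised to the number of steps. A minimal sketch of that arithmetic (the step counts and per-step yields are illustrative; note that 99% per step over 1,000 steps gives essentially zero, so a ~40% overall yield implies per-step yields closer to 99.9%):

```python
# Cumulative die yield compounds multiplicatively across process steps:
#   overall_yield = per_step_yield ** num_steps
def overall_yield(per_step_yield: float, num_steps: int) -> float:
    return per_step_yield ** num_steps

# Illustrative values (assumptions, not figures confirmed in the episode):
for y, n in [(0.99, 1_000), (0.999, 1_000), (0.9999, 10_000)]:
    print(f"{y:.2%} per step over {n} steps -> {overall_yield(y, n):.2%} overall")

# 99.00% per step over 1000 steps  -> ~0.00% overall (essentially nothing)
# 99.90% per step over 1000 steps  -> ~36.8% overall
# 99.99% per step over 10000 steps -> ~36.8% overall
```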

Yeah, yeah. As a multiplicative effect, right? Yeah. So, yields are bad because they have their hands tied behind their back, right? Like, A, they are not getting to use EUV – whereas on 7 nanometer, Intel never used EUV, but TSMC eventually started using EUV; initially they used DUV, right?

Doesn't that mean the export controls succeeded? Because they have bad yield because they have to use, like...

Success? Again, they still are determined. Success would mean they stop. They're not stopping.
Going back to the yield question, right? Like, oh, theoretically, 60,000 wafers a month times 50 to 100 yielded dies per wafer? Holy shit, that's millions of GPUs right now. What are they doing with most of their wafers? They still have not become scale-pilled.
So they're still throwing them at like, let's make 200 million Huawei phones. Right.
Like, oh, OK, cool. I don't care.
Right. Like as as the West, you don't care as much, even though like Western companies will get screwed like Qualcomm and like, you know, and MediaTek Taiwanese companies.
So obviously there's that. And the same applies to the US.
But when you when you flip to like, sorry, I don't fucking know what I was gonna say. Nailed it.
We're keeping this in. That's fine.
That's fine. That's fine.
Hey, everybody. I am super excited to introduce our new sponsors, Jane Street.
They're one of the world's most successful trading firms. I have a bunch of friends who either work there now or have worked there in the past.
And I have very good things to say about those friends. And those friends have very good things to say about Jane Street.
Jane Street is currently looking to hire its next generation of leaders. As I'm sure you've noticed, recent developments in AI have totally changed what's possible in trading.
They've noticed this too, and they've stacked a scrappy, chaotic new team with tens of millions of dollars of GPUs to discover signal that nobody else in the world can find. Most new hires have no background in trading or finance.
Instead, they come from math, CS, physics, and other technical fields. Of particular relevance to this episode, their deep learning team is hiring CUDA programmers, FPGA programmers, and ML researchers.
Go to janestreet.com slash Dwarkesh to learn more. And now back to Dylan and John.
2026, if they're centralized, they can have as big training runs as any one US company. Oh, the reason why I was bringing up Shanghai, they're building seven nanometer capacity in Beijing.
They're building five nanometer capacity in Beijing, but the US government doesn't care. And they're importing dozens of tools into Beijing.
And they're saying to the US government and ASML, this is for 28 nanometer, obviously, right? This is not bad. And then obviously, you know, like in the background, yeah, we're making five nanometer here, right? Are they doing it because they believe in AI or because they want to make Huawei phones? You know, Huawei was the largest TSMC customer for like a few quarters, actually, before they got sanctioned.
Huawei makes most of the telecom equipment in the world, right? You know, phones, of course, modems. But, of course, accelerators, networking equipment.
You know, you go down the whole list – video surveillance chips, right? You kind of go through the whole gamut. A lot of that could use 7 and 5 nanometer.
Do you think the dominance of Huawei is actually a bad thing for the rest of the Chinese tech industry? I think Huawei is so fucking cracked that it's hard to say that, right? Huawei outcompetes Western firms regularly with two hands tied behind their back. Like, what the hell is Nokia, or Sony Ericsson? Trash, right, compared to Huawei. And Huawei is not allowed to sell to European companies or American companies, and they don't have TSMC, and yet they still destroy them, right? And the same applies to the new phone, right? It's like, oh, it's as good as a year-old Qualcomm phone, on a process node that's equivalent to, like, four years old, right? Or three years old.
So it's like, wait, so they actually out-engineered us with a worse process node, you know? So it's like, oh, wow, okay. Like, you know, Huawei is, like, crazy cracked.

Where do you think that culture comes from?

The military, because it's the PLA. It is generally seen as an arm of the PLA.

But, like, how do you square that with the fact that sometimes the PLA seems to mess stuff up?

Oh, like filling water in rockets?

I don't know if that was true. I'm not denying it.
There is like that crazy conspiracy, not conspiracy, you don't know what the hell to believe in China, especially as a not Chinese person. Nobody knows, even Chinese people don't know what's going on in China.
There's like all sorts of stuff like, oh, they're filling water in their rockets. Clearly they're incompetent.
It's like, look, if I'm the Chinese military, I want the Western world to like believe I'm completely incompetent because one day I can just like destroy the fuck out of everything. Right.
With all these hypersonic missiles and all this shit. Right.
Like drones and like, no, no, no, no. We're filling water in our missiles.
These are all fake. We don't actually have a hundred thousand missiles that we manufacture in a facility that's super hyper advanced, and Raytheon is stupid as shit because they can't make missiles nearly as fast. Right? I think that's also the flip side: how much false propaganda is there? Because there's a lot of, like, no, SMIC could never, SMIC could never, they don't have the best tools, blah blah. And then it's like, motherfucker, they just shipped 60 million phones last year with this chip that performs only one year worse than what Qualcomm has. The proof is in the pudding, right? There's a lot of cope, if you will.
I just wonder where it comes from. I do really do just wonder where that culture comes from.
Like there's something crazy about them where they're kind of like everything they touch, they seem to succeed in. And like, I kind of wonder why.
They're making cars. I wonder if it's going on there.
I think, like – supposedly, if we kind of imagine it historically – do you think they're getting something from somewhere? What do you mean, espionage? Yeah. Like, obviously East Germany and the Soviet industry – it was basically a conveyor belt of secrets coming in, and they just used that to run everything. But the Soviets were never good at it; they could never mass produce it. How would espionage explain how they can make things with different processes? I don't think it's just espionage. I think they're just literally cracked.
It has to be something else. They have the espionage without a doubt.
Right? Like, ASML has been known to have been hacked a dozen times. Right, right.
Or at least a few times, right? And they've been known to have people sued who made it to China with a bunch of documents, right? Not just ASML, but every fucking company in the supply chain. Cisco code was literally in, like, early Huawei, like, routers and stuff, right? Like, you go down the list, it's like, everything is, but then it's like, no, architecturally, the Ascend 910B looks nothing like a GPU.
It looks nothing like a TPU. It is, like, its own independent thing.
Sure, they probably learned some things from some places, but, like, it is just, like, they're good at engineering. It's 996.
Like, wherever that culture comes from, they do good. Yeah, they do very good, I know. Well, another thing I'm curious about – beyond where their culture comes from – is how does it stay there? Because with American firms, or any other firm, you can have a company that's very good, but over time it gets worse, right? Like Intel or many others. I guess Huawei just isn't that old of a company, but it's hard to be a big company and stay good. That is true.
I think a word that I hear a lot with regards to Huawei is a struggle, right? And China has a culture of the Communist Party that's really big on struggle. I think Huawei, in a sense, they sort of brought that culture into the way they do it.
Like you said before, right? They go crazy because they think that in five years that they're going to fight the United States. And so like literally everything they do every second is like their country depends on it, right? It's like it's the Andy Grovian mindset, right? Like shout out to like the base intel, but like only the paranoid survive, right? Like paranoid Western companies do well.
Why did, why did Google like really screw the pooch on a lot of stuff? And then why are they like resurging kind of now is because they got paranoid as hell, right? But they weren't paranoid for a while. Um, if Huawei is just constantly paranoid about like the external world and like, Oh fuck, we're going to die.
Oh fuck. Like, you know, they're going to beat us.
Our country depends on it. We're going to get the best people from the entire country – the best at whatever they do – and tell them: if you do not succeed, you will die, your family will die, your family will be enslaved, everything will be terrible, at the hands of the evil Western pigs, right? They don't say it that way anymore, but it's like, everyone is against China, China is being defiled, right? And they're saying, that is all on you, bro. If you can't get that fucking radio to be slightly less noisy and transmit five percent more data, it's the Summer Palace fire all over again. The British are coming, and they will steal all the trinkets and everything.
That's on you. Why isn't there more vertical integration in the semiconductor industry? Why are there like, this subcomponent requires this other subcomponent from this other company which requires this subcomponent from another company? Why is more of it not done in-house? The way to look at it today is it's super, super stratified, and every industry has anywhere from one to three competitors.
And pretty much the most competitive it gets is like 70% share, 25% share, 5% share in any layer of like manufacturing chips, anything, anything, chemicals, different types of chips. But it used to be vertically integrated.
At the very beginning, it was integrated, right? Where did that stop? What happened was – the funniest thing was – you had companies that used to do it all in one. And then suddenly, sometimes a guy would be like, I hate this.
I think I know how to do better. Spins off, does his own thing, starts his company, goes back to his old company, says, I can sell you a product that's better, right? And that's the beginning of what we call the semiconductor manufacturing equipment industry.
Like, basically, in the 70s, right? Like everyone made their own equipment. 60s and 70s, like you spin off all these people.
And then what happened was that the companies that accepted these outside products and equipment got better stuff. They did better.
Like you can talk about a whole bunch. Like there are companies that were totally vertically integrated in semiconductor manufacturing for decades.
And they are still good, but they're nowhere near competitive.

One thing I'm confused about is the actual foundries themselves – there's fewer and fewer of them every year, right? So there's maybe more companies overall, but the final people who make the wafers, there's less and less. And it's interesting in a way that it's similar to the AI foundation models, where you need to use the revenues from a previous model, or your market share, to fund the next round of ever more expensive development.

When TSMC launched the foundry industry, right, when they started, there was a whole wave of Asian companies that funded semiconductor foundries of their own.
You had Malaysia with SilTerra. You had Singapore with Chartered.
There was Worldwide Semiconductor, which I talked about earlier. There's ones from Hong Kong.
A bunch in Japan.
They all sort of did this thing. And I think the thing was that when you're going to leading edge, it got harder and harder, which means that you had to aggregate more demand from all the customers to fund the next node, right? So technically, what it's kind of doing is aggregating all this money, all this profit, to fund this next node, to the point where now there's no room in the market for an N2 or N3. You can make an argument that, economically, N2 is a monstrosity that doesn't make sense – it should not exist, in some ways, without the immense single concentrated spend of, like, five players in the market.

I'm sorry to completely derail you, but there's this video that's like, ham is disgusting, it's an unholy concoction of meat with no bones or collagen. The way he was describing 2 nanometer is kind of like that, right? It's like the guy who pumps his right arm so much and he's super muscular.

The human body was not meant to be so muscular.

What's the point?

Why is 2 nanometer not justified?

I'm not saying N2 specifically – I mean N2 as a concept. There will come a point where, economically, the next node will not be possible. Like, at all, right? Unless more technology spawns – like AI now making one nanometer or whatever, A16, viable, right? So, like, right before AI spawned, there was a long period of time... It makes it viable, as in it makes it worth it? So, every two years, you get a shrink, right? Yeah.
Like, clockwork, Moore's Law. And then, five nanometer happened.
It took three years. Holy shit.
And then 3 nanometer happened. It also took three years.
Holy shit. Like, is Moore's Law dead, right? Because TSMC didn't deliver on the two-year cadence. And then what did Apple do? When 3 nanometer finally launched, Apple still only moved half of the iPhone volume to 3 nanometer.
So this is like now they did a fourth year of five nanometer for a big chunk of iPhones. Right.
And it's like, oh, is the mobile industry petering out? Then you look at two nanometer and it's going to be a similar, like very difficult thing for the for the industry to pay for this. Right.
Apple, of course, they have, you know, because they get to make the phone, they have so much profit that they can funnel into, like, more and more expensive chips. But finally, like, that was running out, right? How economically viable is two nanometer just for one player, TSMC? You know, ignore Intel, ignore Samsung.
Just, you know, because Samsung is paying for it with memory, not with their actual profit. And Intel is paying for it from their former CPU monopoly money, and now private equity money and debt and subsidies and people's salaries. Yeah. But anyways, there's a strong argument that funding the next node would not be economically viable anymore if it weren't for AI taking off and generating all this humongous demand for the most leading edge chip.
So how big is the difference between seven to five to three nanometers? Is it a huge deal in terms of who can build the biggest cluster? So there's this simplistic argument that, like, oh, moving a process node only saves me X percent in power, right? And that has been petering out, right? When you moved from like 90 nanometer to 80-something, or 70-something, you got 2x, right? Dennard scaling was still intact, right? But now when you move from 5 nanometer to 3 nanometer, first of all, you don't double density. SRAM doesn't scale at all.
Logic does scale, but it's like 30%. So all in all, you only save like 20% in power per transistor.
But because of like data locality and movement of data, you actually get a much larger improvement in power efficiency by moving to the next node than just the individual transistors power efficiency benefit. Because, you know, for example, you're multiplying a matrix that's like, you know, 8,000 by 8,000 by 8,000.
And then like, you can't fit that all on one chip. But if you could fit more and more, you have to move off chip less, you have to go to memory less, et cetera, right? So the data locality helps a lot too.
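To put a number on the data-locality point, here is a minimal back-of-envelope sketch (the matrix size, precision, and on-chip SRAM capacity below are illustrative assumptions, not figures from the episode):

```python
# Rough illustration of the data-locality point: one large matmul operand
# alone exceeds on-chip memory, so it has to be tiled and streamed from HBM.
matrix_dim = 8192            # one side of a square matrix (an ~8k matmul)
bytes_per_element = 2        # FP16 / BF16
on_chip_sram_bytes = 50e6    # ~50 MB of on-chip SRAM, a ballpark assumption

one_matrix_bytes = matrix_dim * matrix_dim * bytes_per_element
print(f"One {matrix_dim}x{matrix_dim} FP16 matrix: {one_matrix_bytes / 1e6:.0f} MB")  # ~134 MB
print("Fits on-chip?", one_matrix_bytes <= on_chip_sram_bytes)  # False -> tile and spill to HBM

# More density per chip means larger tiles stay on-chip and fewer trips to
# off-chip memory, which is a big chunk of where the power goes -- on top of
# the per-transistor power savings from the new node.
```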
But AI really, really, really wants new process nodes because, A, power used is a lot less, higher density, higher performance, of course. But the big deal is: if I have a gigawatt data center, how many more flops can I get? If I have a two gigawatt data center, how many more flops can I get? If I have a 10 gigawatt data center, how many more flops can I get? Right? And you look at the scaling and it's like, well, everyone needs to go to the most recent process node as soon as possible.
I want to ask the normie question, uh, for like everybody's world. I want to phrase it that way.
Okay. I want to ask a question that's like, not for you nerds.
I think John and I could communicate to the point where you wouldn't even know what the fuck we're talking about. Okay.
Suppose Taiwan is invaded or Taiwan has an earthquake. Nothing is shipped out of Taiwan.
From now on, what happens next? The rest of the world, how would it feel its impact? A day in, a week in, a month in, a year in? I mean, it's a terrible thing. It's a terrible thing to talk about.
I think it's like, can you just say it's all terrible? Everything's terrible? Because it's not just like leading edge. Leading edge, people will focus on leading edge.
But there's a lot of trailing edge stuff that like people depend on every day. I mean, we all worry about AI.
The reality is you're not going to get your fridge. You're not going to get your cars.
You're not going to get everything. It's terrible.
And then there's the human part of it, right? It's all terrible. Can we, like, it's depressing.
I think. And I live there.
Yeah. I think day one, the market crashes a lot, right? You've got to think about it: the six biggest companies, the Magnificent Seven, whatever the heck it's called, are like 60, 75% of the S&P 500, and their entire business relies on chips, right? Google, Microsoft, Apple, NVIDIA – you go down the list – Meta, right? They all entirely rely on AI.
And you would have a tech reset, like extremely insane tech reset, by the way, right? Like, so market would crash a week, a day in, a couple of weeks in, right? Like people are preparing now. People are like, oh shit, like let's start building fabs.
Fuck all the environmental stuff. Like war is probably happening.
But like the supply chain is trying to like figure out what the hell to do to refix it. But six months in, the supply of chips for making new cars, gone or sequestered to make military shit, right? You can no longer make cars.
And we don't even know how to make cars without semiconductors anymore, right? It's this unholy concoction with all these chips, right? Cars are like 40% chips now. There's chips in the tires, even.
There's like 2,000 plus chips. Every Tesla door handle has like four chips in it.
It's like, what the fuck? Why? But it's, like, shitty microcontrollers and stuff. There's 2,000-plus chips even in an ICE vehicle, an internal combustion engine vehicle, right? And every engine has dozens and dozens of chips, right? Anyways, this all shuts down – well, not all of the production.
There's some in Europe. There's some in the US.
There's some in Japan. Yeah, some in Singapore.
They're going to bring in a guy to work on Saturday until four. Yeah, I mean, yeah.
So you have like, TSMC always builds new fabs. That old fab, they tweak production up a little bit more and more and new designs move to the next, next, next node and old stuff fills in the old nodes, right? So, you know, ever since TSMC has been the most important player, and not just TSMC, there's UMC there, there's PSMC there, there's a number of other companies there, Taiwan's share of like total manufacturing has grown every single process node.
So in like 130 nanometer, there's a lot, and including like many chips from like Texas Instruments or analog devices or like NXP, like all these companies, 100% of it is manufactured in Taiwan, right, by, you know, either TSMC or UMC or whatever. But then you like step forward and forward and forward, right? Like 28 nanometer, like 80% of the world's production of 28 nanometers in Taiwan.
Oh fuck. Right.
Like, you know, and everything in 28 nanometer, like what's made on 28 nanometer today, tons of microcontrollers and stuff, but also like every display driver IC. Cool, even if I can make my Mac chip, I can't make the chip that drives the display. You just go down the list: everything. No fridges, no automobiles, no weed whackers, because that shit has chips. My toothbrush has fucking Bluetooth in it, right? Like, why? I don't know. But there are so many things that just, like, poof, we're tech reset.
We were supposed to do this interview like many months ago, and then I kept delaying because I'm like, ah, I don't understand any of this shit.
But like, it is like a very difficult thing to understand. But I feel like with AI, it's like, it's not that.
No, you've just spent time. You've spent the time.
But I also feel like it's like less complicated. It feels like it's a kind of thing where like in an amateur kind of way, you can like, you know, pick up what's going on in the field.
In this field, the thing I'm curious about is, how does one learn the layers of the stack? Because with the layers of the stack, the papers aren't just online. You can't just look up the tutorial on how the transformer works or whatever. There are many layers of really difficult stuff. There are 18-year-olds who are just cracked at AI already, right? And there are high school dropouts that get, like, jobs at OpenAI. This existed in the past, right? Pat Gelsinger, current CEO of Intel, went straight to work.
He, like, grew up in the Amish area of Pennsylvania, and he went straight to work at Intel, right? Because he's just cracked, right? That is not possible in semiconductors today. You can't even get a job at a tool company without at least a freaking master's in chemistry, right? And probably a PhD. Of the like 75,000 TSMC workers, it's like 50,000 have a PhD or something insane, right? There's a next-level amount of how specialized everything's gotten. Whereas today, you can take, like, you know, Sholto. When did he start working on AI? Not that long ago. Not to say anything bad about Sholto.
No, no, no, but he's cracked. He's like Omega cracked at like what he does.
What he does, you could pick him up and drop him into another part of the AI stack. First of all, he understands it already.
And then second of all, he could probably become cracked at that too. Right.
Whereas that is not the case in semiconductors. Right.
You, one, you specialize like crazy. You can't just pick it up. Um, you know, like Sholto, I think, what did he say? He just started, like, he was a consultant at McKinsey.
And at like night he would like read papers about robotics and like run experiments and whatever. Yeah.
And then, like, people noticed. They were like, who the hell is this guy? And why is he posting this? I thought everyone who knew about this was at Google already. It's like, come to Google, right? That can't happen in semiconductors, right? It's just not possible. One, arXiv is a free thing, whereas the paper publishing industry is abhorrent everywhere else, and you just cannot download IEEE papers or SPIE papers or papers from other organizations. And then two, at least up until late 2022 or really early 2023 in the case of Google, right, I think up until the PaLM inference paper, all the best stuff was just posted on the internet. After that, you know, it's clamping down a little bit by the labs, but there are also still all these other companies making innovations in the public, and what is state of the art is public. That is not the case in semiconductors.
Semiconductors have been shut down since the 1960s, 1970s, basically. I mean, like, it's kind of crazy how little information has been formally transmitted from one country to another.
Like, the last time you could really think of this was like 19, maybe the Samsung era, right? So then how do you guys keep up with it? Well, we don't know it. I don't personally.
I don't think I know it. I don't, I mean, I...
If you don't know it, what are you making videos about? It's crazy because, like, I spoke to one guy, he's like a PhD in etch or something, one of the top people in the world in etch, and he's like, man, you really know lithography, right? And I'm just like, I don't feel like I know lithography.
But then you've talked to the people who know lithography, you've done pretty good work in packaging, right? Nobody knows anything. They all have Gell-Mann amnesia.
They're all in this, like, single well, right? They're digging deep. They're digging deep for what they're getting at.
But, you know, they don't know the other stuff well enough. And in some ways, I mean, nobody knows the whole stack.
Nobody knows the whole stack. The stratification of just, like, manufacturing is absurd.
Like, the tool people don't even know exactly what Intel and TSMC do in production. And vice versa.
They don't know exactly how the tools optimize like this. And it's like, how many different types of tools there are? Dozens.
And each of those has, like, an entire tree of, like, all the things that we've built. All the things we've invented, all the things that we continue to iterate upon.
And then, like, here's the breakthrough innovation that happens every few years in it too. So if that's the case, if nobody knows the whole stack, then how does the industry coordinate to be like, you know, in five, in two years, we want to go to the next process node, which has gate-all-around?
And for that, we need X tools and X technologies developed by whatever. That's really fascinating.
It's a fascinating social kind of phenomenon, right? You can feel it. I went to Europe earlier this year.
Dylan was like, had allergies. But like, I was like, talking to those other people.
And you can just, it's like gossip. It's gossip.
You start feeling people coalescing around something, right? Early on, we used to have SEMATECH, where all these American companies came together and talked and hammered things out, right? But SEMATECH in reality was dominated by a single company, right? But then, you know, nowadays it's a little more dispersed, right? You feel like it's a blue moon arising kind of thing. Like they are going towards something. They know it. And then suddenly the whole industry is like, this is it. Let's do it.
I think it's like God came and proclaimed it: we will shrink density 2x every two years. Gordon Moore, he made an observation, and then it didn't just stop there. It went way further than he ever expected, because it was like, oh, there's line of sight to get to here and here. And he predicted, like, seven, eight years out, multiple orders of magnitude of increases in transistors.
And it came true. But then by then, the entire industry was like, this is obviously true.
This is the word of God. And every engineer in the entire industry, tens of millions of people, like literally, this is what they were driven to do.
No, no, every single engineer didn't believe it. But like people were like, yes, to hit the next shrink, we must do this, this, this, right? And this is the optimizations we make.
And then you have this stratification, every single layer and abstraction layers, every single layer through the entire stack to where people, it's an unholy concoction. I mean, you keep saying this word, but like you, no one knows what's going on because there's an abstraction layer between every single layer.
And on this layer, the people below you and the people above you know what's going on. And then beyond that, it's like, okay, I can try to understand, but not really like...
But I guess that doesn't answer the question of when IRDS or whatever, I don't know, was it 10, 20 years ago? I watched your video about it where they're like, we're going to do EUV instead of the other thing, and this is the path forward.
How do they do that if they don't have the whole sort of picture of like different constraints, different tradeoffs, different blah, blah, blah. They kind of they argue it out.
They get together and they talk and they argue. And basically at some point, a guy somewhere says, I think we can move forward with this.

Semiconductors are so siloed, and the data and knowledge within each layer is, A, not documented online at all, right? There's no documentation, because it's all siloed within companies. And B, there's a lot of human element to it, because a lot of the knowledge, like as John was saying, is apprentice-to-master, apprentice-to-master type of knowledge, or "I've been doing this for 30 years" and there's an amazing amount of intuition on what to do just when you see something, to where AI can't just learn semiconductors like that.
But at the same time, there's a massive talent shortage and a limit on the ability to move forward on things, right? Like, most of the equipment in semiconductor fabs runs on Windows XP, right? Each tool has a Windows XP server on it, or all the chip design tools run on, like, CentOS version 6, right? And that's old as hell, right? So there are so many areas where it's like, why is this so far behind? And at the same time, it's so hyper-optimized. The tech stack is so broken in that sense.
They're afraid to touch it. They're afraid to touch it.
Yeah. Because it's an unholy amalgamation.
It's unholy. It should not work.
It should not work. This thing should not work.
It's literally a miracle. So you have all the abstraction layers, but then it's like, one is there's a lot of breakthrough innovation that can happen now stretching across abstraction layers.
But two is because there's so much inherent knowledge in each individual one, what if I can just experiment and test at a thousand X velocity or a hundred thousand X velocity? And so some examples of where this is already shown true is some of NVIDIA's AI layout tools, right? And Google as well, like laying out the circuits within a small blob of the chip with AI. Some of these like RL design things, some of these, there's a lot of like various like simulation things.
But is that design or is that manufacturing? It's all design, right? Most of it's design. Manufacturing has not really seen much of this yet, although it's starting to come in.
Inverse lithography, maybe. Yeah, ILT and Sam, maybe.
I don't know if that's AI. That's not AI.
Anyways, there's tremendous opportunity to bring breakthrough innovation simply because there is so many layers where things are unoptimized, right? So you see like all these like, oh, single digit, mid, you know, low double digit like advantages just from like RL techniques from like AlphaGo type stuff, like, or like not RL from AlphaGo, but like five, six, seven, eight year old RL techniques being brought in. But like generative AI being brought in could like really revolutionize the industry, you know, although there's a massive data problem.
And can you give those, can you give the possibilities here in numbers in terms of maybe like a flop per dollar or whatever the relevant thing here is? Like, how much do you expect in the future to come from process node improvements? How much from just like how the hardware is designed because of AI? If you like had to disaggregate, we're talking specifically for like GPUs. Yeah.
Like if you had to disaggregate future improvements. I think, you know, first, it's important to state that semiconductor manufacturing and design is the largest search space of any problem that humans work on, because it is the most complicated thing that humans do.
And so, you know, when you think about it, right, there's 1E10, 1E11, right? 100 billion transistors on leading edge chips, right? Blackwell has 220 billion transistors or something like that. So what is, and those are just on-off switches.
And then think about every permutation of putting those together, contact, gate, drain, source, et cetera, blah, blah, blah, with wires, right? There are 15 metal layers, right, connecting every single transistor in every possible arrangement. This is a search space that is literally almost infinite, right? The search space is much larger than any other search space that humans know.
And what is the nature of the search? Like, what are you trying to optimize over? Well, useful compute, right? What is, you know, if the goal is optimize intelligence per picojoule, right? And intelligence is some nebulous nature of like what the model architecture is. And then picojoule is like a unit of energy, right? How do you optimize that? So there's humongous innovations possible in architecture, right? Because vast majority of the power on a H100 does not go to compute.
And there are more efficient compute designs, you know, ALUs, arithmetic logic units, right? But even then, the vast majority of the power doesn't go there, right? The vast majority of the power goes to moving data around, right? And when you look at what the movement of data is, it's either networking or memory. You have a humongous amount of data movement relative to compute and a humongous amount of power consumption relative to compute.
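(To make the data-movement point concrete, here's a rough back-of-the-envelope sketch. The per-operation energy figures are ballpark assumptions for a modern accelerator, not numbers from the conversation.)

```python
# Rough energy budget: compute vs. data movement.
# Both per-operation energies are ballpark ASSUMPTIONS, not measured values.

FP16_FLOP_PJ = 0.5           # assumed: one 16-bit multiply-accumulate on-chip, in picojoules
HBM_READ_PJ_PER_BYTE = 30.0  # assumed: reading one byte from HBM, in picojoules

def movement_vs_compute(bytes_per_flop: float) -> float:
    """Energy spent moving data divided by energy spent on the math itself."""
    return (bytes_per_flop * HBM_READ_PJ_PER_BYTE) / FP16_FLOP_PJ

# A memory-bound kernel might touch ~1 byte of HBM per FLOP;
# a well-blocked matmul might touch ~0.01 bytes per FLOP.
for label, bpf in [("memory-bound kernel (1 B/FLOP)", 1.0),
                   ("well-blocked matmul (0.01 B/FLOP)", 0.01)]:
    print(f"{label}: data movement costs {movement_vs_compute(bpf):.1f}x the compute energy")
```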
And so how can you minimize that data movement and then maximize the compute? There are 100x gains from architecture. Even if we literally stopped shrinking, I think we could have 100x gains from architectural advancements.
Over what time period?
The question is how much can we advance the architecture, right? The other challenge is the number of people designing chips has not necessarily grown in a long time, right?
Yeah, company to company, it shifts. But like within like the semiconductor industry in the US and the US makes, you know, designs the vast majority of leading edge chips, the number of people designing chips has not grown much.
What has happened is the output per individual has soared because of EDA, electronic design automation tooling, right? Now, this is all still classical tooling. There's just a little inkling of AI in there so far, right? What happens when we bring this in is the question, and how you can solve this search space somehow with humans and AI working together to optimize it, so it's not the case that most of the power is data movement and the compute is actually very small. On the flip side, compute can get like 100x more efficient just with design changes, and then you can minimize that data movement massively, right? So you can get a humongous gain in efficiency just from architecture itself.
And then process node helps you innovate that there, right? And power delivery helps you innovate that. System design, chip-to-chip networking helps you innovate that, right? Like memory technologies, there's so much innovation there.
And there's so many different vectors of innovation that people are pursuing simultaneously, to where like NVIDIA gen-to-gen-to-gen will do more than 2x performance per dollar. I think that's very clear. And then like hyperscalers are probably going to try and shoot above that, but we'll see if they can execute.
There's like two narratives you can tell here of how this happens.
One is that these AI companies who are training the foundation models, who understand the trade-offs of like how much is the marginal increase in compute versus memory worth to them and what trade-offs do they want between different kinds of memory. They understand this.
And so therefore, with the accelerators they build, they can make these sorts of trade-offs in a way that's most optimal, and also design the architecture of the model itself in a way that reflects what the hardware trade-offs are. Another is NVIDIA, because it has, like, I don't know how this works.
Presumably they have some sort of, like, know-how. Like, they're accumulating all this, like, knowledge about how to better design this architecture and, like, also better search tools for it and so on.
Who has basically, like, the better moat here: will NVIDIA keep getting better at getting this 100x improvement, or will it be OpenAI and Microsoft and Amazon and Anthropic, who are designing their own accelerators, who will keep getting better at designing the accelerator? I think there's a few vectors to go here, right? One, as you mentioned, and I think it's important to note, is that hardware has a huge influence on the model architecture that's optimal. So it's not a one-way street. The optimal model for Google to run on TPUs, given a given amount of dollars, a given amount of compute, is different architecturally than what it is for OpenAI with NVIDIA stuff, right? It is absolutely different.
And then like, even down to like networking decisions that different companies do and data center design decisions that people do, the optimal, like if you were to say, you know, X amount of compute of TPU versus GPU, compute optimally, what is the best thing? You'll diverge in what the architecture is. And I think that's important to know, right? Can I ask about that real quick? So earlier we were talking about how China has the H20s or B20s.
And there, there's much less compute per memory bandwidth and per amount of memory, right? Does that mean that Chinese models will actually have very different architecture and characteristics than American models in the future? So you can take this to a very large leap, and it's like, oh, neuromorphic computing or whatever is the optimal path, and that looks very different than what a transformer does. Or you could take it to a simple thing, which is the level of sparsity, and of course coarse-grained sparsity, i.e., experts and all this sort of stuff. The arrangement of what exactly the attention mechanism is, because there are a lot of tweaks, it's not just pure transformer attention, right? Or like, hey, d_model, how wide versus tall the model is, right? That's very important, like d_model versus number of layers, right? These are all things that would be different, and I know they're different between, say, a Google and an OpenAI, and what is optimal.
But it really starts to get like, hey, if you were limited on a number of different things. Like, China invests humongously in compute-in-memory, you know, which is basically where the memory cell is directly coupled to, or is, the compute cell, right? These are things China's investing in hugely. And you go to conferences and it's like, oh, there are 20 papers from Chinese companies slash universities about compute-in-memory. Or, you know, hey, because the flop limitation is here, maybe NVIDIA pumps up the on-chip memory and changes the architecture, because they still stand to benefit by tens of billions of dollars from selling chips to China, right? Today, it's just neutered American chips, right, neutered versions of the chips that go to the US. But it'll start to diverge more and more architecturally, because they'd be stupid not to make chips for China, right? And Huawei, obviously, again, has their constraints, right? Where are they limited? On memory. But they have a lot of networking capabilities, and they could move to certain optical networking technologies directly onto the chip much sooner than we could, right? Because that is what's optimal for them within their search space of solutions, right? Because this whole area is blocked off.
It It's kind of really interesting to see, to think about like the development of how Chinese AI models will differ from American AI models because of these changes or these constraints. And it applies to use cases.
It applies to data, right? Like American models are very important about like, let me learn from you, right? Let me be able to use you directly as a random consumer, right? That is not the case for Chinese model, I assume, right? Because there's probably very different use cases for them. China crushes the West at video and image recognition, right? At ICML, like Albert Gu of Cartesia, like state-space models, like every single Chinese person was like, can I take a selfie with you? Man was harassed.
In the US, like you see Albert and he's like, it's awesome. He invented state space models, but it's not like state space models are like here.
But that's because state space models potentially have like a huge advantage in like video and image and audio, which is like stuff that China does more of and is further along and has better capabilities in. Right? So it's like there are already like – Because of all the surveillance cameras there.
Yeah, That's the quiet part out loud. Right.
But like, there's already divergence in like capabilities there. Right.
Like, you know, you look at image recognition, China like destroys American companies. Right.
On that. Right.
Because because the surveillance, you have like this divergence in tech tree and like people can like start to design different architectures within the constraints you're given. Yeah.
And everyone has constraints, but the constraints different companies have are different, right? And so Google's constraints have led them to build a genuinely different architecture. But now if you look at Blackwell and then what's said about TPU v6, I'm not going to say they're converging, but they are getting a little bit closer in terms of, like, how big the matmul unit size is, and some of the topology and world size of the scale-up versus scale-out network.
Like there is some like convergence slightly, like not saying they're similar yet, but like already they're starting to, but then there's different architectures that people could go down and paths. So you see stuff like from all these startups that are trying to go down different tech trees because maybe that'll work.
But there's a self-fulfilling prophecy here too, right? All the research is in transformers that are very high arithmetic intensity because the hardware we have is very high arithmetic intensity and transformers run really well on GPUs and TPUs. And like you sort of have a self-fulfilling prophecy.
If all of a sudden you have an architecture which is theoretically way better, but you can only get like half of the usable flops out of your chip, it's worthless, because even if it's 30 percent more compute efficient, it's half as fast on the chip.
Right. So there's all sorts of like tradeoffs and like self-fulfilling prophecies of what do what path do people go down? John and Dylan have talked a lot in this episode about how stupefyingly complex the global semiconductor supply chain is.
The only thing in the world that approaches this level of complexity is the Byzantine web of global payments. You're stitching together legacy tech stacks and regulations that differ in every jurisdiction.
In Japan, for example, a lot of people pay for online purchases by taking a code to their corner store and punching it into a kiosk. Stripe abstracts all this complexity away from businesses.
You can offer customers whatever payment experience they're most likely to use wherever they are in the world. And Stripe is how I invoice advertisers for this very podcast.
I doubt that they're punching in codes at a kiosk in Japan, but if they are, Stripe will handle it. Anyways, you can head to Stripe.com to learn more.
If you are made head of compute of a new AI lab, if SSI came to you, Ilya Sutskever's new lab, and they're like, Dylan, we give you $1 billion. You are head of compute.
Help us get on the map. We're going to compete with the Frontier Labs.
What is your first step? Okay, so the constraints are you're a U.S. slash Israeli firm because that's what SSI is, right? And your researchers are in the U.S.
and Israel. You probably can't build data centers in Israel because power is expensive as hell.
And it's probably, like, risky maybe. I don't know.
So still in the U.S. most likely.
Most of the researchers are here, or a lot of them are in the U.S., right? Like Palo Alto or whatever. So I guess you need a significant chunk of compute.
Obviously, the whole pitch is you're going to make some research breakthrough that's like a compute efficiency win, data efficiency win, whatever it is. You're going to make some breakthrough, but you need compute to get there, right? Because your GPUs per researcher is your research velocity, right? Obviously, like, data centers are very tapped out, right? Not in terms of tapped out, but like every new data center that's coming up, most of them have been sold, which has led people like Elon to go through this like insane thing in Memphis, right?
I'm just trying to square the circle, yeah, on that question.
I kid you not, in my group house group chat, there have been two separate people who have been like, I have a cluster of H100s and I have a long lease on them, but I'm trying to sell them off. Is it like a buyer's market right now? Because it does seem like people are trying to get rid of them.
So I think for the Ilya question, a cluster of like 256 GPUs or even 4K GPUs is kind of cope, right? It's not enough, right? Yes, you're going to make compute efficiency wins, but with a billion dollars, you probably just want the biggest cluster in one individual spot.
And so small amounts of GPUs are probably not usable for them, right? And that's what most of the sales are, right? You go and look at GPU List or Vast or Foundry, or 100 different GPU resellers, and the cluster sizes are small. Now, is it a buyer's market? Yeah, last year, you would buy H100s for like $4 or $3.
Like if you, you know. An hour.
An hour, right? For shorter term or midterm deals, right? Now it's like, if you want a six month deal, you could get like $2.15 or less, right? And like the natural cost, if I have a data center, right? And I'm paying like standard data center pricing to purchase the GPUs and deploy them is like $1.40. And then you add on the debt because I probably took debt to buy the GPUs or cost equity, cost of capital gets up to like $1.70 or something.
Right. And so you see deals that are like the good deals, right? Like Microsoft renting from CoreWeave, like $1.90 to $2.
Right. So people are getting closer and closer to like, there's still a lot of profit, right? Cause the natural rate, even after debt and all this is like $1.70.
So there's still a lot of profit when people are selling in the low twos. GPU companies, people are deploying them, but it is a buyer's market in the sense that it's gotten a lot cheaper. But the cost of compute is going to continue to tank, right? I don't remember the exact name of the law, but it's effectively Moore's law, right? Every two years, the cost of transistors halved, and yet the industry grew, right? Every six months or three months, the cost of intelligence drops. You know, like OpenAI and GPT-4, what, early 2023, right? $120 per million tokens or something like that was roughly the cost, and now it's like 10, right? The cost of intelligence is tanking, partially because of compute, partially because of the models' compute efficiency wins, right? I think that's a trend we'll see. And then that's going to drive adoption as you scale up and make it cheaper and scale up and make it cheaper.
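(Pulling the per-GPU-hour figures quoted above into one place. These are the conversation's round numbers, with midpoints assumed for the quoted ranges; this is not a real pricing model.)

```python
# Illustrative H100 rental economics from the figures quoted above (all $/GPU-hour).

ALL_IN_COST = 1.70  # owning + deploying, including debt / cost of capital, per the conversation

market_rates = {
    "spot-ish, last year": 3.50,               # quoted as "$4 or $3"; midpoint assumed
    "6-month deal, now": 2.15,                 # "you could get like $2.15 or less"
    "Microsoft renting from CoreWeave": 1.95,  # quoted as $1.90 to $2; midpoint assumed
}

for deal, price in market_rates.items():
    margin = (price - ALL_IN_COST) / price
    print(f"{deal}: ${price:.2f}/hr -> {margin:.0%} gross margin over ${ALL_IN_COST:.2f} all-in cost")
```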
Right, right, right. Anyways, what were you saying, if you're head of compute at SSI?
Okay, head of compute at SSI. That was very intense.
There's obviously no free data center lunch, right? And you can just, you know, take that based on the data we have: it shows that there's no free lunch per se. Like, immediately today, you need compute for a large cluster size, or even six months out, right? There's some, but not a huge amount, because of what xAI did, right? xAI is like, oh shit, where are we going to go? We're going to go buy a Memphis factory, put a bunch of generators outside, mobile generators usually reserved for natural disasters, a Tesla battery pack, draw as much power as we can from the grid, tap the natural gas line that's going to the natural gas plant like two miles away, the gigawatt natural gas plant, and just send it and get a cluster built as fast as possible. Now you're running 100K GPUs, right? And that costs about $4 or $5 billion, right? Not $1 billion.
So the scale that SSI has is much smaller by the way. Right.
So, so their size of cluster will be, you know, maybe one third or one fourth of the size, right? So now you're talking about 25 to 32 K cluster, right? There, you still don't have that, right? No one is willing to rent you a 32K cluster today, no matter how much money you have, right? Even if you had more than a billion dollars. So you now, it makes the most sense to build your own cluster one instead of renting it or get a very close relationship like a OpenAI Microsoft with CoreWeave or OpenAI Microsoft with Oracle slash Crusoe.
The next step is Bitcoin mining sites, right? So OpenAI has a data center in Texas, right? Or it's going to be their data center. It's like they kind of contracted all that.
CoreWeave, there is a 300 megawatt natural gas plant on site powering these crypto mining data centers from the company called Core Scientific. And so they're just converting that.
There's a lot of conversion, but like the power's already there, the power infrastructure is already there. So it's really about like converting it, getting it ready to be water cooled, all that sort of stuff, and convert it to 100,000 GB200 cluster.
And they have a number of those going up across the country. But that's also tapped out to some extent, because NVIDIA is doing the same thing in Plano, Texas, for a 32,000 GPU cluster that they're building.
And so, NVIDIA is doing that? Well, they're going through partners, right? Because this is the other interesting thing is the big tech companies can't do crazy shit like Elon did. Why? ESG.
Oh, interesting. They can't just do crazy shit like...
Actually, do you expect Microsoft and Google and whoever to drop their net zero commitments as the scaling picture intensifies? Yeah, yeah. So like, what xAI is doing, right, is not that polluting, you know, on the scheme of things, but you have 14 mobile generators and you're just burning natural gas on site on these mobile generators that sit on trucks, right? And then you have power coming directly from two miles down the road, where there's a natural gas plant as well, right? There's no unequivocal way to say any of the power is green.
You go to the CoreWeave thing: there's a natural gas plant literally on site from Core Scientific and all that, right? And then the data centers around it are horrendously inefficient, right? There's this metric called PUE, which is basically how much power is brought in versus how much gets delivered to the chips, right? And the hyperscalers, because they're so efficient or whatever, their PUE is like 1.1 or lower, right? I.e., if you get a gigawatt in, 900 megawatts or more gets delivered to chips, right? Not wasted on cooling and all these other things. This Core Scientific one is going to be like 1.5, 1.6, i.e.
Even though I have 300 megawatts of generation on site, I only deliver like 180, 200 megawatts to the chips.
Given how fast solar is getting cheaper, and also the fact that, you know, the reason solar is difficult elsewhere is that you've got to power the homes at night, here I guess it's theoretically possible to figure out, you know, only running the clusters in the day or something?
Absolutely not. That's not possible, because it's so expensive to have these GPUs. Yeah. So when you look at the power cost of a large cluster, it's trivial to some extent, right? The meme that, oh, you can't build a data center in Europe or East Asia because the power is expensive, that's not really relevant. Or that power is so cheap in China and the US that those are the only places you can build data centers.
That's not really the real reason. It's the ability to generate new power for these activities, and the economic regulation around that, which is why it's really difficult.
But the real thing is, if you look at the cost of ownership of a GPU, of an H100, let's just say you gave me, you know, a billion dollars and I already have a data center. I already have all this stuff.
I'm paying regular rates for the data centers, I'm not paying through the nose or anything, and I'm paying regular rates for power, not paying through the nose.
Power is sub-15% of the cost. It's sub-10% of the cost, actually, right? The biggest chunk, like 75% to 80% of the cost, is just the servers, right? And this is on a multi-year basis, including debt financing, including cost of operation, all that, right? When you do a TCO, total cost of ownership, it's like 80% the GPUs, 10% the data center, 10% the power, rough numbers, right? So it's kind of irrelevant how expensive the power is, right? Yeah.
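(As a sanity check on the "power price is almost irrelevant" point, a minimal sketch using the rough 80/10/10 split above. The $2.00/GPU-hour baseline is an assumed figure purely so the percentages have something to multiply against.)

```python
# TCO split per the rough numbers above: ~80% servers/GPUs, ~10% data center, ~10% power.

BASELINE = 2.00  # assumed all-in $/GPU-hour
SPLIT = {"servers": 0.80, "data_center": 0.10, "power": 0.10}

def all_in(power_multiplier: float) -> float:
    """All-in $/GPU-hour if only the power component gets more expensive."""
    return BASELINE * (SPLIT["servers"] + SPLIT["data_center"]
                       + SPLIT["power"] * power_multiplier)

for mult in (1, 2, 3):
    cost = all_in(mult)
    print(f"power price x{mult}: ${cost:.2f}/GPU-hour ({cost / BASELINE - 1:+.0%} vs. baseline)")
```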
You'd rather do what Taiwan does, right? What do they do when there are droughts, right? They force people to not shower. When there was a power shortage in Taiwan, they basically rerouted power from the residentials, and this will happen in a capitalistic society as well, most likely, because, like, fuck you, why would you not pay X dollars per kilowatt hour? Because to me the marginal cost of power is irrelevant. Really, it's all about the GPU cost and the ability to get the power. I don't want to turn it off eight hours a day.
Maybe let's discuss what would happen if the training regime changes and if it doesn't change.
So, like, you could imagine that the training regime becomes much more parallelizable where it's, like, about, like, coming up with some sort of, like, search or – like, most of the compute for training is used to come up with synthetic data or do some kind of search. And that can happen across a wide area.
In that world, how fast could we – like, just go through the numbers on year after year. And then suppose it actually has to be, you would know more than me, but suppose it has to be the current regime.
And just explain what that would mean in terms of how distributed that would have to be. And then how plausible it is to get clusters of certain sizes over the next two years.
I think it is not too difficult for Ilya's company to get a cluster of like 32K of Blackwell next year. Forget about Ilya's company.
Let's talk about the frontier labs. Like 2025, 2026, 2027.
2025, 2026. Before I talk about the US, I think it's important to note that there's a gigawatt plus of data center capacity in Malaysia next year. Now, that's mostly ByteDance. But power-wise, there's also, like, the humongous damming of the Nile in Ethiopia, and the country uses like one third of the power that that dam generates.
So there's like a ton of power there. How much power does that dam generate? Like it's, it's like over a gigawatt.
Um, and the country consumes like 400 megawatts or something trivial. And it's like, are people bidding for that power? I think people just don't think they can build a data center in fucking Ethiopia. Why not? I don't think the dam is filled yet, is it? No, I mean, the dam could generate that power, they just don't. Okay. Right, like there's a little bit more equipment required, but that's not too hard. Why don't they? Yeah, I think there are true security risks if you're China or if you're a US lab, like, to build a fucking data center with all your IP in fucking Ethiopia. Like, you want AGI to be in Ethiopia? You want it to be that accessible, with people you can't even monitor being the technicians in the fucking data center or whatever, right? Or powering the data center, all these things. There are so many things you could do. You could just destroy every GPU in a data center if you want, if you just fuck with the grid, right? Pretty easily, I think.
People talk a lot about it in the Middle East. There's a 100K GB200 cluster going up in the Middle East, right? And the US, there's clearly stuff the US is doing. You know, G42 is the UAE data center company, cloud company.
Their CEO is a Chinese national or not a Chinese. He's Chinese, basically Chinese allegiance.
But I think OpenAI wanted to use the data center from them, but instead the US, I feel like this is what happened, forced Microsoft to do a deal with them, so that G42 has a 100K GPU cluster, but Microsoft is administering and operating it for security reasons, right? And there's Omniva in Kuwait, like the Kuwaiti super-rich guy spending five-plus billion dollars on data centers, right? You just go down the list, all these countries. Malaysia has, you know, 10-plus billion dollars of AI data center build-outs over the next couple of years, right? You go to every country, and this stuff is happening.
But on the grand scheme of things, the vast majority of the compute is being built in the US, and then China, and then like Malaysia, Middle East, and like rest of the world. And if you're in there, you know, going back to your point, right, like you have synthetic data, you have like the search stuff, you have like you have all these post training techniques.
You have all this, you know, all this ways to soak up flops or you just figure out how to train across multiple data centers, which I think they have. At least Microsoft and OpenAI have figured it out.
What makes you think they figured it out? Their actions. So Microsoft has signed deals north of $10 billion with fiber companies to connect their data centers together.
There are some permits already filed showing people are digging, you know, between certain data centers. So we think with fairly high accuracy that there are five data centers, massive, not just five data centers, sorry, five regions that they're connecting together, each of which comprises many data centers, right? What will be the total power usage of that?
Depends on the time, but easily north of a gigawatt, right? Which is like close to a million GPUs. Well, each GPU is getting more power, higher power consumption too, right? Like it's like, you know, the rule of thumb is like, GPU, H100 is like 700 watts, but then like total power per GPU all in is like 1200, 1300 watts, 1400 watts.
But next generation NVIDIA GPUs are, it's 1200 watts for the GPU, but then it actually ends up being like 2000 watts all in, right? Like, so there's a little bit of scaling of power per GPU, but like you already have 100k cluster, right? OpenAI in Arizona, XAI in Memphis, and many others already building 100K clusters of H100s. You have multiple, at least five, I believe, GB200, 100K clusters being built by Microsoft slash OpenAI slash their partners for them.
And then potentially even more. 500K GB200s, right, is a gigawatt, right? And that's online next year, right? And the year after that, if you aggregate all the data center sites and how much power, and you only look at net adds since 2022 instead of the total capacity at each data center, then you're still north of multi-gigawatt, right? So they're spending 10-plus billion dollars on these fiber deals with a few fiber companies, Lumen, Zayo, you know, a couple other companies. And then they've got all these data centers that they're clearly building 100K clusters on, right? Like the old crypto mining site with CoreWeave in Texas, or this Oracle/Crusoe one in Texas, and then in Wisconsin and Arizona and, you know, a couple other places.
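(The gigawatt figures follow from simple per-GPU power arithmetic. A minimal sketch using the all-in wattages mentioned above; treating those as IT power and layering an assumed PUE on top is a simplification, not something stated in the conversation.)

```python
# Cluster power from per-GPU wattage. All-in watts come from the figures above
# (~1,400 W for an H100 with everything around it, ~2,000 W for a GB200-generation GPU);
# the PUE values are assumptions in the range discussed earlier.

def cluster_power_mw(num_gpus: int, all_in_watts: float, pue: float) -> float:
    """Facility power in megawatts, including cooling/overhead via PUE."""
    return num_gpus * all_in_watts * pue / 1e6

print(f"100K H100s  @ 1,400 W, PUE 1.1: {cluster_power_mw(100_000, 1_400, 1.1):,.0f} MW")
print(f"500K GB200s @ 2,000 W, PUE 1.1: {cluster_power_mw(500_000, 2_000, 1.1):,.0f} MW (~1 GW+)")
```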
There's a lot of data centers being built up, you know, and providers, right? QTS and Cooper and like, you know, you go down the list, there's like so many different providers and self-build, right? Data centers I'm building myself. So, let's just like give the number on like, okay, 2025, Elon's cluster is going to be the big, like, it doesn't matter who it is.
So, then there's the definition game, right? Like, Elon claims he has the largest cluster at 100K GPUs because they're all fully connected. Rather than who it is, I just want to know how many. I don't know if it's better to denominate it in H100s.
100,000 GPUs this year. Okay.
Right? For the biggest cluster. For the biggest cluster.
Next year? Next year, 300,000 to 500,000, depending on whether it's one site or many, right? 700,000, I think, is the upper bound of that. But anyways, it's about when they turn it on, when they can connect them, when the fiber connects it all together.
Anyways, 300,000 to like 500,000, let's say, but those GPUs are 2 to 3x faster, right? Versus the 100K cluster. So on an H100 equivalent basis, you're at a million chips next year.

In one cluster? By the end of the year, yes. No, no, no. So one cluster is like the wishy-washy definition, right? Multi-site, right?
Can you do multi-site? What's the efficiency loss when you do multi-site? Is it possible at all?
I truly believe so. What's the efficiency loss is the question, right?
Okay, would it be like a 20% loss, 50%? This is where you need the secrets, right. And Anthropic's got similar plans with Amazon, and you go down the list, right. And then the year after that, this is 2026, there is a single gigawatt site level, and that's just part of the multiple sites, right. For? For Microsoft. The Microsoft five gigawatt thing happens in 20...
One gigawatt, one site in 2026. But then you have, you know, a number of others.
You have five different locations, each with multiple, some with multiple sites, some with single site. You're easily north of two, three gigawatts.
And then the question is, can you start using the old chips with the new chips? And the scaling, I think, is like, you're going to continue to see flop scaling much faster than people expect, as long as the money pours in, right? Like, that's the other thing: there's no fucking way you can pay for the scale of clusters that are being planned to be built next year for OpenAI unless they raise like 50 to 100 billion dollars, which I think they will raise, like, end of this year, early next year.
50 to 100 billion? Yes.
Are you kidding me? No. Oh my God. This is like, you know, Sam has a superpower, right? It's recruiting and raising money. That's what he's a god at. Will chips themselves be a bottleneck to the scaling? Not in the near term. It's more, again, back to the concentration versus decentralization point. Yeah, yeah.
Because, like, the largest cluster is 100,000 GPUs. NVIDIA has manufactured close to 6 million Hoppers, right? Across last year and this year.
Right? So, like, what? That's fucking tiny, right? So then, why is Sam talking about $7 trillion to build foundries and whatever? Well, this is, you know, draw the line, right? Log-log lines.
Number goes up, right? You know, if you do, if you do that, right, like you're going from 100K to 300 to 500K, where the equivalent is a million, you just 10X year on year. Do that again, do that again or more, right? If you increase the pacing.
What is do that again? So like 2026, the number of H100 equivalents. If you increase the globally produced flops by like 30x year-on-year or 10x year-on-year and the cluster size grows or the cluster size grows by, you know, 3 to 5 to 7x and then you get multi-site going better and better and better, you can get to the point where multi-million chip clusters, even if they're like regionally not connected right next to each other, are right there.
And in terms of flops, it would be 1e what? 1e28? 29? I think 1e30 is very possible, like 28, 29. Wow.
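(For a sense of how those exponents map onto hardware, a back-of-the-envelope sketch. The peak FLOPS per H100-equivalent, utilization, and run length are assumptions for illustration, not figures from the conversation.)

```python
# Back-of-the-envelope training compute: chips x peak FLOPS x utilization x time.
# Peak FLOPS per H100-equivalent, utilization, and run length are ASSUMPTIONS.

SECONDS_PER_DAY = 86_400

def training_flop(h100_equivalents: float, peak_flops: float = 2e15,
                  utilization: float = 0.4, days: float = 100) -> float:
    """Total FLOP delivered over a run."""
    return h100_equivalents * peak_flops * utilization * days * SECONDS_PER_DAY

print(f"1M H100-eq, 100 days:  {training_flop(1e6):.1e} FLOP")            # ~7e27
print(f"10M H100-eq, 300 days: {training_flop(1e7, days=300):.1e} FLOP")  # ~2e29
# Reaching ~1e30 under these assumptions needs roughly another several-fold from
# faster chips, higher utilization, longer runs, or counting synthetic-data and
# inference-time compute alongside pre-training.
```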
And 1e30 you said by 28, 29? Yeah. And so that is literally six orders of magnitude.
That's like 100,000 times more compute than GPT-4. The other thing to say is like the way you count flops on a training run is really stupid.
Like, you can't just do active parameters times tokens times six, right? That's really dumb, because the paradigm, as you mentioned, and you've had many great podcasts on this, is synthetic data and RL stuff, post-training, verifying data, generating data and throwing it away, all sorts of stuff, search, inference-time compute. All these things aren't counted in the training flops. So 1e30 is a really stupid number to say, because by then the actual flops of the pre-training may be X, but the data generation for the pre-training may be way bigger, or the search and inference time may be way, way bigger, right? Right. But also, because you're doing this sort of adversarial synthetic data, where the thing you're weak at is what you make synthetic data for, it might be way more sample efficient.
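(The "parameters times tokens times six" rule of thumb being pushed back on here is the standard dense pre-training estimate, roughly C ≈ 6·N·D. A minimal sketch of why it undercounts in the regime described; the extra terms and their multiples are purely illustrative assumptions.)

```python
# Rule of thumb: pre-training compute C ~= 6 * N * D,
# with N = active parameters and D = training tokens. The extra terms below
# (synthetic-data generation, post-training) are illustrative ASSUMPTIONS,
# just to show why 6*N*D undercounts in this regime.

def pretrain_flop(active_params: float, tokens: float) -> float:
    return 6 * active_params * tokens

N, D = 1e12, 2e13            # assumed: 1T active params, 20T tokens
pre = pretrain_flop(N, D)    # ~1.2e26 FLOP

synthetic_data_gen = 5 * pre  # assumed multiple for generating/verifying data
post_training = 2 * pre       # assumed multiple for RL / post-training

total = pre + synthetic_data_gen + post_training
print(f"6*N*D alone:                       {pre:.1e} FLOP")
print(f"plus assumed synthetic/post-train: {total:.1e} FLOP")
```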
So even though coming up with this... The pre-training flops will be irrelevant, right? I actually don't think pre-training flops will be 1E30.
I think more it'll be like the total summation of the flops that you deliver to the model, right, across pre-training, post-training, synthetic data for that pre-training data and post-training data, as well as some of the inference-time compute. That could be more like 1e30, right?
So suppose you really do get to the world where it's worth investing. Okay.
Actually, if you're doing 1e30, is that like a trillion-dollar cluster, a hundred-billion-dollar cluster? I think it'll be like multi-hundred-billion dollars. But then, I truly believe people are going to be able to use their prior-generation clusters alongside their new-generation clusters.
And obviously, you know, smaller batch sizes or whatever, right? Or use that to generate and verify data, all these sorts of things. And then for 2030... Right now, I think five percent of TSMC's N5 is NVIDIA, or whatever percent it is. By 2028, what percentage will it be? Again, this is a question of how scale-pilled you are and how much money will flow into this and how you think progress works. Like, will models continue to get better or does the line slope over? I believe it'll continue to skyrocket in terms of capabilities.
In that world. In that world, why wouldn't, not 5 nanometer, but 2 nanometer, A16, A14, these are the nodes that'll be around in that 2028 timeframe, be used for AI? I could see like 60, 70, 80% of it.
Like, yeah, no problem. Given the fabs that are currently planned and currently being built, is that enough for the 1e30, or will it be? So then the chip code doesn't make any sense, because the chip code stuff is about... it doesn't make any sense.
Cause like the chip code stuff about like, we don't make any sense. So no, I think like the plans of TSMC on two nanometer and such are like quite aggressive for a reason, right? Like to be clear, Apple, which has been TSMC's largest customer, does not need how much two nanometer capacity they're building.
They will not need A16. They will not need A14, right? Like you go down the list.
It's like, Apple doesn't need this shit, right? Um, although they did just hire one of Google's heads of system design for TPUs. So they are going to make an accelerator, an AI accelerator, but that's beside the point. Apple doesn't need this for their business, and they have been 25% or so of TSMC's business for a long time.
And when you, when you just zone in on just the leading edge, they've been like more than half of the newest node or 100% of the newest node almost constantly. That paradigm goes away, right? If you believe in scaling and you believe in like the models get better, the new models will generate, you know, infinite, not infinite, but like amazing productivity gains for the world and so on and so forth.
And if you believe in that world, then TSMC needs to act accordingly, and the amount of silicon that gets delivered needs to be there. So '25, '26, TSMC is definitely there, and then on a longer time scale the industry can be ready for it. But it's going to be a constant game of, you must convince them constantly that they must do this. It's not a simple game where, you know, people work silently and it just happens, right? They have to see the demonstrated growth over and over and over again across the industry.
And markets. Who needs to see it, investors or companies or who? More so, TSMC needs to see NVIDIA volumes continue to grow straight up, right? And Google's volumes continue to grow straight up, and, you know, go down the list.
Chips in the near term, right, next year, for example, are less of a constraint than data centers, right? And likewise for 2026. The question for 27, 28 is like, you know, always when you grow super rapidly, like people want to say that's the one bottleneck because that's the convenient thing to say.

And in 2023, there was a convenient bottleneck: CoWoS, right? The picture has gotten much cloudier, not cloudier, but we can see that, you know, HBM is a limiter too. CoWoS is as well, CoWoS-L especially, right? Data centers, transformers, substations, power generation, batteries, UPSs, CRAHs, water cooling stuff. All of this stuff is now a limitation next year and the year after.
Fabs are in '26, '27, right? Things will get cloudy, because the moment you unlock one, oh, only 10% higher and the next one is the bottleneck. And only 20% higher and the next one is the bottleneck.
So today, data centers are like four to five percent of total US power.
When you think about like as a percentage of U.S. power, that's not that much.
But when you think U.S. power has been like this and now you're like this.
But then you also flip side, you're like, oh, all this coal has been curtailed. All these like, oh, there's so many like different things.
So like power is not that crazy on a like on a national basis, on a localized basis. It is because it's about the delivery of it.
Same with the substation and transformer supply chains, right? These companies have operated in an environment where US power is flat or even slightly down, right? And it's kind of been like that because of efficiency gains, because of, you know. So anyways, there has been a humongous weakening of the industry.
But now all of a sudden, if you tell that industry, your business will triple next year if you can produce more. Oh, but I can only produce 50% more.
Okay, fine. Year after that? Now we can produce three times as much, right? You do that to the industry, and the US industrial base, as well as the Japanese, as well as, you know, all across the world, can get revitalized much faster than people realize, right? I truly believe that people can innovate when given the need to. It's one thing if it's like, this is a shitty industry where my margins are low and we're not really growing and blah, blah, blah, to all of a sudden, oh, I'm in power and this is the sexiest time to be alive, and we're going to do all these different plans and projects and people have all this demand.
And they're like begging me for another percent of efficiency advantage because that gives them another percent to deliver to the chips. Like all these things or 10% or whatever it is, like you see all these things happen and innovation is unlocked.
And, um, you know, you also bring in like AI tools, you bring in like all these things, innovation will be unlocked. Production capacity can grow not overnight, but it will on six months, 18 months, three-year time scales.
It will grow rapidly. Um, and you see the revitalization of these industries.
So, but I think getting people to understand that, getting people to believe... Because, you know, if we pivot: I'm telling you that Sam's going to raise 50 to a hundred billion dollars because he's telling people he's going to raise this much, right? He's literally having discussions with sovereigns, like, you know, Saudi Arabia and the Canadian pension funds, and, you know, not these specific ones necessarily, but the biggest investors in the world. And of course Microsoft as well. He's literally having these discussions, because they're going to drop their next model, or they're going to show it off to people, and raise that money, because this is their plan.
If these sites are already planned and like they've already... The money's not there, right? So how do you plan a site without...
Today, Microsoft is taking on immense credit risk, right? They've signed these deals with all these companies to do this stuff, but Microsoft doesn't have... I mean, they could pay for it, right? Microsoft could pay for it on the current timescale, right? What's their CapEx going from, $50 billion to $80 billion direct CapEx, and then another $20 billion across, like, Oracle, CoreWeave, you know, and then, like, another $10 billion across their data center partners? They can afford that, right, to next year, right? But then, you know, this is because Microsoft truly believes in OpenAI.
They may have doubts like, holy shit, we're taking a lot of credit risk. You know, obviously they have to message Wall Street, all these things, but they are not like, that's like affordable for them because they believe they're a great partner to OpenAI that they'll take on all this credit risk.
Now, obviously OpenAI has to deliver. They have to make the next model, right? That's way better.
And they also have to raise the money. And I think they will, right? I believe, from how amazing 4o is, how small it is relative to 4, the cost of it is so insanely cheap. It's much cheaper than the API prices lead you to believe. And you're like, oh, what if you just make a big one? It's very clear to me what's going to happen on the next jump.
That they can then raise this money and they can raise this capital from the world. This is intense.
It's very intense. John, if he's right, or, I don't know, not him specifically, but in general, if the capabilities are there, the revenue is there...

Revenue doesn't matter.

Revenue matters! Is there any part of that picture that still seems wrong to you, in terms of displacing so much of TSMC's production, wafers, power, and so forth? Does any part of that seem wrong to you?

I can only speak to the semiconductor part, even though I'm not an expert, but I think TSMC can do it. They'll do it. I just wonder, though. He's right in the sense that '24, '25, that's covered. But '26, '27, that's the point where you have to ask: can the semiconductor industry, and the rest of the industry, be convinced that this is where the money is? And that means, is there money? Is there money by '24, '25?

How much revenue do you think the AI industry as a whole needs by '25 in order to keep scaling?

Doesn't matter.

Compared to smartphones.

Compared to smartphones... I know, he says it doesn't matter. I'll get to it, I know.

What is smartphones? Like, Apple's revenue is $200-something billion. So yeah, it needs to be another smartphone-sized opportunity, right? Even the smartphone industry didn't drive this sort of growth. It's kind of crazy, don't you think?

So far, the only thing I can really perceive...

Yeah, girlfriend.

But, you know what I mean. It's not... I want a real one, damn it.

So, like, a few things, right? The return on invested capital for all of the big tech firms is up since 2022. Yeah.
And therefore, it's clear as day that them investing in AI has been fruitful so far, right? Wait, wait. For the big tech firms.
Return on invested capital. Like financially, you look at Meta's, you look at Microsoft's, you look at Amazon's, you look at Google's.
The return on invested capital is up since 2022. On AI in particular? No, just generally as a company.
Now, obviously there's other factors here. Like what is Meta's ad efficiency? How much of that is AI, right? Super messy.
That's a super messy thing. But here's the other thing.
This is Pascal's wager, right? It's a matrix: do you believe in God, yes or no; is God real, yes or no; hell or heaven, right? If you believe in God and God's real, you go to heaven. That's great, that's fine, whatever. If you don't believe in God and God is real, then you're going to hell.

This is the deep technical analysis you subscribe to SemiAnalysis for.

I think this is just me ripping.
Can you imagine what happens to the stock if Satya starts talking about Pascal's wager?

No, no, but this is psychologically what's happening, right? Satya said it on his earnings call: the risk of underinvesting is worse than the risk of overinvesting. He has said this word for word. This is Pascal's wager.
This is, I must believe I am AGI-pilled because if I'm not and my competitor does it, I'm absolutely fucked. Oh, okay.
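Just to make the "wager" framing concrete, here is a minimal sketch of the 2x2 Dylan is describing: invest heavily or not, crossed with whether transformative AI actually arrives. The outcome labels are illustrative paraphrases of the argument above, not quotes from anyone in the conversation.

```python
# A minimal sketch of the Pascal's-wager framing: (invest?) x (is AI real?).
# Outcome descriptions are illustrative paraphrases, not quotes.
outcomes = {
    ("invest", "AI real"):     "win the next platform",
    ("invest", "AI not real"): "wasted capex, stock takes a hit",
    ("skip",   "AI real"):     "competitor wins everything (existential)",
    ("skip",   "AI not real"): "fine, business as usual",
}

for (choice, world), result in outcomes.items():
    print(f"{choice:>6} / {world:<11} -> {result}")
```

The asymmetry between the two bad cells (a recoverable capex write-off versus an existential loss to a competitor) is the whole argument for why "underinvesting is worse than overinvesting."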
Other than Zuck, who seems pretty convinced. Sundar said this on the earnings call.
So Zuck said it. Sundar said it.
Satya's actions on credit risk for Microsoft show it. He's very good at PR and messaging, so he hasn't said it that openly, right? Sam believes it. Dario believes it. You look across these tech titans, they believe it.
And then you look at the capital holders. The UAE believes it.
Saudi believes it. Blackstone believes it.
Like all these major companies and capital holders also believe it because they're putting their money here. But that's like, how can like, it won't last.
It can't last unless there's money coming in somewhere.

Correct, correct. But then the question is... The simple truth is, GPT-4 cost like $500 million to train, I agree, and it has generated billions in recurring revenue. But in the meantime, OpenAI raised $10 billion or $13 billion and is building a model that costs that much, effectively, right? And so obviously they're not making money. So what happens when they do it again? They release and show GPT-5, with whatever capabilities make everyone in the world go, holy fuck. Obviously the revenue takes time to show up after you release the model. You still have only a few billion dollars, or, you know, five billion dollars, of revenue run rate. You raise $50 to $100 billion because everyone sees this and goes, holy fuck, this is going to generate tens of billions of revenue. But that tens of billions takes time to flow in, right? It's not an immediate click.
But the time when Sam can convince people, and not just Sam, the time when people's decisions to spend the money get made, is then, right? So you look at the data centers people are building: you don't have to spend most of the money to build the data center, most of the money is the chips. But you're already committed, like, oh, I'm just going to have so much data center capacity by 2026 or 2027 that I'm never going to need to build a data center again for three, four, five years if AI is not real, right? That's basically what all their actions are.
Or I can spend over $100 billion on chips in 26. And I can spend over $100 billion on chips in 27.
All right? So these are the actions people are taking. And there's a lag between when you raise and spend the money and when the revenue shows up.
So this is like, you don't necessarily need the revenue in 2025 to support this. You don't need the revenue in 2026 to support this.
You need the revenue in '25, '26 to support the $10 billion that OpenAI spent in '23, or Microsoft spent in '23 and early '24, to build the cluster, which they then used to train the model in early-to-mid '24, which they then released at the end of '24, which then starts generating revenue in '25, '26.

I mean, the only thing I can say is that you look at a chart with three points on a graph, GPT-1, 2, 3, and then you're like... And even that graph is like: the investment you have to make in GPT-4 over GPT-3 is 100x. The investment you have to make in GPT-5 over GPT-4 is 100x. So, revenue... currently the ROI could be positive, and this very well could be true, I think it will be true, but the revenue has to increase exponentially, not just, like, 10%.

Yeah, of course, of course.
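Here is a toy model of the timing argument in this exchange: the capex for a generation is committed well before that generation's revenue shows up, so in any given year you are really judging whether the previous generation's spend paid off. The ~$500M GPT-4 training cost and the roughly 100x-per-generation scaling are taken from the conversation; the lag lengths are illustrative assumptions, not reported figures.

```python
# Toy capex-vs-revenue-lag model. Assumptions: ~1.5 years from cluster spend
# to model release, and ~1.5 more years for revenue to ramp. Generation 0 is
# the ~$0.5B GPT-4-class run; each generation costs ~100x the last, per the
# "100x" claim in the discussion above.
GENERATION_COST_MULTIPLIER = 100
TRAIN_AND_RELEASE_LAG_YEARS = 1.5   # assumption
REVENUE_RAMP_LAG_YEARS = 1.5        # assumption

def timeline(gen: int, base_cost_bn: float = 0.5, base_year: float = 2022.5):
    """Return (spend_year, release_year, revenue_year, cost_bn) for generation gen."""
    cost_bn = base_cost_bn * GENERATION_COST_MULTIPLIER ** gen
    spend_year = base_year + gen * (TRAIN_AND_RELEASE_LAG_YEARS + REVENUE_RAMP_LAG_YEARS)
    release_year = spend_year + TRAIN_AND_RELEASE_LAG_YEARS
    revenue_year = release_year + REVENUE_RAMP_LAG_YEARS
    return spend_year, release_year, revenue_year, cost_bn

for gen in range(2):
    spend, release, revenue, cost = timeline(gen)
    print(f"gen {gen}: spend ~{spend:.1f}, release ~{release:.1f}, "
          f"revenue ramps ~{revenue:.1f}, training spend ~${cost:,.1f}B")
```

Even in this crude sketch, the next generation lands around a $50B training spend whose revenue only arrives years later, which is exactly why the fundraising has to happen on belief in the demo rather than on realized revenue.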
I agree with you, but I also agree with Dylan that it can be achieved. ROI... like, semiconductors, TSMC does this. It invests $16 billion and expects an ROI on that, right? I understand that. That's fine. Lag, all that. The thing, though, is that GPT-5 is not here. It's all dependent on GPT-5 being good. If GPT-5 sucks, if GPT-5 doesn't blow people's socks off, this is all void.

What kind of socks are you wearing, bro? Show them. Show them.

AWS.

GPT-5 is not here. It's late.
We don't know. I don't think it's late.

I think it's late.

I want to zoom out and go back to the end-of-the-decade picture again. So if this picture... We've already lost John. We've already accepted GPT-5 will be good.

Hello? But yeah, you got it, yeah.

Bro, like, life is so much more fun when you just, like, are delusionally, like, you know? We're just ripping bong hits, aren't we? When you feel the AGI, you feel your soul.
This is why I don't live in San Francisco.

I have tremendous belief in, like, GPT-5.

Why?

Because of, like, what we've seen already.
I think the public signs all show that this is very much the case, right? What we see beyond that is more questionable, and I'm not sure, because I don't know what I don't know, right? Like, I don't know.
We'll see how much they progress. But if things continue to improve, life continues to radically get reshaped. It's also like, every time you increment up the intelligence, the amount of usage of it grows hugely.
Every time you increment the cost down of that amount of intelligence, the amount of usage increases massively. As you continue to push that curve out, that's what really matters, right? And it doesn't need to be today.
It doesn't need to be a revenue-versus-CapEx comparison at any time in the next few years. It just needs to be: did that last humongous chunk of CapEx make sense for OpenAI, or whoever the leader is, and how does that flow through, right? Or were they able to convince enough people that they can raise this much money? Like, you think Elon's tapped out of his network with raising $6 billion? No.
xAI is going to be able to raise $30 billion plus, right? Easily, right? I think so. You think Sam's tapped out? You think Anthropic's tapped out? Anthropic has barely even diluted the company, relatively, right? There's a lot of capital to be raised, just from, call it FOMO if you want.
But like, during the dot-com bubble, people were spending, the private industry flew through like $150 billion a year. We're nowhere close to that yet, right? We're not even close to the dot-com bubble, right? Why would this bubble not be bigger, right? And if you go back to the prior bubbles, PC bubble, semiconductor bubble, mechatronics bubble throughout the US, each bubble was smaller.
I don't know, you know, you call it a bubble or not, why wouldn't this one be bigger? How many billions of dollars a year is this bubble right now? For private capital? Yeah. It's like 55, 60 billion so far for this year.
It can go much higher, right? And I think it will next year. Okay, so let me think of...
You need another bong rip.

You know, at least, like, finishing up and looping into the next question: prior bubbles also didn't have the most profitable companies humanity has ever created doing the investing, and they were debt-financed. This is not debt-financed yet, right? So that's the last little point on that one. The '90s bubble was very debt-financed, which was disastrous for those companies.

Yeah, sure, but much was built, right? You know, you've got to blow a bubble to get real stuff built.
It is an interesting analogy, where even though the dot-com bubble obviously burst and a lot of companies went bankrupt, they did in fact lay out the infrastructure that enabled the web and everything. So you could imagine something similar in AI: a bunch of companies will go bankrupt, but they will enable the singularity.
During the 1990s, there was an immense amount of money invested in MEMS and optical technologies, because everyone expected the fiber bubble to continue. That all ended in 2003.

2002.

It started in 1994? There hasn't been a revitalization since.

Well, you could see the possibility of one. Lumen, one of the companies doing the fiber build-out for Microsoft, the stock, like, fucking 4x'ed last month or this month.

And how has it done from 2002 to 2024?

Oh, horrible, horrible. But, like, we're gonna rip, baby.

You could freeze AI for another two decades.

Sure, possible. Or people see a badass demo from GPT-5, they release it, and raise a fuckload of money.
It could even be like a Devin-like demo, right, where it's complete bullshit, but it's fine, right? Like, shit... I should edit that out. Edit that out.
No, it's fine, it's fine, dude. I don't really care.
You know, the capital is going to flow in, right? Now, whether it deflates or not is like an irrelevant concern on the near term because you operate in a world where it is happening. And being – what is the Warren Buffett quote, which is like you can be – I don't even know if it's Warren Buffett.
You don't know who's been swimming naked until the tide goes out? No, no, no. The one about how the market can stay irrational longer than you can remain solvent, or something like that.
That's not Buffett. That's not Buffett? Yeah, yeah.
That's John Maynard Keynes. Oh, shit.
That's that old? Yeah. Okay.
Okay. So Keynes said it, right? It's like you can be, yeah.
So this is the world you're operating in. Like, it doesn't matter, right, what exactly happens? There will be ebbs and flows, but that's the world you're operating in.
I reckon that if the AI bubble pops, each one of these CEOs loses their job.

Sure. Or if you don't invest and you lose, it's Pascal's wager, and that's much worse. Across decades, the list of the largest companies at the end of each decade changes a lot. And these companies are the most profitable companies ever. Are they going to let themselves fall off that list, or are they going to go for it? They have one shot, one opportunity, you know, the whole Eminem song, right?

I want to hear the story of how both of you started your businesses, or, like, the thing you're doing now. John, how did it begin? What were you doing?

When you started the textile company?

Oh my god, no way. Please, please.

Wait, is he joking?

I guess if he doesn't want to, we'll talk about it later.

Okay, sure. I mean, the story's famous, I've told it a million times.
It's like Asianometry started off as a tourist channel. Yeah.
So I would go around kind of like, I moved to Taiwan for work and then- Doing what? I was working in cameras. And then like I told- What was the other company you started? It tells too much about me.
Oh, come on.

Like, I worked in cameras. And then basically I went to Japan with my mom, and mom was like, hey, what are you doing in Taiwan? I don't know what you're doing. I was like, all right, mom, I will go back to Taiwan and I'll make stuff for you. And I made videos. I would go to the Chiang Kai-shek park and be like, hi mom, this is the park. Eventually, at some point, you run out of stuff.
But then it's like a pretty smooth transition from that into, like, you know, history of Chinese history, Taiwanese history. And then people started calling me Chinanometry.
I didn't like that. So I moved to other parts of Asia.
And now, like, and then... So what year did you start... Like, what year did people start watching your videos, let's say 1,000 views per video or something?

Oh my gosh, that was... I started the channel in 2017, and it wasn't until like 2018, 2019... Actually, I labored on for like three years. The first three years, no one was watching. I would get like 200 views and be like, oh, this is great.

And were the videos basically like the ones you make now? But sorry, backing up for the audience who might not... I imagine basically everybody knows Asianometry, but if you don't: it's the most popular channel about semiconductors, Asian business history, business history in general, even geopolitics and history and so forth.
And yeah, I mean, honestly, I've done research for different AI guests and different... whatever thing I'm trying to understand: how does hardware work? How does AI work? It's like, this is my...

How does a zipper work? Did you watch that video?

No, I haven't watched that one.

It was, I think, a span of three videos. It was like the Russian oil industry in the 1980s and how it funded everything, and then when it collapsed, they were absolutely fucked.

Yeah.

And then the next video was like the zipper monopoly in Japan. The next video was about ASMR.

Not a monopoly. A strong holding in a mid-tier size. There's, like, the luxury zipper makers. Asianometry is always just kind of stuff I'm interested in. And I'm interested in a whole bunch of different stuff. And the channel... for some reason, people started watching the stuff I do, and I still have no idea why, to be honest. I still feel like a fraud. I sit in front of, like, Dylan, and I feel like a fraud. Legit fraud. Especially when he starts talking about 60,000 wafers and all that, I'm just like, I should know this. But, you know, in the end, I just try my best to bring interesting stories out.

How do you make a video every single week? Because these are, like...
Two a week. You know how long he had a full-time job? Five years, six years.
Oh, sorry, a textile business. And a, yes.
And a full-time job. Wait, no.
Full-time job, textile business, and Asianometry until like for a long, long time. I literally just gave up the textile business this year.
And how are you doing the research and making a video, like, twice a week? I don't know. I do these fucking... I'm just, like, talking. This is all I do, and I do these like once every two weeks.
See, the difference is, Dwarkesh, you go to SF Bay Area parties constantly. And Dwarkesh is, I mean, John is like locked in.
He's like locked in 24-7. He's got like the TSMC work ethic and I've got like the Intel work ethic.
If I don't, I got the Huawei ethic. If I do not finish this video, my family will be pillaged.
He actually gets really stressed about it, I think. Like not doing something like on his schedule.
It's very much like, I do two videos per week. I write them both simultaneously.
And how are you scouting out future topics? Do you just pick up random articles, books, whatever, and if you find it interesting, you make a video about it?

Sometimes what I'll do is I'll google a country, I'll google an industry, I'll google what a country exports now versus what it used to export, and I compare that and say, that's my video. But then sometimes it's also just as simple as, I should do a video about YKK.

Zippers are nice, I should do a video about them.
I do, I do. It literally is.
Do you like keep a list of like, here's the next one, here's the one after that? I have a long list of like ideas. Sometimes it's as vague as like Japanese whiskey.
No idea what Japanese whiskey is about. I heard about it before.
I watched that movie. And then so I was just like, okay, I should do a video about that.
And then eventually, you know, you get to it.

How many research topics do you have on the back burner, basically? Like, you're kind of reading about them constantly, and then in a month or so you'll make a video about it?

I just finished a video about how IBM lost the PC.
Yeah. So right now I'm unstressing about that.
But then I'll kind of move right on. The videos do kind of lead into others. Right now, this one is about the IBM PC, how IBM lost the PC. Next is how Compaq collapsed, how the wave destroyed Compaq. So technically, I'll do that.
At the same time, I'm dual lining a video about qubits. I'm dual lining a video about the directed self-assembly for semiconductor manufacturing, which I'll read a lot of Dylan's work for.
But then, like, a lot of that is kind of just in the back of my head, and I'm producing it as I go.
Dylan, how do you work? How does one go from Reddit shit poster to like running a, like a semiconductor research and consulting firm? Yes. Let's start with the shit posting.
It's a long line, right? So, immigrant parents, grew up in rural Georgia. When I was seven, I begged for an Xbox, and when I was eight, I got it. A 360, right? They had a manufacturing defect called the Red Ring of Death. There were a variety of fixes. I tried them, like putting a wet towel around the Xbox, something called the penny trick. Those all didn't work. My Xbox still didn't work. My cousin was coming the next weekend. And, you know, he's like two years older than me. I look up to him. He's in between my brother and me in age. But I'm like, oh no, no, we're friends. You know, you don't like my brother as much as you like me. My brother's more the jock type. It didn't matter.
So he didn't really care that the Xbox was broken. He's like, you better fix it, though.
Right. Otherwise parents will be pissed.
So I figure out how to fix it online. It ends up, you know, I tried a variety of fixes, ended up shorting the temperature sensor.
Um, and that worked for long enough until Microsoft did the recall, right? But in that, you know, I learned how to do it out of necessity on the forums. I was a nerdy kid, so I liked games, but whatever.
But then like, there was no other outlet once I was like, holy shit, this is Pandora's box. Like what just got opened up? So then I just shit posted on the forums constantly, right? And, you know, for many, many years.
And then I ended up moderating all sorts of subreddits when I was a tween, a teenager. And then, you know, as soon as I started making money... You know, I grew up in a family business but didn't get paid for working.

Right. Of course, like yourself.

Right. But as soon as I started making money, like when I got my internships, I was like 18, 19, right?
Right. I started making money.
I started investing in semiconductors. Right.
Like I was like, of course, this is shit I like, right? You know, everything from like, and by the way, like the whole way through, like as technology progressed, especially mobile, right? It goes from like very shitty chips and phones to like very advanced. Every generation, they'd add something and I'd like read every comment.
I'd read every technical post about it and also all the history around that technology and then like, like, you know, who's in the supply chain? And it just kept building and building and building. Went to college, did data science-y type stuff.
Went to work on like hurricane, earthquake, wildfire simulation and stuff for a financial company. But before that, like, but during college, I was still like, I wasn't posting on the internet as much.
I was still posting some, but I was following the stocks and all these sorts of things, the supply chain all the way from the tool and equipment companies. And the reason I liked those is because, oh, this technology, oh, it's made by them, you know?

Did you have, like, friends in person who were into this shit, or was it just online?

I made friends on the internet, right?

That's dangerous.

I've only ever had literally one bad experience, and that was just because he was drugged out, right?

One bad experience online, or, like, meeting someone from the internet in person?

Everyone else has been genuine. You have enough filtering before that point. You're like, you know, even if they're, like, hyper mega autistic, it's cool, right? Like, I am too, right? No, I'm just kidding. But, you know, you go through the layers and you look at the economic angle. You look at the technical angle.
You read a bunch of books, just out of... you can just buy engineering textbooks, right? And read them. What's stopping you? And if you bang your head against the wall, you learn it.

And while you were doing this, did you expect to work on this at some point, or was it just pure interest?

No, it was an obsessive hobby of many years, and it pivoted all around, right? At some point I really liked gaming, and then I moved into, I really liked phones, and rooting them and underclocking them, and the chips there, and screens and cameras. And then back to gaming, and then to data center stuff.
Like, cause that was like where the most advanced stuff was happening. So it was like, I liked all sorts of like telecom stuff for a little bit.
Like it was like, it like bounced all around, but generally in like computing hardware, right? And I did data science, you know, you could, I said I did AI when I interviewed, but like, you know, it was like bullshit, multivariable regression, whatever, right? It was simulations of hurricanes, earthquakes, wildfire for like financial reasons right? Anyways, I moved up to like, you know, I had a job for three years after college and I was posting and like whatever. I had a blog, anonymous blog for a long time.
I'd even made like some YouTube videos and stuff. Most of that stuff is scrubbed off the internet, including Internet Archive because I asked them to remove it.
But in 2020, I quit my job and started shitposting more seriously on the internet. I moved out of my apartment and started traveling through the US, and I went to all the national parks, in my truck slash tent slash, you know, I also stayed in hotels and motels like three, four days a week.
But I'd like I started posting more frequently on the internet. Um, and I'd already had like some small consulting arrangements in the past, uh, but it really started to pick up in mid 2020, like consulting arrangements from the internet, from my persona.
Like what kinds of people, investors, hardware companies?

It was people who weren't in hardware that wanted to know about hardware. It would be some investors, right, a couple of VCs, some public market folks. There were times when companies would ask about something three layers up the stack from them, because they saw me write some random post, and it was like, hey, can we... and blah, blah, blah. There was all sorts of random stuff, really small money. And then in 2020 it really picked up, and I was like, why don't I just arbitrarily make the price way higher? And it worked. And then I started posting.
I made a newsletter as well. And I kept posting.
Quality kept getting better. Right.
Because people read it, and they're like, this is fucking retarded, or, here's what's actually right, you know, over more than a decade, right? And then in 2021, towards the end, I made a paid post because someone didn't pay for a report or whatever, right? And that ended up doing... I went to sleep that night.
It was about, it was about photoresist and like the developments in that industry, which is the stuff you put on top of the wafer before you put in the litho tool, lithography tool. Did great, right? Like I woke up the next day and I had like 40 paid subscriptions.
I was like, what? Okay, let's keep going, right? And let's post more paid sort of like partially free, partially paid, did like all sorts of stuff on like advanced packaging and chips and data center stuff and like AI chips, like all sorts of stuff, right? That I like was interested in and thought was interesting. And like I always bridged economically because I read all the company's earnings for like, you know, since I was 18, I'm 28 now, right? You know, all the way through to like, you know, all the technical stuff that I could.
In 2022, I also started to just go to every conference I could, right? So I go to like 40 conferences a year. Not trade-show-type conferences, but technical conferences: chip architecture, photoresist, AI, NeurIPS, right? ICML.

How many conferences do you go to a year?

Like 40.
So you like live at conferences. Yes.
Yeah. I mean, I've been a digital nomad since 2020, and I've basically stopped and I moved to SF now, right? But like kind of, kind of, not really.
You can't say that. The California government.
No, no, I don't live in SF, come on. But I basically do now, right? Internal Revenue Service.
Do not joke about this, guys. Like, do not seriously joke about this.
They're going to send you a clip of this podcast. Be like, 40% please.
I am in San Francisco, like, sub four months a year contiguously. Exactly 100 and whatever days.
Exactly 179 days. Let's go, right? You know, over the full course of the year. But no, like, you know, go to every conference, make connections at all these very technical things: international electron devices, lithography and advanced patterning, very-large-scale integration, you know, the circuits conferences. Each is just a single layer of the stack.
It's so siloed. There's tens of millions of people that work in this industry, but if you go to every single one, you try and understand the presentations, you do the required reading, you look at the economics of it.
You're just curious and want to learn. You can start to build up more and more, and the content got better. And what I followed got better. And then I started hiring people in early 2022 as well, or it might have been mid-2022, and got people in different layers of the stack.
But now, today, you fast forward: almost every hyperscaler is a customer, not for the newsletter, but for data we sell, right? Most major semiconductor companies, many investors, all these people are customers of the data and stuff we sell. And the company has people all the way from, like, ex-Cymer, ex-ASML, all the way to ex-Microsoft and an AI company, right? And then, through the stratification, there are now 14 people at the company, all across the US, Japan, Taiwan, Singapore, France, the US of course, right? All over the world, and across many ranges of... and ex-hedge-fund folks as well, right? So you kind of have this amalgamation of tech and finance expertise. And we just do the best work there, I think, right?
So you're, like, a monstrosity?

An unholy concoction. So, like, what we sell, you know: we have data, analysis, consulting, etc.
For anyone who like really wants to like get deeper into this, right? Like we can talk about like, oh, people are building big data centers. But like how many chips is being made in every quarter of what kind for each company? What are the subcomponents of these chips? What are the subcomponents of the servers? We try and track all of that.
Follow every server manufacturer, every component manufacturer, every cable manufacturer, just all the way down the stack, tool manufacturer. And know how much is being sold where and how and where things are and project out, right?

All the way out to like, hey, where's every single data center?

What is the pace that it's being built out?

This is like the sort of data we want to have and sell.

And, you know, it's the validation is that hyperscalers purchase it and they like it

a lot, right?

And like AI companies do and like semiconductor companies do.

So I think that's how it got to where it is: just try and do the best, right? And try and be the best.

If you were an entrepreneur who wanted to get involved in the hardware chain somewhere, if you could start a business today somewhere in the stack, what would you pick?

John, tell them about your textile business.

I think I'd work in memory.
Something in memory. Because I think if this all plays out, you have to hold immense amounts of memory. Immense amounts of memory. And I think memory is already tapped out technologically. HBM exists because of limitations in DRAM. I said that correctly, right? I think, fundamentally, we've forgotten about it because it's a commodity, but we shouldn't. I think a breakthrough in memory could change the world in that scenario.
I think the context here is that Moore's Law was predicted in 1965. Intel was founded in 68 and released their first memory chips in 69 and 70.

And so Moore's Law, a lot of it was about memory. And the memory industry followed Moore's Law up until 2012, where it stopped, right? And it's been very incremental gains since then, whereas logic has continued. And people are like, oh, it's dying, it's slowing down, but at least there's still a little bit coming, right? You know, still more than 10%, 15% a year CAGR of growth in density slash cost improvement.
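To illustrate how much that divergence compounds, here is a minimal sketch. The roughly 15%-per-year figure for logic is from the conversation; the 3%-per-year figure for memory is an assumption chosen only to illustrate "very incremental gains since 2012," not a measured number.

```python
# Compounding the logic-vs-memory divergence described above.
YEARS = 2024 - 2012

def compounded(annual_rate: float, years: int = YEARS) -> float:
    """Total improvement factor after compounding annual_rate for `years` years."""
    return (1 + annual_rate) ** years

logic_gain  = compounded(0.15)   # ~15%/yr density/cost improvement, per the discussion
memory_gain = compounded(0.03)   # ~3%/yr, an illustrative assumption

print(f"Logic improvement since 2012:            ~{logic_gain:.1f}x")
print(f"Memory improvement at an assumed 3%/yr:  ~{memory_gain:.1f}x")
print(f"Relative gap opened up:                  ~{logic_gain / memory_gain:.1f}x")
```

Under those assumptions the gap is a factor of several over a decade, which is the sense in which memory has fallen off the curve while logic kept going.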
Memory has been, literally since 2012, really bad. So when you think about the cost of memory, it's been considered a commodity, but memory integration with accelerators, this is something... I don't know if you can be an entrepreneur here, though. That's the real challenge: you have to manufacture at some absurdly large scale, or design something custom in an industry that doesn't let you make custom memory devices, or use materials that don't work that way. So there's a lot of work there. So I don't necessarily agree with you, but I do agree.
It's one of the most important things for people to invest in. You know, I think it's really about where you're good at, where you can vibe, where you can enjoy your work and be productive in society, right? Because there are a thousand different layers of the abstraction stack. Where can you make it more efficient? Where can you utilize AI to build better, make everything in the world more efficient, produce more bounty, and iterate that feedback loop, right? And there is more opportunity today than at any other time in human history, in my view. So just go out there and try, right? What engages you? Because if you're interested in it, you'll work harder.
Right. If you have a passion for copper wires, I promise to God, if you make the best copper wires, you'll make a shitload of money. And if you have a passion for, like, B2B SaaS, I promise to God, you'll make fuckloads of money. I don't like B2B SaaS, but whatever. Whatever you have a passion for, just work your ass off, try and innovate, bring AI into it. And try and use AI yourself to make yourself more efficient and make everything more efficient, and I promise you will be successful, right? I think that's really the view. It's not that there's one specific spot, because every layer of the supply chain has... you go to the conference there, you talk to the experts there.
It's like, dude, this is the stuff that's breaking and we could innovate in this way. Or, these five abstraction layers, we could innovate this way. Yeah, do it. There are so many layers where we're not at the Pareto optimum, right? There's so much more to go in terms of innovation and efficiency.

All right, I think that's a great place to close. Dylan, John, thank you so much for coming on the podcast. I'll just give people the reminder: Dylan Patel, semianalysis.com. That's where you can find the technical breakdowns that we've been discussing today. Asianometry, YouTube channel. Everybody will already be aware of Asianometry, but anyways. Thanks so much for doing this. It was a lot of fun.

Thank you.

Yeah, thank you.