Dylan Patel & Jon (Asianometry) – How the Semiconductor Industry Actually Works
A bonanza on the semiconductor industry and hardware scaling to AGI by the end of the decade.
Dylan Patel runs Semianalysis, the leading publication and research firm on AI hardware. Jon Y runs Asianometry, the world’s best YouTube channel on semiconductors and business history.
* What Xi would do if he became scaling pilled
* $ 1T+ in datacenter buildout by end of decade
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Sponsors:
* Jane Street is looking to hire their next generation of leaders. Their deep learning team is looking for FPGA programmers, CUDA programmers, and ML researchers. To learn more about their full time roles, internship, tech podcast, and upcoming Kaggle competition, go here.
* This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.
If you’re interested in advertising on the podcast, check out this page.
Timestamps
00:08:25 – How semiconductors get better
00:11:16 – China can centralize compute
00:18:50 – Export controls & sanctions
00:32:51 – Huawei's intense culture
00:38:51 – Why the semiconductor industry is so stratified
00:40:58 – N2 should not exist
00:45:53 – Taiwan invasion hypothetical
00:49:21 – Mind-boggling complexity of semiconductors
00:59:13 – Chip architecture design
01:04:36 – Architectures lead to different AI models? China vs. US
01:10:12 – Being head of compute at an AI lab
01:16:24 – Scaling costs and power demand
01:37:05 – Are we financing an AI bubble?
01:50:20 – Starting Asianometry and SemiAnalysis
02:06:10 – Opportunities in the semiconductor stack
Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Press play and read along
Transcript
Speaker 1 Today I'm chatting with Dylan Patel who runs Semi-Analysis and John who runs the Asianometry YouTube channel.
Speaker 2
Does he have a last name? No, I do not. No, I'm just kidding.
John Y. What's right, is it? I'm John Y.
Wait, why is it only one letter?
Speaker 2 Because why is the best letter?
Speaker 2 Why is your face covered?
Speaker 2 Why not?
Speaker 1 No, seriously, why is it covered?
Speaker 2 Because I'm afraid of looking myself get older and fatter over the years.
Speaker 1 But so seriously, it's like anonymity, right?
Speaker 2
Anonymity. Okay.
Yeah.
Speaker 1 By the way, so did you know what Dylan's middle name is?
Speaker 2
Actually, no. No, he told me.
What's my father's name? I'm not going to say it, but I remember.
Speaker 1 You could say it.
Speaker 2
It's fine. Sanjay? Yes.
What's his middle name? Sanjay. That's right.
Wow.
Speaker 1
So I'm Dwarkash Sanjay Patel. He's Dylan Sanjay Patel.
It's like literally my white name.
Speaker 2 Wow.
Speaker 2
It's unfortunate my parents decided between my older brother and me to give me a white name. And It could have been Dwark Ashton.
Like, you know how amazing it would have been if we had the same name?
Speaker 2 Like butterfly effect and all that. We probably would have all wouldn't have turned out the same way.
Speaker 1 Maybe it would have been even closer if we would have met each other sooner, you know? Who else is named Dwark Esther on Divotel in the world?
Speaker 2 Yeah, yeah, yeah, yeah.
Speaker 1 All right, first question. If you're a Xi Jinping and you're scaling pilled, what is it that you do?
Speaker 2
Don't answer that question, John. That's bad for AI safety.
I would basically be contacting every foreigner.
Speaker 2
I would be contacting every Chinese national with family back home and saying, I want information. I want to know your recipes.
I want to know, I want to talk about what you're talking about.
Speaker 2 Honeypotting open AI?
Speaker 1 I would basically, like,
Speaker 2
this is totally off-cycle, but like, this is off the reservation. But, like, I was doing a video about Yugoslavia's nuclear program.
What happens?
Speaker 1 Nuclear weapons program.
Speaker 2
Started absolutely nothing. One guy from Paris.
Uh-huh. And then one guy in Paris, he showed up and he was like, and then he had, who knows what he did.
Speaker 2 He knows a little bit about making atomic nuclear weapons, but like, he was like, okay, well, do I need help? And then the state secret police is like, I will get you everything.
Speaker 2
And then, like, I shouldn't do that. I must get you everything.
And for like a span of four years,
Speaker 2 they basically
Speaker 2 drew up a list. What do you need? What do you want? What are you going to do?
Speaker 2 What is it going to be for? And they just, state police just got everything.
Speaker 2 If I was running a country and I needed catch up on that, that's the sort of thing that I would be doing.
Speaker 1 So, okay, let's talk about the espionage. So
Speaker 1 what is the most valuable piece of if you could have this blueprint, like this, this
Speaker 1 like one megabyte of information, do you want it from TSMC? Do you want it from NVIDIA? Do you want it from OpenAI?
Speaker 1 What is the first thing you would try to steal?
Speaker 2 I mean, I guess you have to stack every layer, right? And I think
Speaker 2 the beautiful thing about AI is because it's growing so freaking fast, every layer is being stressed to some incredible degree.
Speaker 2 Of course, China has been hacking ASML for over five years and ASML is kind of like, oh, that's fine. The Dutch government's really pissed off, but it's fine.
Speaker 2 I think they already have those files in my view.
Speaker 2 It's just a very difficult thing to build.
Speaker 2 I think the same applies for like fab recipes, right? They can poach Taiwanese nationals very like.
Speaker 2 Not that difficult, right? Because TSMC employees do not make absurd amounts of money.
Speaker 2 You can just poach them and give them a much better life. And they have, right? A lot of SMICS employees are TSMC,
Speaker 2 you know, Taiwanese nationals, right? A lot of the really good ones, high-up ones, especially, right?
Speaker 2 And then you go up like the next layers of the stack and it's like,
Speaker 2 I think, yeah, of course, there's tons of model secrets.
Speaker 2 But then like, you know, how many of those model secrets do you not already have and you just haven't deployed or implemented, you know, organized, right?
Speaker 2 That's the one thing I would say is like China just hasn't, they clearly are still not scale-pilled in my view.
Speaker 1 So these people are,
Speaker 1 I don't know, if you could like hire them, it would probably be worth a lot to you, right?
Speaker 1 Because you're building a fab that's worth tens of billions of dollars and this talent is like they know a lot of
Speaker 1 um
Speaker 1 how often do they get poached do they get poached by like foreign adversaries or do they just get poached by other companies within the same industry but in the same country um and then yeah like why doesn't that like sort of drive up their wages i think it's because it's it's very compartmentalized and i think like back in the 2000s prior to ts before smick got big
Speaker 2 it was actually much more kind of open more flat i think after that there was like after the Among Song and after all the Samsung issues and after all the SMIC's rise, when you literally saw.
Speaker 2 I think you should tell that story, actually.
Speaker 2
The TSMC guy that went to Samsung and SMIC and all that. I think you should tell that story.
There's two stories.
Speaker 2 There's a guy, he ran a semiconductor company in Taiwan called Worldwide Semiconductor, and this guy, Richard Chang, was very religious.
Speaker 2 I mean, all the TSMC people are pretty religious, but like he in particularly was very fervent and he wanted to bring religion to China.
Speaker 2 So after he sold his company to TSMC, huge coup for TSMC, he worked there for about eight or nine months and he was like, all right, I'll go to China.
Speaker 2 Because back then, there was the relations between China and
Speaker 2
Taiwan were much more different. And so he goes over there.
Shanghai says, we'll give you a bunch of money. And then Richard Chang basically recruits half of like a whole bunch.
Speaker 2 It's like a conga line of like Taiwanese collide. Just like they get on the plane, they're flying over.
Speaker 2 And generally, that's actually a lot of like a lot of like acceleration points within China's semiconductor industry. It's from talent flowing from Taiwan.
Speaker 2
And then the second thing was like Liam Mong Song. Liuang Mong Song was a nut.
And I've met him. I've not met him.
I've met people who work with him, and they say he is a nut.
Speaker 2
He is probably on the spectrum. And he does not care about people.
He does not care about business. He does not care about anything.
He wants to take it to the limit.
Speaker 2
The only thing, that's the only thing he cares about. He worked from TSMC, literal genius, 300 patents or whatever, 285, goes, works all the way to like the top, top tier.
And then one day
Speaker 2 he decides he loses out on some sort of power game within tsmc and gets demoted and he was like head of r d right or something he was like one of the top r d he was like second or third place for the head of r d position correct more of the head r d position he's like i can't deal with this and he goes to samsung and he steals a whole bunch of talent from tsmc literally again congaline goes and just emails people say we will pay At some point, some of these people were getting paid more than the Samsung chairman, which not really comparable, but like, you know what I mean.
Speaker 2
So they're the Samsung chairman usually like part of the family that owns Samsung? Correct them. Okay, yeah, so it's like kind of irrelevant.
So
Speaker 2
he goes over there and he's like, well, I'm like, we will make Samsung into this monster. We forget everything.
Forget all of the stuff you've been trying to do, like incremental.
Speaker 2
Toss that out. We are going to the leading edge and that is it.
They go to the leading edge. The guys like...
They win Apple's business. They win Apple's business.
They win it back from TSMC.
Speaker 2 Or did they win it back from TSMC? They had a portion portion of the. They had a big portion of it.
Speaker 2 And then TSMC Morris Tang is like, at this time, was running the company and he's like, I'm not letting this happen. Because that guy,
Speaker 2 toxic to work for as well, but also goddamn brilliant and also like very good at motivating people. He's like, we will work literally day or night.
Speaker 2 Sets up what is called the Nightingale Army, where you have, they split a bunch of people and they say,
Speaker 2
you are working R ⁇ D night shift. There is no rest at the TSMC fab.
You will go in.
Speaker 2 As you go in, there'll be a day shift going out. They called it the, it's like you're burning your liver.
Speaker 2 Because in Taiwan, they say, like, if you get old, like, as you work, you're sacrificing your liver. They call it the liver buster.
Speaker 2 So they basically did this nightingale armory for like a year, two years.
Speaker 2 They finish FinFet.
Speaker 2 They basically just blow away Samsung. And at the same time, they sue Naom Mong Song directly for stealing trade secrets.
Speaker 2 Samsung basically separates from Neal Mong Sung and Neo Mung Sung goes to Smick. And so Samsung, like at one point, was better than TSMC.
Speaker 2
And then, yeah, he goes to Smick, and Smick is now better than, well, or not better, but they caught up rapidly as well after. Very rapid.
That guy's a genius. That guy's a genius.
I mean,
Speaker 2 I don't even know what to say about him. He's like 78, and he's like beyond brilliant, does not care about people.
Speaker 1 Like, yeah, what is research to make the next process node look like?
Speaker 1 Is it just a matter of like 100 researchers go in, they do like the next N plus one, then the next morning the next 100 researchers go in?
Speaker 2
It's experiments. They have a recipe and what they do.
Every recipe, a TSMC recipe is the culmination of a long, long years of like research, right? It's highly secret.
Speaker 2 And the idea is that what you're going to do is you go, you look at one particular part of it and you say, experiment, run experiment. Is it better? Is it not? Is it better or not?
Speaker 2 Kind of a thing like that. You're basically,
Speaker 2 it's a multivariable problem that each, every single tool, sequentially you're processing the whole thing. You, you turn up knobs up and down on every single tool.
Speaker 2 You can increase the pressure on this one specific deposition tool.
Speaker 1 And what are you trying to measure? Is it like, does it increase yield?
Speaker 2 Or like, what is it that it's not, it's yield, it's performance, it's power. It's not just a one, it's not just better or worse, right? It's a multivariable search space.
Speaker 1 And what do these people know such that they can do this? Is it they understand the chemistry and physics?
Speaker 2
So it's a lot of intuition, but yeah, it's it's PhDs in chemistry, PhDs in in physics, PhDs in EE. Brilliant geniuses, people.
And they all just.
Speaker 2 And they don't even know about like the end chip a lot of times. It's like, oh, I am an etch engineer, and all I focus on is how hydrogen fluoride etches this, right? And that's all I know.
Speaker 2 And like, if I do it at different pressures, if I do it at different temperatures, if I do it with a slightly different recipe of chemicals, it changes everything.
Speaker 2 I remember, like, someone told me this when I was speaking, like, how did America lose the ability to do this sort of thing, like etch and hydrofluoric and acid, all of that?
Speaker 2 I told them, like, he told me basically was like, it's, it's very apprentice, master apprentice. Like, you know, in Star Wars, Sith, there's only one, right? Master apprentice, master apprentice.
Speaker 2
It used to be that there is a master, there's an apprentice, and they pass on this secret knowledge. This guy knows nothing but etch, nothing but etch.
Over time, the apprentices stop coming.
Speaker 2
And then in the end, the apprentices move to Taiwan. And that's the same way it's still run.
Like you have the NTU and NTHU, Tsinghua University, National Tsinghua University.
Speaker 2 There's a bunch of masters, they teach apprentices, and they just pass this secret, sacred knowledge down.
Speaker 1 Who are the most AGI-pilled people in the supply chain? Is there anybody who's like the hardest thing?
Speaker 2
I gotta have my phone call with Colette right now. Okay, go for it.
Sorry, sorry.
Speaker 1 Can we mention that the podcast and NVIDIA has got is calling Dylan for the for to update him on the earnings call?
Speaker 2 Well, it's not this, not exactly, but go for it, go for it.
Speaker 1 Yeah, so Dylan is back from his call with Jensen Huang.
Speaker 2 Just not with Jensen, Jesus.
Speaker 1 What did they tell you, huh? What did they tell you about next year's earnings?
Speaker 2 No, it was just color around like a hopper blackwell and like margins. It's like quite boring stuff
Speaker 2 for most people. I think it's interesting, though.
Speaker 1 I guess we could start talking about NVIDIA. But you know what?
Speaker 2 But before we do, I think we should go back to China. There's like a lot of points there.
Speaker 1 All right, we covered the trips themselves. How do they get the 10 gigawatt data center up? What else do they need?
Speaker 2 I think there is a true
Speaker 2 question of how decentralized do you go versus centralized, right? And if you look in the US, right, right, as far as like labs and such,
Speaker 2 the, you know, OpenAI, XAI, you know, Anthropic, and then Microsoft having their own effort, Anthropic having their own efforts despite having their partner, and then Meta.
Speaker 2 And, you know, you go down the list, it's like there's quite a decentralization of, and then all the startups, like interesting startups that are out there doing stuff, there's quite a decentralization of efforts.
Speaker 2 Today in China, it is still quite decentralized, right? It's not like Alibaba, Baidu, you are the champions, right? You have like DeepSeek, seek, like, who the hell are you?
Speaker 2 Does government even support you like doing amazing stuff, right? If you are Zijingping and scale-pilled,
Speaker 2 you must now centralize the compute resources, right? Because you have, you have sanctions on how many NVIDIA GPUs you can get in. Now, they're still north of a million a year, right?
Speaker 2 Even post October last year's sanctions, they still have more than a million H20s and other hopper GPUs getting in through, you know, other means, but legally like the H20s.
Speaker 2 And then on top of that, you have have
Speaker 2 your domestic chips, right? But that's less than a million chips. So then when you look at it, it's like, oh, well, we're still talking about a million chips.
Speaker 2 The scale of data centers people are training on today slash over the next six months is 100,000 GPUs, right? OpenAI, XAI, right? These are like quite well documented and others.
Speaker 2 But in China, they have no individual system of that scale yet, right? So then the question is like, how do we get there?
Speaker 2 No company has had the centralization push to have a cluster that large and train on it yet, at least publicly like well known.
Speaker 2 And the best models seem to be from a company that has got like 10,000 GPUs, right? Or 16,000 GPUs, right? So it's not, it's not quite as centralized as the US companies are.
Speaker 2 And the US companies are quite decentralized. If you're Xi Jinping and you're scale-pilled, do you just say
Speaker 2 XYZ company is now in charge and every GPU goes to one place? And then you don't have the same issues with the US, right?
Speaker 2 In the US, we have a big problem with being able to build big enough data centers, being able to build substations and transformers and all this that are large enough in a dense area.
Speaker 2 China has no issue with that at all because their supply chain adds like as much power as like half of Europe every year, right? Like or some absurd statistics, right?
Speaker 2 So they're building transformer substations, they're building new power plants constantly.
Speaker 2 So they have no problem with like getting power density. And you go look at like Bitcoin mining, right?
Speaker 2 Around the Three Gorges Dam, at one point at least, there was like 10 gigawatts of like Bitcoin mining estimated, right?
Speaker 2 Um, which you know, we're talking about, you know, gigawatt data centers are coming over, you know, 26, 27 in the or 26 in the US or 27, right?
Speaker 2 You know, sort of this is an absurd scale relatively, right?
Speaker 2 We don't have gigawatt data centers, you know, ready, but like China could just build it in six months, I think, around the Three Gorges Dam or many other places, right?
Speaker 2 Because they have they have the ability to do the substations, they have the power generation capabilities, everything can be like done like a flip of a switch but they haven't done it yet and then they can centralize the chips like crazy right now oh oh million chips that nvidia is shipping in q3 and q4 the h20 um let's just put them all in this one data center they just haven't had that centralization effort well you can argue that like the more you centralize it the more you start building this monstrous thing within the industry you start getting attention to it and then suddenly you know lo and behold you have a little bit of a little worm in there suddenly where you're doing your big training run.
Speaker 2
Oh, there's GPU off. Oh, there's GPU.
Oh, no. Oh, no.
Oh, no.
Speaker 2 I don't know if it's like...
Speaker 1 No, he's got a Chinese accent, by the way.
Speaker 2
Just to be clear, John is East Asian. He's Chinese.
I am of East Asian descent. Half Taiwanese, half Chinese.
Right. That is right.
Speaker 2 But like, I think, I think, I don't know if that's like as simple as that to like,
Speaker 2 because training systems are like fire, like they're, they're water, is it water-gated? Firewalled? What is it called? Not firewalled. I don't know.
Speaker 2 There's a word for that where they're not like what air gapped air gapped i think you're just like
Speaker 1 you're going through like the all the like the four elements
Speaker 2 they're earth protected water fire if you're
Speaker 2 scale pilled
Speaker 2 you're kind of like the four
Speaker 2 airbenders fuck the firebenders you know we got the avatar right like you have to build the avatar okay um i think i think that's possible um the question is like does that slow down your research do you like crush like cracked people like deep seek uh who are like clearly like not being you know influenced by the government and put some like idiot like you know idiot bureaucrat at the top suddenly he's all thinking about like you know all these politics and he's trying to deal with all these different things suddenly you have a single point of failure and that's a that's that's bad but i mean in the in on the flip side right like there is like obviously immense gains from being centralized because of the scaling loss right and then and then the flip side is compute efficiency is obviously going to be hurt because
Speaker 2 you can't experiment and like have different people lead and try their efforts as much if you're less centralized, or more, more centralized. So it's like there is a balancing act there.
Speaker 1 The fact that they can centralize, I didn't think about this, but that is actually like, because, you know, even if America as a whole is getting millions of GPUs a year, the fact that any one company is only getting hundreds of thousands or less means that there's no one person who can do a single trading run as big in America as if like China as a whole decides to to do one together.
Speaker 1 The 10 gigawatts you mentioned near the three gorges dam, is it like literally like,
Speaker 1 how widespread is it? Like a state? Is it like one wire? Like how?
Speaker 2 I think like between not just the dam itself, but like also all of the coal, there's some nuclear reactors there, I believe, as well.
Speaker 2 Between all of and like renewables like solar and wind, between all of that in that region, there is an absurd amount of concentrated power that could be built.
Speaker 2 I don't think it's like, I'm not saying it's like one button, but it's like, hey, within X mile radius, right? Yeah. Is more of like the correct way to frame it.
Speaker 2 And that's how the, that's how the labs are also framing it, right? Like
Speaker 2 in the U.S.
Speaker 1 If they started right now, like how long does it take to build the biggest the biggest AI data center that in the world?
Speaker 2 You know, actually, I think, I think, um, the other thing is like, could we notice it?
Speaker 2 I don't think so because the amount of like factories that are being spun up, the amount of other construction, manufacturing, et cetera, that's being built, a gigawatt is actually like a drop in the bucket, right?
Speaker 2
Like a gigawatt is not a lot of power. 10 gigawatts is not an absurd amount of power, right? It's okay.
Yes, it's like hundreds of thousands of homes, right?
Speaker 2 Yeah, millions of people, but it's like you got 1.4 billion people.
Speaker 2 You got like most of the world's like extremely energy intensive like refining and like, you know, rare earth refining and all these manufacturing industries are here.
Speaker 2 It would be very easy to hide it. It would be very easy to just like shut down.
Speaker 2 Like, I think the largest aluminum mill in the world is there and it's like it's like north of five gigawatts alone it's like oh what what could we tell if they stopped making aluminum there and instead started like making you know ais there or making ai there like i don't know if we could tell right because they could also just easily spawn like 10 other aluminum mills make up for the production and be fine right so like there's many ways for them to hide compute as well to the extent that you could just take out a five gigawatt aluminum refining center and like build a giant data center there, then I guess the way to control Chinese AI has to be the chips because like, everything else, they have so, like, uh, how do you like
Speaker 1 just like walk me through how many chips do they have now? How many will they have in the future? What will the like, how many is that in comparison to US and the rest of the world?
Speaker 2 Yeah, so so in the world, I mean, the world we live in is they are not restricted at all on like the physical infrastructure side of things in terms of power, data centers, et cetera, because their supply chain is built for that, right?
Speaker 2
And it's pretty easy to pivot that. Whereas the US adds so little power each year, and Europe loses power every year.
The Western sort of industry for power is non-existent in comparison, right?
Speaker 2 But on the flip side is quote-unquote Western, including Taiwan,
Speaker 2 chip manufacturing is way, way, way, way, way larger than China's, especially on leading edge where China theoretically has you know, depending on the way you look at it, either zero or a very small percentage share, right?
Speaker 2 And so there,
Speaker 2 you have, you have wafer, you have equipment, wafer manufacturing, and then you have advanced packaging capacity, right? And where the U.S. can control China, right?
Speaker 2 So, advanced packaging capacity is kind of a shot because the vast majority of the largest advanced packaging company in the world was Hong Kong headquartered.
Speaker 2 They just moved to Singapore, but like that's effectively
Speaker 2 in a realm where the US can't sanction it, right?
Speaker 2 A majority of these other companies are in similar places, right? So, advanced packaging capacity is very hard, right?
Speaker 2 Advanced packaging is useful for stacking memory, stacking chips on co-ops, right? Things like that. Then, the step down is wafer fabrication.
Speaker 2 There is immense capability to restrict China there.
Speaker 2 And despite the US making some sanctions, China in the most recent quarters was like 48% of ASML's revenue, right? So, you know, and like 45% of like applied materials. And you just go down the list.
Speaker 2 So it's like, obviously it's not being controlled that effectively, but it could be on the equipment side of things. The chip side of things is actually being controlled.
Speaker 2 quite effectively, I think, right? Like, yes, there is like shipping GPUs through Singapore and Malaysia and other countries in Asia to China. But the amount you can smuggle is quite small.
Speaker 2 And then the sanctions have limited the chip performance to a point where it's like, this is actually kind of fair, but there is a problem with how everything is restricted, right?
Speaker 2 Because you want to be able to restrict China from building their own domestic chip manufacturing industry that is better than what we ship them.
Speaker 2 You want to prevent them from having chips that are better than what we have.
Speaker 2 And then you want to prevent them from having AIs better. The ultimate goal being,
Speaker 2 if you read the restrictions, like very clear, it's about AI.
Speaker 2 Even in 2022, which is amazing, like at least the Commerce Department was kind of AI-pilled. It was like, is you want to restrict them from having AIs worse than us, right?
Speaker 2 So starting on the right end, it's like, okay, well, if you want to restrict them from having better AIs than us, you have to restrict chips, okay? If you want to restrict them from having chips,
Speaker 2 you have to let them have at least some level of chip that the West, also that is better than what they can build internally.
Speaker 2 But currently, the restrictions are flipped the other way, right? They can build better chips in China than we restrict them in terms of chips that NVIDIA or AMD or Intel can sell to China.
Speaker 2 And so there's sort of a problem there in terms of the equipment that is shipped can be used to build chips that are better than what the Western companies can actually ship them.
Speaker 1 John, Dylan seems to think the expert controls are kind of a failure.
Speaker 1 Do you agree with him?
Speaker 2
That is a very interesting question because I think it's like. Why, thank you.
Like, what do you.
Speaker 2 Dorcus, you're so good. Yeah, Dorcus, you're the best.
Speaker 2 I think failure is a tough word to say because I think it's like, what are we trying to achieve, right? Like, and say, they're talking about AI, right? Yeah. When you do sanctions like that,
Speaker 2 it's you need like such a deep knowledge of the technologies. You know, just taking lithography, right?
Speaker 2
If your goal is to restrict China from building chips and you just like boil it down to like, hey, lithography is 30% of making a chip. So 25%.
Cool, let's sanction lithography.
Speaker 2 Okay, where do we draw the line? Okay, let me ask, let me ask, let me figure out where the line is.
Speaker 2 And if I'm a bureaucrat, if I'm a lawyer at the commerce department or what have you, well, obviously, I'm going to go talk to ASML.
Speaker 2 And ASML is going to tell me this is the line because they know, like, hey, well, you know, this, this, this is, you know, there's like some blending over.
Speaker 2
There's like, they're like looking at like, what's going to cost us the most money. Right.
And then they constantly say, like, if you restrict us, then China will have their own industry. Right.
Speaker 2 And the way I like to look at it is like chip manufacturing is like
Speaker 2
3D chess or like, you you know, a massive jigsaw puzzle. And that if you take away one piece, China can be like, oh, yeah, that's the piece.
Let's put it in. Right.
Speaker 2 And currently, this export restrictions, year by year by year, they keep updating them. Ever since like 2018 or so, 19, right?
Speaker 2 When Trump started and now Biden's, you know, accelerated them, they've been like, they haven't just like take a bat to the table and like break it, right?
Speaker 2 Like it's like, let's take one jigsaw puzzle out, walk away, oh shit, let's take two more out. Oh shit, right?
Speaker 2 Like, you know, it's like, instead, if if they like, they, you either have to go kind of like full bat to the freaking like table slash wall or,
Speaker 2 or chill out, right? Like, and like, you know, let them, let them do whatever they want. Cause the alternative is everything is focused on this thing and they make that.
Speaker 2 And then now when you take out another two pieces, like, well, I have my domestic industry for this. I can also now make a domestic industry for these.
Speaker 2 Like, you go deeper into the tech tree or what have you. It's a very, it's art, right? In the sense that there are technologies out there that can compensate.
Speaker 2 Like if you believe the belief that lithography is a linchpin within the system is, it's not exactly true, right?
Speaker 2 At some point, if you keep pulling, keep pulling a thread, other things will start developing to kind of close that loop. And like, I think it's a, it's, it is, that's why I say it's an art, right?
Speaker 2 I don't think it can stop Chinese semiconductor industry, but the semiconductor industry from progressing. I think that's basically impossible.
Speaker 2 So the question is, the Chinese government believes in the primacy of semiconductor semiconductor manufacturing.
Speaker 2 They've believed it for a long time, but now they really believe it, right?
Speaker 2 To some extent, the sanctions have made China believe in the importance of the semiconductor industry more than anything else.
Speaker 1 So from an AI perspective, what's the point of X-Road controls then? Because even if, like, if they're going to be able to get these,
Speaker 1 like, if you're concerned about AI and they're going to be able to build it.
Speaker 2 Well, they're not centralized, though, right? So that's the big question is, are they centralized?
Speaker 2 And then also, you know, there's the belief, I don't really, I'm not sure if I really believe it, but like, you know, prior podcasts, podcasts, there have been people who talked about nationalization, right?
Speaker 2 In which case,
Speaker 2 okay, now you're talking about
Speaker 1 this ambiguously.
Speaker 2 You're like, well, I think there's like a
Speaker 2 component. Like, you know, no, but I think there have been a couple where people have talked about nationalization, right?
Speaker 2 But like, if you have, you know, nationalization, then all of a sudden you aggregate all the flops. It's like, no, there's no fucking way, right? Yeah.
Speaker 2
China can be centralized enough to compete with each individual U.S. lab.
They could have just as many flops in 25 and 26 if they decided they were scale-pilled, right? Just from foreign chips
Speaker 2 for an individual model.
Speaker 1 Like in 2026, they can train a 1E27, like they can release a 1E27 model by 2026.
Speaker 2 Yeah, and then a 28 model, you know, 1E28 model in the works, right? Like they totally could just with foreign chip supply, right? Just a question of centralization.
Speaker 2 Then the question is, like, do you have as much innovation and compute efficiency wins or what have you get developed when you centralize?
Speaker 2 Or does like Anthropic and OpenAI and XAI and Google like all develop things and then like secrets kind of shift a little bit in between each other and all that.
Speaker 2 Like, you know, you end up with that being a better outcome in the long term versus like the nationalization of the US, right? If that's possible and like, or, you know, and what happens there.
Speaker 2 But China could absolutely have it in 26, 27 if they have the desire to. And that's just from foreign chips, right? And then domestic chips are the other question, right? 600,000 of the
Speaker 2 Hasen 910B, which is roughly like 400 teraflops or so.
Speaker 2 You know, so
Speaker 2 if they put them all in one cluster, they could have a bigger model than any of the labs next year, right? I have no clue where all the Sen 910Bs are going, right?
Speaker 2 But I mean, well, there's like rumors about like some, they are being divvied up between the like major Alibaba, Byte Dance, Baidu, et cetera.
Speaker 2 And next year, more than a million. And it's possible that they actually do have, you know, 1E30 before the US because data center is not as big of an issue.
Speaker 2 10 gigawatt data center is going to be,
Speaker 2 I don't think anyone is even trying to build that today in the US, like even out to 27, 28. Really, they're focusing on like linking many data centers together.
Speaker 2 So there's a possibility that like, hey, come 2028, 2029, China can have more flops delivered to a single model,
Speaker 2 even ignoring sort of even once the centralization question is solved, right? Because that's clearly not happening today for either party.
Speaker 2 And I would bet if AI is like as important as you and I believe that they will centralize sooner than the West does.
Speaker 2 So, there is a possibility, right?
Speaker 1 Yeah, it seems like a big question then is how much could SMC
Speaker 1 either increase the product, like increase the amount of wafers, like how many more wafers could they make, and how many of those wafers could be dedicated to the night?
Speaker 1 Because I assume there's other things they want to do with these semi-connections.
Speaker 2 Yeah, so there's like two points, parts there, too, right?
Speaker 2 Like, so the way the US has sanctioned SMC is really like stupid, kind of, is that in that they've like sanctioned a specific spot rather than the entire company.
Speaker 2 And so, therefore, right, SMIC is still buying a ton of tools that can be used for their seven nanometer and their, you know, call it 5.5 nanometer process or six nanometer process for the 910C, which releases later this year, right?
Speaker 2 They can build as much of that as long as it's not in Shanghai, right? And Shanghai has
Speaker 2 anywhere from 45 to 50
Speaker 2 high-end immersion lithography tools is what's like believed by intelligence as well as like many other folks.
Speaker 2 That roughly gives them as much as 60,000 wafers a month of seven nanometer, but they also make their 14 nanometer in that fab, right?
Speaker 2 And so the belief is that they actually only have about like 25 to 35,000 of seven nanometer capacity
Speaker 2 wafers a month, right?
Speaker 2 Doing the math, right?
Speaker 2 of the chip die size and all these things, because Huawei also uses chiplets and stuff, so they can get away with using less leading edge wafers, but then their yields are bad.
Speaker 2 You can roughly say any, you know, something like 50 to 80
Speaker 2 chips per wafer um with their with their bad yield right with their bad why do you have bad yield uh because it's hard right you know they're you're you're uh even if it was like you know even everyone's knows the number right like it's a thousand steps even if you're 99 for each like 98 or 99 like in the end you'll still get a 40 yield overall interesting i think it's like even it's like 99 if i think it's like i think i think it's if it's six sigma of like or of like perfection and you have your 10 000 plus steps uh you end up with like yield is still dog shit by the end, right?
Speaker 2 Like, yeah,
Speaker 2 this is multiple. That's a scientific measure, dog shit percent.
Speaker 2 Um, as yeah, yeah, as a multiplicative effect, right? Yeah, um, so yields are bad because uh, they have hands tied behind their back, right?
Speaker 2 Like, um, A, they are not getting to use uh EUV, whereas on seven nanometer Intel never used EUV, but uh, TSMC eventually started using EUV. Initially, they used DUV, right?
Speaker 1 Doesn't that mean the expert control succeeded? Because that they have bad yield because they have to use like
Speaker 2
successes again. They still are determined.
Successes mean they stop. They're not stopping.
Going back to the yield question, right?
Speaker 2 Like, oh, theoretically, 60,000 wafers a month times 50 to 100 dies per wafer with yielded yielded dies. Holy shit, that's millions of GPUs, right? Now, what are they doing with most of their wafers?
Speaker 2 They still have not become skill-pilled, so they're still throwing them out. Like, let's make 200 million Huawei phones, right? Like, oh, okay, cool, I don't care, right?
Speaker 2 Like, as the West, you don't care as much, even though like Western companies will get screwed, like Qualcomm and like, you know, and Media Tech Taiwanese companies. So, so obviously there's that.
Speaker 2 And the same applies to the US. But when you flip to like...
Speaker 2 Sorry, I don't fucking know what I was going to say.
Speaker 2 Nailed it.
Speaker 2
We're keeping this in. That's fine.
That's fine. That's fine.
Speaker 1
Hey, everybody. I am super excited to introduce our new sponsors, Jane Street.
They're one of the world's most successful trading firms.
Speaker 1 I have a bunch of friends who either work there now or have worked there in the past, and I have very good things to say about those friends, and those friends have very good things to say about Jane Street.
Speaker 1 Jane Street is currently looking to hire its next generation of leaders. As I'm sure you've noticed, recent developments in AI have totally changed what's possible in trading.
Speaker 1 They've noticed this too, and they've stacked a scrappy, chaotic new team with tens of millions of dollars of GPUs to discover Signal that nobody else in the world can find.
Speaker 1 Most new hires have no background in trading or finance. Instead, they come from math, CS, physics, and other technical fields.
Speaker 1 Of particular relevance to this episode, their deep learning team is hiring CUDA programmers, FPGA programmers, and ML researchers. Go to jane street.com/slash dwarca to learn more.
Speaker 1
And now back to Dylan and John. 2026, if they're centralized, they can have as big trading runs as any one U.S.
company.
Speaker 2 Oh, the reason why I was bringing up Shanghai, they're building seven nanometer capacity in Beijing, they're building five nanometer capacity in Beijing, but the U.S. government doesn't care.
Speaker 2
And they're importing dozens of tools into Beijing. And they're saying to the U.S.
government and ASML, this is for 28 nanometer, obviously, right? This is not bad.
Speaker 2 And then obviously, you know, like in the background, you know, we're making five nanometer here.
Speaker 1 Are they doing it because they believe in AI or because they want to make Huawei phones?
Speaker 2 You know, Huawei was the largest TSMC customer for like a few quarters, actually, before they got sanctioned. Uh, Huawei makes most of the telecom equipment in the world, right?
Speaker 2 Uh, you know, phones, of course, modems, but of course, accelerators, networking equipment, you know, you go down the whole like video surveillance chips, right?
Speaker 2 Like, you kind of like go through the whole gambit. Yeah, a lot of that could use seven and five nanometer.
Speaker 2 Do you think the dominance of Huawei is actually a bad thing for the rest of the Chinese tech industry? I think Huawei is so fucking cracked that like it's it's hard to say that, right?
Speaker 2 Like, Huawei out-competes Western firms regularly with two hands tied behind their back.
Speaker 2 Like, you know, like, what the hell is Nokia and like Sony Ericsson? Like, trash, right? Like, compared to Huawei.
Speaker 2 And Huawei is not allowed to ship, sell to like European companies or American companies, and they don't have TSMC, and yet they still destroy them, right?
Speaker 2 And same applies to like the new phone, right? It's like, oh, it's like as good as like a year-old Qualcomm phone on a process node that's equivalent to like four years old, right? Or three years old.
Speaker 2 So it's like, wait, so they actually out-engineered us with a worse process node, you know, so it's like, oh, wow, okay. Like, you know, Huawei is, Huawei is like crazy cracked.
Speaker 2
Where do you think that culture comes from? The military, because it's the PLA. It is, we, we, it is generally seen as an arm of the PLA.
But like,
Speaker 2
how do you square that with the fact that sometimes the PLA seems to mess stuff up? Oh, like filling water and rockets. I don't know if that was true.
I'm denying.
Speaker 2 There is, there is like that, like, like, like, crazy conspiracy, not not care conspiracy it's like you can you don't know what the hell to believe in china especially as a not chinese person but like nobody knows even chinese people don't know what's going on in china there's like you know like all sorts of stuff like oh they're filling water in their rockets clearly they're like incompetent it's like look if i'm the chinese military i want the western world to like believe i'm completely incompetent because one day i can just like destroy the fuck out of everything right with all these hypersonic missiles and all this shit right like drones and like no no no no we're filling water in our missiles these are all fake we don't actually have a hundred thousand missiles that we manufacture in a facility that's like super hyper advanced and Raytheon is stupid as shit because they can't make missiles nearly as fast, right?
Speaker 2 Like, I think like that's also like a flip side is like how much false propaganda is there, right? Because there's a lot of like, no, SMC could never, SMC could never.
Speaker 2 They don't have the best tools, blah, blah, blah.
Speaker 2 And then it's like, motherfucker, they just shipped 60 million phones last year with this chip that performs only one year worse than like what Qualcomm has. It's like, proof is in the pudding, right?
Speaker 2
Like, you know, there's, there's a lot of like cope, if you will. I just wonder where it comes from.
I do, really do just wonder where that culture comes from.
Speaker 2
Like, there's something crazy about them, where they're kind of like, everything they touch, they seem to succeed in. And like, I kind of wonder why.
They're making cars.
Speaker 2 I wonder what's going on there.
Speaker 2 I think, like, if, like, supposedly, like, if we kind of imagine like historically, like, do you think they're getting something from somewhere? What do you mean? Espionage, you mean?
Speaker 2 Yeah, like, obviously.
Speaker 2 But like, East Germany and the Soviet industry was basically was just, it was like a conveyor belt of like secrets coming in and they just use that to run everything but the soviets were never good at it they could never mass produce it how would espionage explain how they can make things with different processes
Speaker 2 i don't think it's just espionage i think they're just like literally cracked
Speaker 2 they have the espionage without a doubt right like asml has been known to been hacked a dozen times right right or at least a few times right um and they've been known to have people sued who made it to china with a bunch of documents right not just asml but every fucking company in supply chain cisco code was literally in like early huawei like routers and stuff right like you go down the list, it's like everything is, but then it's like, no, architecturally, the Ascend 910B looks nothing like a GPU, it looks nothing like a TPU.
Speaker 2 It is like its own independent thing. Sure, they probably learned some things from some places, but like it is just like they're good at engineering.
Speaker 2 It's 996, like wherever that culture comes from, they do good.
Speaker 1 They do very good.
Speaker 1 Another thing I'm curious about is like, yeah, where that culture comes from, but like, how does it stay there? Because with American firms or any other firm,
Speaker 1 you can have a company that's very good, but over time, it gets worse, right? Like Intel or many others.
Speaker 1 Um, I guess Huawei just isn't that old of a company, but uh, like it's hard to like be a big company and like stay good.
Speaker 2 That is true. I think it's like, I think, like, what I think a lot, a word that I hear a lot in with regards to Huawei is a struggle, right?
Speaker 2 And China has a culture of like the Communist Party is like really big on struggle. I think like Huawei, in the sense they sort of brought that culture into some into their, in the way they do it.
Speaker 2 Like you said before, right? They, they, they, they go crazy because they think that in five years that they're going to fight the United States.
Speaker 2 And there was like, like, literally everything they do, every second is like, their country depends on it, right? It's, it's like, it's the Andy Grovian mindset, right?
Speaker 2 Like, shout out to like the base Intel, but like only the paranoids survive, right? Like paranoid Western companies do well. Why did why did Google like really screw the pooch on a lot of stuff?
Speaker 2 And then why are they like resurging kind of now? It's because they got paranoid as hell, right? But they weren't paranoid for a while.
Speaker 2 If Huawei is just constantly paranoid about like the external world and like, oh fuck, we're going to die. Oh, fuck, like, you know, they're going to beat us.
Speaker 2 Our country depends on it. We're going to get the best people from the entire country that are like, you know, the best at whatever they do.
Speaker 2 And tell them, you will, if you do not succeed, you will die.
Speaker 2
You will die. Your family will die.
Your family will be enslaved and everything. It will be terrible by the evil Western figs, right?
Speaker 1 Evil Western,
Speaker 2
like capitalists, or not capitalists. They don't believe in cats.
They don't say that anymore. But it's not going to like, you know, everyone is against China.
China is being, it's being
Speaker 2 defiled, right? And like, they're saying, like, if you, that is all on you, bro.
Speaker 2 Like, if you can't do that, then like, you, if you can't get that fucking radio to be slightly less noisy and like transmit like five percent more data, it's like the Great Palace fire all over again.
Speaker 2
The British are coming and they will steal all the, all the, all the trinkets and everything. Like, that's on you.
Uh-huh.
Speaker 1 Um, uh, why isn't there more vertical integration in this interconnected industry?
Speaker 1 Well, like, why are there like this sub-component requires this other subcomponent from this other company, which requires a subcomponent from another company?
Speaker 1 Like, why is more of it not done in-house?
Speaker 2 The way to look at it today is it's super, super stratified. And every industry has anywhere from one to three competitors.
Speaker 2 And pretty much the most competitive it gets is like 70% share, 25% share, 5% share
Speaker 2 in any layer of like manufacturing chips, anything, anything, chemicals, different types of chips. But it used to used to be vertically integrated.
Speaker 2 Or the very beginning, it was integrated, right?
Speaker 1 Where did that stop?
Speaker 2 What happened was, you know, the funniest thing was that like, you know, you had companies that used to do it all in the one, and then suddenly sometimes a guy would be like, I hate this.
Speaker 2 I think I know, I know how to do better, spins off, does his own thing, starts this company, goes back to his old company, says, I can sell you a product that's better, right?
Speaker 2 And that's the beginning of what we call the semiconductor manufacturing equipment industry. Like basically, like in the 70s, right? Like everyone made their own equipment.
Speaker 2 In the 70s, like you spin off all these people, and then what happened was that the companies that accepted, you know, these outside products and equipment got better stuff. They did better.
Speaker 2 Like you can talk about a whole bunch, Like, there are companies that were totally vertically integrated in semiconductor manufacturing for decades, and they are, they're still good, but they're nowhere near competitive.
Speaker 1 One thing I'm confused about is like the actual foundries themselves, there's like fewer and fewer of them every year, right?
Speaker 1 So, there's like more, maybe more companies overall, but like the final people who make the make the wafers, there's less and less.
Speaker 2 Uh,
Speaker 2 and then
Speaker 1 it's interesting in a way, it's similar to like the AI foundation models where
Speaker 2 you need to use like the revenues from like a previous model in order or like the your um market share to like fund the next round of ever more expensive development when tsmc launched the foundry industry right and when they started there was a whole wave of like asian companies that funded semiconductor foundries of their own you had malaysia with siltera you have singapore with chartered you had uh there was one there's worldwide there's a wide semiconductor where i talked about earlier there's one from Hong Kong bunch in Japan bunch in Japan like they all sort of did this thing right and I think the thing was that when you go into leading edge, when the thing is that, like, it got harder and harder, which means that you had to aggregate more demand from all the customers to fund the next node, right?
Speaker 2 So, technically, in the sense that what it's going to do is aggregating all this money, all this profit to kind of fund this next node to the point where now, like, there's no room in the market for an N2 and or N3.
Speaker 2 Like, technically, you could argue that
Speaker 2 economically, you can make an argument that N2 is a monstrosity that doesn't make sense economically, which should not exist in some ways without the immense single concentrated spend of like five players in the market.
Speaker 2
I'm sorry to like completely derail you, but like there's this video where it's like, this is an unholy concoction of meat slurry. Yes.
What?
Speaker 2 Sorry, there's like a video that's like, ham is disgusting. It's an unholy concoction of like meat with no bones or collagen.
Speaker 2 And like, I don't know, like, he was like, the way he was describing two dandy beater is kind of like that, right? It's like the guy who pumps his right arm so much that he's like super muscular.
Speaker 2 The human body was not meant to be so muscular. Like,
Speaker 1 what's the point? Like, why is two nanometer not justified?
Speaker 2 I'm not saying N2 is like N2 specifically, but I say N2 as a concept. The next node should technically, like, right now,
Speaker 2 there will come a point where economically, the next node will not be possible. Like, at all, right?
Speaker 2 Unless more, you know, technology spawned. Like, AI now makes
Speaker 2
one nanometer or whatever. It was a long period of time.
It was like 60 nanometers viable, right?
Speaker 1 So, like, right before DAI viable and what's in like
Speaker 1 months, money worth it.
Speaker 2 So, every two years, you get a shrink, right? Yeah. Like clockwork, Moore's Law.
Speaker 2
And then five nanometer happened. It took three years.
Holy shit. And then three nanometer happened.
It took three years.
Speaker 2
It took three years. Holy shit.
Like, is Moore's Law dead? Right? Like, because TSMC didn't. And then what did Apple do? Even on the third year of three of,
Speaker 2 or sorry, when three nanometer finally launched, they still only, Apple only moved half of the iPhone volume to three nanometer.
Speaker 2 So this is like, now they did a fourth year of five nanometer for a big chunk of iPhones, right? And it's like, oh, is the mobile industry petering out?
Speaker 2 Then you look at two nanometer and it's like going to be a similar, like very difficult thing for the, for the industry to pay for this, right?
Speaker 2 Apple, of course, they have, you know, because they get to make the phone, they have so much profit that they can funnel into like more and more expensive chips.
Speaker 2 But finally, like that was, that was really running out, right? It was two, two, how economically viable is two nanometers just for one player, TSMC. You know, ignore Intel, ignore Samsung.
Speaker 2 Just, you know, because Samsung is paying for it with memory, not with their actual profit. And then Intel is paying it from it from their former CPU monopoly.
Speaker 2 Private equity money. And now private equity money and debt and subsidies.
Speaker 2 People's salaries. But like, anyways, like, you know, there's, there's a strong argument that like
Speaker 2 funding the next node would not be economically viable anymore if it weren't for AI taking off, right? And then generating all this humongous demand for the most leading edge chip.
Speaker 1 So, and
Speaker 1 how big is the difference between seven to five to three nanometer? Like,
Speaker 1 is it a huge deal in terms of like who can build the biggest cluster?
Speaker 2 So, there's the there's this simplistic argument that, like, oh, moving a process node only saves me X percent in power, right? And that has been petering out, right?
Speaker 2 You know, when you move from like 90 nanometer to 80 something, right, or 70 something, right? It was like, you got two X, right? Dinard scaling scaling was still intact, right?
Speaker 2 But now when you move from five nanometer to three nanometer, first of all, you don't double density.
Speaker 2 SRAM doesn't scale at all.
Speaker 2 Logic does scale, but it's like 30%. So all in all, you only save like 20% in power per transistor.
Speaker 2 But because of like data locality and movement of data, you actually get a much larger improvement in power efficiency by moving to the next node than just the individual transistors power efficiency benefit because, you know, for example, you're multiplying a matrix that's like, you know, 8,000 by 8,000 by 8,000.
Speaker 2 And then you can't fit that all on one chip. But if you could fit more and more, you have to move off chip less, you have to go to memory less, et cetera, right? So the data locality helps a lot too.
Speaker 2 But
Speaker 2 the AI really, really, really wants new processed nodes because of A, power used is a lot less now.
Speaker 2 Higher density, higher performance, of course. But the big deal is like, well, if I have a gigawatt data center, I can now, how much more flops can I get?
Speaker 2 If I have two gigawatt data center, how much more flops can I get? If I have a 10 gigawatt data center, how much more flops can I get?
Speaker 2 And you look at the scaling, and it's like, well, everyone needs to go to the most recent process node as soon as possible.
Speaker 1 I want to ask the normie question, for, like, everybody. I want to phrase it that way. Okay, I want to ask a question that's like, normie.
Speaker 2 not for you nerds
Speaker 2 I think John and I could communicate to the point where you wouldn't even know what the fuck we're talking about.
Speaker 1 Okay,
Speaker 1 suppose Taiwan is invaded or Taiwan has an earthquake, nothing is shipped out of Taiwan
Speaker 1 from now on. What happens next? The rest of the world, how would it feel its impact? A day in, a week in, a month in, a year in?
Speaker 2
I mean, it's a terrible thing. It's a terrible thing to talk about.
I think it's like, can you just say it's all terrible? Everything's terrible. Because it's not just like leading edge.
Speaker 2 Leading edge, people will focus on leading edge, but there's a lot of trailing edge stuff that like people depend on every day. I mean, we all worry about AI.
Speaker 2
The reality is you're not going to get your fridge. You're not going to get your cars.
You're not going to get everything. It's terrible.
And then there's the human part of it, right?
Speaker 2
It's all terrible. Can we like it's depressing? I think.
And I live there. Yeah.
I think day one, market crashes a lot, right?
Speaker 2 You've got to think about, like, the six or seven biggest companies, the Magnificent 7, whatever the heck it's called, are like 60, 75% of the S&P 500.
Speaker 2 And their entire business relies on chips, right? Google, Microsoft, Apple, Nvidia, Meta, you know, you go down the list, right? They all entirely rely on AI.
Speaker 2 And you would have a tech reset, like extremely insane tech reset, by the way, right? Like, so market would crash a week, a day in, a couple of weeks in, right? Like, people are preparing now.
Speaker 2
People are like, oh, shit, like, let's start building fabs. Fuck all the environmental stuff.
Like, war is probably happening.
Speaker 2 But, but, like, the supply chain is trying to like figure out what the hell to do to refix it. But six months in,
Speaker 2 the supply of chips for making new cars is gone or sequestered to make military shit, right? You can no longer make cars.
Speaker 2 And we don't even know how to make non-semiconductor cars anymore, right? A car is this unholy concoction with all these chips. It's just chips now, there are chips in the tires. There are like 2,000-plus chips; every Tesla door handle has like four chips in it. It's like, what the fuck, why? And sure, it's shitty microcontrollers and stuff, but there are 2,000-plus chips even in an ICE vehicle, an internal combustion engine vehicle, and every engine has dozens and dozens of chips. Anyways, this all shuts down, because not all of the production...
Speaker 2 There's some in Europe, there's some in the US, there's some in Japan. Yeah,
Speaker 2
they're going to bring in a guy to work on Saturday until four. Yeah, yeah.
I mean, yeah. So you have like TSMC always builds new fabs.
Speaker 2 That old fab, they tweak production up a little bit more and more, and new designs move to the next, next, next node, and old stuff fills in the old nodes, right?
Speaker 2 So, you know, ever since TSMC has been the most important player, and not just TSMC, there's UMC there, there's PSMC there, there's a number of other companies there, Taiwan's share of like total manufacturing has grown every single process node.
Speaker 2 So in like 130 nanometer, there's a lot, including many chips from Texas Instruments or Analog Devices or NXP, all these companies, and 100% of it is manufactured in Taiwan, right?
Speaker 2
By either TSMC or UMC or whatever. But then you step forward and forward and forward, right? Like 28 nanometer: like 80% of the world's production of 28 nanometer is in Taiwan.
Oh, fuck, right?
Speaker 2 Like, you know, and everything in 28 nanometer, like what's made on 28 nanometer today? Tons of microcontrollers and stuff, but also, like, every display driver IC.
Speaker 2 Like, cool, like, even if I can make my Mac chip, I can't make the chip that drives the display.
Speaker 2 Like, you know, you just go down the list: everything. No fridges, no automobiles, no weed whackers, because that shit has chips too. My toothbrush has fucking Bluetooth in it, right? Like, why?
Speaker 2 I don't know, but like, you know, there's like so many things that, like, just like, poof, we're tech reset.
Speaker 1 We were supposed to do this interview like many months ago, and then I kept like delaying because I'm like, ah, I don't understand any of the shit.
Speaker 1 But, like, it is like a very difficult thing to understand. But I feel like with AI, it's like,
Speaker 2
it's not that like. No, you've just spent time.
You've spent time.
Speaker 1 But I also feel like it's like less complicated.
Speaker 1 It feels like it's a kind of thing where in an amateur kind of way, you can
Speaker 1 pick up what's going on in the field. In this field, the thing I'm curious about is
Speaker 1 how does one learn the layers of the stack? Because the layers of the stack are like, there's not just the papers online. You can't just look up the tutorial and how the transformer works or whatever.
Speaker 1 It's like
Speaker 1 many layers of really difficult.
Speaker 2 There are like 18-year-olds who are just cracked at AI already, right? And like, there's high school dropouts that get like jobs at Open AI. This existed in the past, right?
Speaker 2 Pat Gelsinger, current CEO of Intel, went straight to work. He grew up in the Amish area of Pennsylvania and he went straight to work at Intel, right? Because he's just cracked, right?
Speaker 2 That is not possible in semiconductors today. You can't even get like a job at like a tool company without like at least like a freaking master's in chemistry, right? And probably a PhD, right? Like
Speaker 2 of the like 75,000 TSMC workers, it's like 50,000 have a PhD or something insane, right?
Speaker 2 It's like, okay, this is like, there's like some, there's like a next level amount of like how specialized everything's gotten.
Speaker 2 Whereas today, you can take, like, you know, Sholto. When did he start working on AI? Not that long ago. Not to say anything bad about Sholto.
Speaker 2
But he's cracked. He's like omega cracked at like what he does.
What he does, you could pick him up and drop him into another part of the AI stack. First of all, he understands it already.
Speaker 2 And then second of all, he could probably become cracked at that too. Right.
Speaker 2
Whereas that is not the case in semiconductors, right? You can't, you, one, you like specialize like crazy. Two, you can't just pick it up.
Um,
Speaker 2 You know, like Sholto, I think, what did he say?
Speaker 1 He just started, like, he was a consultant at McKinsey, and at night he would read papers about robotics and run experiments and whatever.
Speaker 2 Yeah, and then people noticed. They were like, who the hell is this guy? And why is he posting this?
Speaker 2 Like, I thought everyone who knew about this was at Google already, right? It's like, come to Google, right? That can't happen in semiconductors, right?
Speaker 2 Like it's just not like conducively, like, it's not possible, right? One, archive is like a free thing.
Speaker 2 The paper publishing industry is like abhorrent everywhere else, and you just like cannot download IEEE papers or like SPIE papers or like other organizations. And then two, at least up until like
Speaker 2 late 2022, really early 2023 in the case of Google, right?
Speaker 2 I think the PaLM inference paper. Up until the PaLM inference paper, all the good, best stuff was just posted on the internet. After that, you know, it's been clamped down a bit by the labs, but there are also still all these other companies making innovations in public, and what is state of the art is public. That is not the case in semiconductors. Semiconductors have been shut down since the 1960s, 1970s, basically. I mean, it's kind of crazy how little information has been formally transmitted from one country to another. The last time you could really think of this was maybe the Samsung era, right?
Speaker 1 So then how do you guys keep up with it?
Speaker 2 Well, we don't know it. I don't, personally. I don't think I know it. I mean, I don't know it.
Speaker 2 It's crazy because, like, I spoke to one guy, he has a PhD in etch or something.
Speaker 2 One of the top people in etch in the world, and he's like, man, you really know lithography, right? And I'm just like, I don't feel like I know lithography.
Speaker 2 But then you talk to the people who know lithography, and they're like, you've done pretty good work in packaging, right? Nobody knows anything.
Speaker 1 They all have Gell-Mann amnesia.
Speaker 2 They're all in this like single well, right?
Speaker 1 They're digging deep.
Speaker 2 They're digging deep for what they're getting at, but they, but, you know, they don't know the other stuff well enough. And in some ways, I mean, nobody knows the whole stack.
Speaker 2 Nobody knows the whole stack.
Speaker 2 The stratification of just like manufacturing is absurd. Like the tool people don't even know exactly what Intel and TSMC do in production and vice versa.
Speaker 2 They don't know exactly how the tool is optimized like this. And it's like, how many different types of tools there are? Dozens.
Speaker 2 And each of those has like an entire tree of like all the things that we've built, all the things we've invented, all the things that we continue to iterate upon.
Speaker 2 And then like, here's the breakthrough innovation that happens every few years in it too.
Speaker 1 So if that's the case, if like nobody knows a whole stack, then how does the industry coordinate to be like,
Speaker 1 you know, in two years, we want to go to the next process node, which has gate-all-around. And for that, we need X tools and X technologies developed by whoever.
Speaker 2 That's really fascinating. It's a fascinating social kind of phenomenon, right? You can feel it.
Speaker 1 I went to Europe earlier this year.
Speaker 2 Dylan, like, had allergies. But I was talking to all these other people about these issues.
Speaker 2 And you can just, it's like gossip. It's gossip. You start feeling people coalescing around something, right? Early on, we used to have SEMATECH, where all these American companies came together and talked, and they hammered things out, right? But SEMATECH in reality was dominated by a single company, right? But then, you know, nowadays it's a little more dispersed, right? You feel like it's a blue moon horizon kind of thing: they are going towards something, they know it, and then suddenly the whole industry is like, this is it, let's do it. But I think it's like God came and proclaimed it: we will shrink density 2x every two years. Gordon Moore, he made an observation, and then it didn't stop there, it went way further than he ever expected, because it was like, oh, there's line of sight to get to here and here. And he predicted, like, seven, eight years out, multiple orders of magnitude of increases in transistors, and it came true. But then by then, the entire industry was like, this is obviously true.
Speaker 2 This is the word of God.
Speaker 2 And every engineer in the entire industry, tens of millions of people, literally this is what they were driven to do. Now, not every single engineer believed it, but people were like, yes, to hit the next shrink we must do this, this, this, right? And these are the optimizations we make. And then you have this stratification, abstraction layers, every single layer through the entire stack, to where people
Speaker 2 It's an unholy concoction. I mean, you keep saying this word, but, like, no one knows what's going on, because there's an abstraction layer between every single layer.
Speaker 1 And on this layer, the people below you and the people above you know what's going on, and then beyond that, it's like, okay, I can try to understand, but not really. But I guess it doesn't answer the question of, with the IRDS or whatever, I don't know, was it 10, 20 years ago, I watched your video about it, where they're like, EUV is the next thing, we're going to do EUV instead of the other thing, and this is the path forward. How do they do that if they don't have the whole picture of the different constraints, different trade-offs, different blah blah blah?
Speaker 2 They kind of argue it out.
Speaker 2
They get together and they talk and they argue. And basically at some point, a guy somewhere says, I think we can move forward with this.
Semiconductors are so siloed.
Speaker 2
And the data and knowledge within each layer is, A, not documented online at all, right?
No documentation, because it's all siloed within companies.
Speaker 2 B, it is. There's a lot of human element to it because a lot of the knowledge, like as John was saying, is like apprentice master, apprentice master type of knowledge.
Speaker 2 Or, I've been doing this for 30 years, and there's
Speaker 2 an amazing amount of intuition on what to do just when you see something, to where AI can't just learn semiconductors like that. But at the same time, there's a massive talent shortage and a limited ability to move forward on things, right? So, like, the technology used on, like,
Speaker 2 most of the equipment in semiconductor fabs runs on, like, Windows XP, right? Each tool has a Windows XP server on it. Or, you know, all the chip design tools run on, like, CentOS version 6, right?
Speaker 2 And that's old as hell, right? So there are so many areas where it's like, why is this so far behind? And at the same time, it's so hyper-optimized.
Speaker 2
That's like the, the tech stack is so broken in that sense. They're afraid to touch it.
They're afraid to touch it. Yeah, because it's an unholy amalgamation.
It's unholy. It should not work.
Speaker 2
It should not work. This thing should not work.
It's literally a miracle.
Speaker 2 So you have all the abstraction layers, but then it's like, one is there's a lot of breakthrough innovation that can happen now stretching across abstraction layers.
Speaker 2 But two is because there's so much inherent knowledge in each individual one, what if I can just experiment and test at a thousand X velocity or a hundred thousand X velocity?
Speaker 2 And so some examples of where this is already like shown true is some of NVIDIA's AI layout tools, right? And Google as well, like laying out the circuits within a small blob of the chip with AI.
Speaker 2 Some of these RL design things, and there are a lot of various simulation things.
Speaker 1 Is that design or is that manufacturing?
Speaker 2
It's all design, right? Most of it's design. Manufacturing has not really seen much of this yet, although there is start, it's starting to come in.
Inverse lithography, maybe.
Speaker 2
Yeah, ILT and Sam, maybe. I don't know if that's AI.
That's not AI.
Speaker 2 Anyways, there's like tremendous opportunity to
Speaker 2 bring breakthrough innovation, simply because there are so many layers where things are unoptimized, right? So you see all these, oh, single-digit, low-double-digit advantages just from RL techniques, from, like, AlphaGo-type stuff, or not from AlphaGo itself, but five-, six-, seven-, eight-year-old RL techniques being brought in. But generative AI being brought in could really revolutionize the industry, you know, although there's a massive data problem.
Speaker 1 So can you give the possibilities here in numbers, in terms of maybe, like, flops per dollar or whatever the relevant thing here is?
Speaker 1 Like how much do you expect in the future to come from process node improvements? How much from just like how the hardware is designed because of AI?
Speaker 1
If you like how to disaggregate, we're talking specifically for like GPUs. Yeah.
Like if you had to disaggregate future improvements.
Speaker 2 I think, I think,
Speaker 2 You know, first, it's important to state that semiconductor manufacturing and design is the largest search space of any problem that humans work on, because it is the most complicated thing that humans do.
Speaker 2 And so, you know, when you think about it, right? There's
Speaker 2 1E10, 1E11, right? 100 billion transistors
Speaker 2 on leading-edge chips, right? Blackwell has 220 billion transistors or something like that. And those are just on-off switches.
Speaker 2 And then think about every permutation of putting those together, contact ground, et cetera, drain source, blah, blah, blah, with wires, right? There's 15 metal layers, right?
Speaker 2 Connecting every single transistor in every possible arrangement. This is a search space that is literally almost infinite, right?
Speaker 2 The search space is much larger than any other search space that humans know.
Speaker 1 The search, like, what are you trying to optimize over?
Speaker 2 Well,
Speaker 2 useful compute, right? If the goal is to optimize intelligence per picojoule, right?
Speaker 2
And intelligence is some nebulous function of what the model architecture is. Yeah, yeah.
And then a picojoule is a unit of energy, right? How do you optimize that?
Speaker 2 So there's humongous innovations possible in architecture, right? Because vast majority of the power on a H100 does not go to compute. And there are more efficient, like
Speaker 2 compute,
Speaker 2 you know, ALUs, arithmetic logic unit designs, right?
Speaker 2 But even then, the vast majority of the power doesn't go there, right? The vast majority of the power goes to moving data around, right?
Speaker 2 And then when you look at what is the movement of data, it's either networking or memory, you know, you have
Speaker 2 a humongous amount of movement relative to compute and a humongous amount of power consumption relative to compute. And so the so how can you minimize that data movement and then maximize the compute?
Speaker 2
There are 100x gains from architecture. Even if we like literally stopped shrinking, I think we could have 100x gains from architectural advancement.
Over what time period?
Speaker 2 The question is how much can we advance the architecture, right? The challenge, the other challenge is like the number of people designing chips has not necessarily grown in a long time, right?
Speaker 2 Yeah, like company to company, it shifts, but like within like the semiconductor industry in the U.S., and the U.S.
Speaker 2 makes, you know, designs the vast majority of leading-edge chips, the number of people designing chips has not grown much.
Speaker 2 What has happened is that the output per individual has soared because of EDA, electronic design automation tooling, right? Now, this is all still classical tooling.
Speaker 2 There's just a little inkling of AI in there so far, right?
Speaker 2 What happens when we bring this in is the question, and how you can solve this search space with humans and AI working together to optimize it. So most of the power is data movement.
Speaker 2 And the logic, the compute, is actually very small. On the flip side, the compute can get like 100x more efficient just with design changes.
Speaker 2 And then you could minimize that data movement massively, right? So you can get a humongous gain in efficiency just from architecture itself.
Speaker 2 And then process node helps you innovate that there, right? And power delivery helps you innovate that. System design, chip-to-chip networking helps you innovate that, right?
Speaker 2 Like memory technologies, there's so much innovation there. And there's so many different vectors of innovation that people are pursuing simultaneously
Speaker 2 to where like NVIDIA gen to gen to gen will do more than 2x performance per dollar.
Speaker 2 I think that's very clear. And then like hyperscalers are probably going to try and shoot above that, but we'll see if they can execute.
Speaker 1 There's like two narratives you can tell here of how this happens.
Speaker 1 One is that these AI companies who are training the foundation models who understand the trade-offs of like how much is the marginal increase in compute versus memory worth to them and what trade-offs do they want between different kinds of memory.
Speaker 1 They understand this.
Speaker 1 And so therefore the accelerators they build, they can make these sort of trade-offs in a way that's like most optimal or and also design like the architecture of the model itself in a way that
Speaker 1 reflects like what are the hardware trade-offs. Another is NVIDIA because it has like
Speaker 1 I don't know how this works, but presumably they have some sort of like know-how like they're accumulating all this like knowledge about how to better design this architecture and like also better search tools for so on.
Speaker 1 Who has basically like better moat here in terms of will Nvidia keep getting better at design, getting this 100x improvement or will it be like OpenAI and Microsoft and Amazon and Anthropic who are designing their accelerators will keep getting better at like designing the accelerator?
Speaker 2 I think that there's a few vectors to go here, right? One is you mentioned, and I think it's important to note, is that hardware has a huge huge influence on the model architecture that's optimal.
Speaker 2 And so it's not a one-way street that better chip equals, you know,
Speaker 2 the optimal model for Google to run on TPUs, given a given amount of dollars, a given amount of compute, is different architecturally than what it is for OpenAI with NVIDIA stuff, right?
Speaker 2 It is like absolutely different.
Speaker 2 And then like even down to like networking decisions that different companies do and data center design decisions that people do, the optimal, like if you were to say, you know, X amount of compute of TPU versus GPU, compute optimally, what is the best thing, you'll diverge in what the architecture is.
Speaker 2 And I think that's important to know, right?
Speaker 1 Can I ask about that real quick?
Speaker 1 So earlier we were talking about how China has the
Speaker 1 H20s or B20s.
Speaker 1 And there, there's like much less compute per memory bandwidth and like the amount of memory, right?
Speaker 1 Does that mean that Chinese models will actually have like very different architecture and characteristics than American models in the future?
Speaker 2 So you can take this to like a very like large conclusion, like leap and it's like, oh, you know, neuromorphic computing or whatever is like the optimal path, and that looks very different than what a transformer does, right?
Speaker 2 Or you could take it to a simple thing, which is like the level of sparsity,
Speaker 2 like coarse-grained sparsity, i.e., like experts and all this sort of stuff.
Speaker 2 The arrangement of
Speaker 2 what exactly the attention mechanism is, because there are a lot of tweaks. It's not just pure transformer attention, right? Or like, hey, d_model, how wide versus tall the model is, right?
Speaker 2 That's very important, like d_model versus number of layers, right?
Speaker 2 These are all like things that like would be different, like, and I, and like, I know they're different between like, say, a Google and an OpenAI and what is optimal.
Speaker 2 But where it really starts to get interesting is, hey, if you were limited on a number of different things. Like, China invests humongously in compute-in-memory,
Speaker 2 you know, which is basically where the memory cell is directly coupled to, or is, the
Speaker 2 compute cell, right? So these are things that China is investing in hugely.
Speaker 2 And you go to conferences, and, oh, there's 20 papers from Chinese companies slash universities about compute-in-memory.
Speaker 2 Or, like, you know, hey, like, because the flop limitation is here, maybe NVIDIA pumps up the on-chip memory and like changes the architecture because they still stand to benefit tens of billions of dollars by selling chips to China, right?
Speaker 2 Today, it's just neutered American chips, right?
Speaker 2 Neutered versions of the chips that go to the US. But it'll start to diverge more and more architecturally, because they'd be stupid not to make chips for China, right? Um,
Speaker 2 And Huawei, obviously, again, has their constraints, right? Like, where are they limited? On memory.
Speaker 2 Oh, they have a lot of networking capabilities, and they could move to certain optical networking technologies directly onto the chip much sooner than we could, right?
Speaker 2 Because that is what's optimal for them within their search space of solutions, right? Because this whole area is blocked off.
Speaker 2 It's kind of really interesting to see, to think about the development of how Chinese AI models will differ from American AI models because
Speaker 2 of these changes.
Speaker 2 And it applies to use cases, it applies to data, right? Like American models are very important about like, let me learn from you, right?
Speaker 2 Let me be able to use you directly as a random consumer, right? That is not the case for Chinese model, I assume, right? Because there's probably very different use cases for them.
Speaker 2 China crushes the West at video and image recognition, right?
Speaker 2 At ICML, Albert Gu, you know, of Cartesia, state-space models, every single Chinese person was like, can I take a selfie with you? The man was harassed.
Speaker 2 In the US, you see Albert and it's like, it's awesome, he invented state-space models, but it's not like state-space models are that big here.
Speaker 2 But that's because state-space models potentially have a huge advantage in like video and image and audio, which is like stuff that China does more of and is further along and has better capabilities in.
Speaker 2 So it's like
Speaker 2 because of all the surveillance cameras there. Yeah, that's the quiet part out loud, right? But like there's already divergence and like capabilities there, right?
Speaker 2 Like, you know, if you looked at image recognition, China like destroys American companies, right?
Speaker 2 On that, right? Because
Speaker 2 the surveillance. You have like this divergence in tech tree and like people can like start to design different architectures within the constraints you're given.
Speaker 2 And everyone has constraints, but the constraints different companies have are even different. Right.
Speaker 2 And so, like, Google's constraints have shown them, they built a genuinely different architecture. But now if you look at Blackwell and then what's said about TPU v6, right?
Speaker 2 They're, I'm not going to say they're.
Speaker 2 like converging, but they are getting a little bit closer in terms of, like, how big is the matmul unit size, and some of the topology, and the world size of the scale-up versus scale-out network.
Speaker 2 Like there is some like convergence slightly, like not saying they're similar yet, but like already they're starting to, but then there's different architectures that people could go down and paths.
Speaker 2 So you see stuff like from all these startups that are trying to go down different tech trees because maybe that'll work. But there's a self-fulfilling prophecy here too, right?
Speaker 2 All the research is in transformers that are very high arithmetic intensity because the hardware we have is very high arithmetic intensity and transformers run really well on GPUs and TPUs.
Speaker 2 And you sort of have a self-fulfilling prophecy. If all of a sudden you have an architecture which is theoretically way better, but you can only get like half of the usable FLOPS
Speaker 2 out of your chip, it's worthless. Because even if it's a 30% compute efficiency win, it's half as fast on the chip, right?
Speaker 2 So there's all sorts of like trade-offs and like self-fulfilling prophecies of what do what path do people go down.
Speaker 1 John and Dylan have talked a lot in this episode about how stupefyingly complex the global semiconductor supply chain is.
Speaker 1 The only thing in the world that approaches this level of complexity is the Byzantine web of global payments.
Speaker 1 You're stitching together legacy tech stacks and regulations that differ in every jurisdiction.
Speaker 1 In Japan, for example, a lot of people pay for online purchases by taking a code to their corner store and punching it into a kiosk.
Speaker 2 Stripe abstracts all this complexity away from businesses.
Speaker 1 You can offer customers whatever payment experience they're most likely to use wherever they are in the world. And Stripe is how I invoice advertisers for this very podcast.
Speaker 1 I doubt that they're punching in codes at a kiosk in Japan, but if they are, Stripe will handle it. Anyways, you can head to stripe.com to learn more.
Speaker 1 If you are made head of compute of a new AI lab, if, like, SSI came to you, Ilya Sutskever's new lab, and they're like, Dylan, we give you $1 billion. You are head of compute.
Speaker 1 Like help us get on the map and compete with the Frontier Labs. What is your first step?
Speaker 2 Okay, so the
Speaker 2 constraints are you're a U.S. slash Israeli firm because that's what SSI is, right?
Speaker 2
And your researchers are in the U.S. and Israel.
You probably can't build data centers in Israel because power is expensive as hell and it's probably like risky, maybe. I don't know.
Speaker 2
So still in the U.S., most likely. Most of the researchers are here, or a lot of them are in the U.S., right? Like Palo Alto or whatever.
So I guess... You need a significant chunk of compute.
Speaker 2 Obviously, the whole pitch is you're going to make some research breakthrough. That's like a compute efficiency win, data efficiency win, whatever it is.
Speaker 2 You're going to make some breakthrough, but you need compute to get there, right? Because your GPUs per researcher is your research velocity, right?
Speaker 2 Obviously, like data centers are very tapped out, right?
Speaker 2 Not in terms of tapped out, but like every new data center that's coming up, most of them have been sold, which has led people like Elon to go through this like insane thing in Memphis, right?
Speaker 2 I'm just trying to square the circle.
Speaker 1 Yeah, on that question, I kid you not, in my group house, like, group chat,
Speaker 1 there have been two separate people who have been like, I have a cluster of H100s and I have like a long lease on them, but I don't, like, I'm trying to get, sell them off.
Speaker 1 Is it like a buyer's market right now? Because it does seem like people are trying to get rid of them.
Speaker 2 So I think, like,
Speaker 2 For the Ilya question, a cluster of, like, 256 GPUs or even 4K GPUs is kind of cope, right? It's not enough, right?
Speaker 2 Yes, you're going to make compute efficiency wins, but with a billion dollars, you probably just want the biggest cluster in one individual spot. Sure. And so small amounts of GPUs are
Speaker 2 probably not, you know, that useful for them, right? And that's what most of the sales are, right? You go and look at, like, GPU List or Vast or Foundry or
Speaker 2 a hundred different GPU resellers, and the cluster sizes are small. Now, is it a buyer's market? Yeah. Last year you would buy H100s for like $4 or $3 an hour for shorter-term or mid-term deals. Right now, if you want a six-month deal, you could get like $2.15 or less, right?
Speaker 2 And the natural cost, if I have a data center, right, and I'm paying standard data center pricing to purchase the GPUs and deploy them, is like $1.40.
Speaker 2 And then you add on the debt, because I probably took debt to buy the GPUs, or cost of equity, cost of capital, and it gets up to like $1.70 or something, right?
Speaker 2 And so you see deals that are, like, the good deals, right? Like Microsoft renting from CoreWeave at like $1.90 to $2, right?
Speaker 2 So people are getting closer and closer to like, there's still a lot of profit, right? Because the natural rate, even after debt and all this, is like $1.70.
Speaker 2 So there's still a lot of profit when people are selling in the low twos, like GPU companies, people deploying them. But it is a buyer's market in a sense that it's gotten a lot cheaper.
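As a sanity check on those per-hour figures, here is a minimal sketch of GPU rental economics in Python. Every input (server price per GPU, data center capex, lifetime, utilization, power draw, electricity rate, cost of capital) is an assumption plugged in for illustration, not a number from the conversation; the shape of the calculation is the point, and with these particular assumptions it lands near the ballpark Dylan describes.

```python
# Minimal GPU-hour cost sketch. All inputs are illustrative assumptions.

server_cost_per_gpu = 35_000      # $ per GPU, all-in server cost (assumed)
datacenter_capex_per_gpu = 8_000  # $ per GPU for the shell, cooling, power gear (assumed)
lifetime_years = 4                # depreciation horizon (assumed)
utilization = 0.90                # fraction of hours actually rented (assumed)
all_in_watts = 1_300              # per-GPU power including host, networking, cooling (assumed)
power_price_per_kwh = 0.07        # $/kWh (assumed)
cost_of_capital = 0.10            # annual rate on the capex (assumed)

hours = lifetime_years * 8_760 * utilization
capex = server_cost_per_gpu + datacenter_capex_per_gpu

depreciation_per_hr = capex / hours
power_per_hr = (all_in_watts / 1_000) * power_price_per_kwh
# Crude financing charge: interest on an average outstanding balance of half the capex.
financing_per_hr = (capex * cost_of_capital * lifetime_years / 2) / hours

base = depreciation_per_hr + power_per_hr
print(f"cash cost      : ${base:.2f}/GPU-hr")                      # close to the ~$1.40 'natural cost'
print(f"with financing : ${base + financing_per_hr:.2f}/GPU-hr")   # close to the ~$1.70 with cost of capital
```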
Speaker 2 But the cost of compute is going to continue to tank, right? Because it's like sort of like, I don't remember the exact name of the law, but
Speaker 2 it's effectively Moore's Law, right? Every two years, the cost of transistors halved, and yet the industry grew, right?
Speaker 2 Every six months or three months, the cost of intelligence, you know, like OpenAI and GPT, GPT-4,
Speaker 2 what, February 2023, right? $120 per million tokens or something like that was roughly the cost. And now it's like $10,
Speaker 2 right?
Speaker 2 It's like the cost of intelligence is tanking, partially because of compute,
Speaker 2 partially because the model's compute efficiency wins, right? I think that's a trend we'll see.
Speaker 2 And then that's going to drive adoption as you scale up and make it cheaper and scale up and make it cheaper. Right, right.
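Taking the two price points quoted above at face value (roughly $120 per million tokens in early 2023 versus roughly $10 now), a quick sketch of the implied decay rate; the 20-month interval is my assumption about the elapsed time being described.

```python
import math

# Implied halving time of $/million-tokens, using the two price points quoted above.
p_start, p_end = 120.0, 10.0   # $/M tokens, early 2023 vs. "now" (as quoted)
months = 20                    # assumed elapsed time between the two quotes

drop = p_start / p_end                            # 12x cheaper
halving_months = months * math.log(2) / math.log(drop)
print(f"{drop:.0f}x cheaper, halving roughly every {halving_months:.1f} months")
```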
Speaker 1 Anyways, what were you saying, if you're head of compute at SSI? Okay, head of compute at SSI.
Speaker 2 There's obviously no free data center lunch, right? And you can just, you know, take that from the data we see. It shows that there's no free lunch, per se.
Speaker 2 Like immediately today, you need the compute for a large cluster size or even six months out, right? There's some, but like not a huge amount because of what X did, right?
Speaker 2 XAI is like, oh, shit, we're going to go like, we're going to go.
Speaker 2 buy a Memphis factory,
Speaker 2 put a bunch of mobile generators outside, the kind usually reserved for natural disasters, Tesla battery packs, draw as much power as we can from the grid, tap the natural gas line that's going to the natural gas plant two miles away, the gigawatt natural gas plant, and just send it and get a cluster built as fast as possible.
Speaker 2 Now you're running 100k GPUs, right?
Speaker 1 I know.
Speaker 2 And that cost about 5 billion, right? 4 billion, right? Not 1 billion. So the scale that SSI has is much smaller, by the way, right?
Speaker 2 So their size of cluster will be, you know, maybe one-third or one-fourth of the size, right? So, now you're talking about 25 to 32K cluster, right? There,
Speaker 2 you still don't have that, right? No one is willing to rent you a 32K cluster today, no matter how much money you have, right?
Speaker 2 Even if you had more than a billion dollars. So now it makes the most sense to build your own cluster instead of renting it, or get a very close relationship, like OpenAI and Microsoft with CoreWeave, or OpenAI and Microsoft with Oracle slash Crusoe.
Speaker 2 The next step is Bitcoin, right? Um,
Speaker 2 so OpenAI has a data center in Texas, right? Or it's going to be their data center. It's like they're kind of contracted and all that.
Speaker 2 With CoreWeave, there is a 300-megawatt natural gas plant on site powering these crypto mining data centers from the company called Core Scientific. And so they're just converting that.
Speaker 2 There's a lot of conversion, but like the power is already there, the power infrastructure is already there.
Speaker 2 So it's really about converting it, getting it ready to be water-cooled, all that sort of stuff, and convert it to a 100,000 GB200 cluster.
Speaker 2 And they have a number of those going up across the country, but that's also like
Speaker 2 tapped out to some extent, because NVIDIA is doing the same thing in Plano, Texas, for a 32,000 GPU cluster that they're building.
Speaker 1 NVIDIA is doing that?
Speaker 2 Well, they're going through partners, right? Because this is the other interesting thing: the big tech companies can't do crazy shit like Elon did. Why?
Speaker 2 ESG.
Speaker 2 Oh, interesting. They can't just do crazy shit like, because this.
Speaker 1 Do you expect Microsoft and Google and whoever to like drop their net zero commitments as the scaling picture intensifies.
Speaker 2 Yeah, yeah. So
Speaker 2 Like, what xAI is doing, right, is not that polluting, you know, in the scheme of things, but you have 14 mobile generators and you're just burning natural gas on site, on these mobile generators that sit on trucks, right?
Speaker 2 And then you have power directly two miles down the road. There's no unequivocal way to say any of the power is green, because
Speaker 2 two miles down the road is a natural gas plant as well, right? There's no way to say this is green.
Speaker 2 You go to the CoreWeave thing, and a natural gas plant is literally on site from Core Scientific and all that, right? And then the data centers around it are horrendously inefficient, right?
Speaker 2 There's this metric called PUE, power usage effectiveness, which is basically how much power is brought in versus how much gets delivered to the chips, right?
Speaker 2 And like the hyperscalers, because they're so efficient or whatever, right?
Speaker 2 Their PUE is like 1.1 or lower, right? I.e., if you bring a gigawatt in, 900 megawatts or more gets delivered to chips, right? Not wasted on cooling and all these other things.
Speaker 2 This Core Scientific one is going to be like 1.5, 1.6. I.e., even if I have 300 megawatts of generation on site, I only deliver like 180 to 200 megawatts to the chips.
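A tiny sketch of the PUE arithmetic being described, using the figures from the conversation (about 1.1 for a hyperscaler, roughly 1.6 for the converted mining site):

```python
# PUE = total facility power / power delivered to the IT equipment (the chips).

def it_power_mw(facility_mw: float, pue: float) -> float:
    """Megawatts that actually reach the chips for a given facility draw and PUE."""
    return facility_mw / pue

print(f"hyperscaler       : {it_power_mw(1000, 1.1):.0f} MW of 1000 MW reaches chips")  # ~909 MW
print(f"mining conversion : {it_power_mw(300, 1.6):.0f} MW of 300 MW reaches chips")    # ~188 MW
```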
Speaker 1 Given how fast solar is getting cheaper, and also the fact that, you know, the reason solar is difficult elsewhere is that you've got to power the homes at night. Here, I guess it's theoretically possible to, you know, only run the clusters in the day or something?
Speaker 2 Absolutely not. That's not possible, because it's so expensive to have these GPUs.
Speaker 2 Yeah, so when you look at the power cost of a large cluster, it's trivial, to some extent, right? Like,
Speaker 2 you know, like the meme that like, oh, you know, you can't build a data center in Europe or East Asia because the power is expensive, that's not really relevant. What's the reason?
Speaker 2 Or power is so cheap in China and the U.S., that's why the only places you can build data centers. That's not really the real reason.
Speaker 2 It's the ability to generate new power for these activities is why it's really difficult. And the economic regulation around that.
Speaker 2 But the real thing is, if you look at the cost of ownership of a GPU, of an H100: let's just say you gave me, you know, a billion dollars and I already have a data center, I already have all this stuff.
Speaker 2 I'm paying regular rates for the data centers, I'm not paying through the nose or anything, paying regular rates for power, not paying through the nose. Power is sub 15% of the cost.
Speaker 2 And it's sub 10% of the cost, actually, right? The biggest, like 75 to 80% of the cost is just the servers, right?
Speaker 2 And this is on a like a multi-year, including debt financing, including cost of operation, all that, right?
Speaker 2 Like, when you do a TCO, total cost of ownership, it's like 80% is the GPUs, 10% is the data center, 10% is the power. Rough numbers, right? So it's kind of irrelevant how expensive the power is, right? You'd rather do what Taiwan does, right? Like, what did they do when there were droughts? They, like, forced people to not shower.
Speaker 2 When there was a power shortage in Taiwan, they basically rerouted power away from residential.
Speaker 2 And this will happen in a capitalistic society as well, most likely, because like
Speaker 2 fuck you, like, why are you not going to pay X dollars per kilowatt hour? Because to me, the marginal cost of power is irrelevant. Really, it's all about the GPU cost and the ability to get the power.
Speaker 2 I don't want to turn it off eight hours a day.
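A minimal sketch of why idling GPUs off-peak doesn't pencil out, using the rough 80/10/10 TCO split quoted above; the absolute dollar figure per GPU-hour is an assumption for illustration.

```python
# Why turning the cluster off at night doesn't make sense under an ~80/10/10 TCO split.

tco_per_gpu_hour = 1.70          # assumed all-in cost, $/GPU-hour
gpu_share, dc_share, power_share = 0.80, 0.10, 0.10   # rough split quoted above

power_cost = tco_per_gpu_hour * power_share            # the only part you avoid by idling
fixed_cost = tco_per_gpu_hour * (gpu_share + dc_share) # paid whether the GPU runs or not

idle_hours = 8
saved = power_cost * idle_hours
wasted = fixed_cost * idle_hours
print(f"idling 8h/day saves ${saved:.2f} of power but strands ${wasted:.2f} of sunk cost per GPU")
```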
Speaker 1 Maybe let's discuss what would happen if the training regime changes and if it doesn't change.
Speaker 1 So, like, you could imagine that the training regime becomes much more parallelizable, where it's like about coming up with some sort of like search or synthetic.
Speaker 1 Like, most of the compute for training is used to come up with synthetic data or do some kind of search, and that can happen across a wide area.
Speaker 1 In that world, how fast could we scale?
Speaker 1 Let's go through the numbers on like year after year. And then suppose it actually has to be,
Speaker 1 you would know more than me, but like suppose it has to be the current regime and like just explain what that would mean in terms of like how distributed that would have to be and then how plausible it is to get clusters of certain sizes over the next few years.
Speaker 2 I think it is not too difficult for Ilya's company to get a cluster of, like, 32K of Blackwell.
Speaker 1 Like 2025, 2026, 2022.
Speaker 2 2025, 2026, there's
Speaker 2 before I like talk about like the US, I think it's like important to note that there's like a gigawatt plus of data center capacity in Malaysia next year.
Speaker 2 Now, that's mostly ByteDance, but, like, you know, and power-wise, there's the humongous damming of the Nile in Ethiopia, and the country uses like one-third of the power that that dam generates.
Speaker 2 So there's like a ton of power there to like.
Speaker 1 How much power does that dam generate?
Speaker 2 Like, it's like over a gigawatt.
Speaker 2 And the country consumes like 400 megawatts or something trivial. And
Speaker 1 are people bidding for that power?
Speaker 2 I think people just don't think they can build a data center in fucking Ethiopia. Why not? I don't think the dam is filled yet, is it? I mean, they have to, like, the dam could generate that power.
Speaker 2
They just don't. Oh, God.
Right? Like, there's a little bit more equipment required, but that's like not too hard.
Speaker 2 Why don't they? Yeah.
Speaker 2 I think there's like
Speaker 2 true security risks, right? If you're China or if you're the US lab, like to build a fucking data center with all your IP in fucking Ethiopia. Like you want AGI to be in Ethiopia?
Speaker 2 Like you want it to be that accessible. Like people you can't even monitor like being the technicians in the fucking data center or whatever, right? Or like powering the data center, all these things.
Speaker 2 Like there's so many like, you know, things you could do to like, you could just destroy every GPU in a data center if you want, if you just like fuck with the grid, right?
Speaker 2 Like pretty, pretty like easily, I think.
Speaker 1 People talk a lot about the Middle East.
Speaker 2 There's a 100K GB200 cluster going up in the Middle East, right?
Speaker 2 And the US, like, there's clearly stuff the US is doing, right? Like, you know, G42 is the UAE data center company, cloud company. Their CEO is a Chinese national, or not a Chinese national, he's basically of Chinese allegiance. But OpenAI, I think, wanted to use the data center from them, but instead the US forced Microsoft, I feel like this is what happened, forced Microsoft to do a deal with them. So G42 has a 100K GPU cluster, but Microsoft is administering and operating it for security reasons, right?
Speaker 2 And there's like Omniva in like Kuwait, like the Kuwait super rich guy spending like $5 plus billion dollars on data centers, right?
Speaker 2 Like you just go down the list, like all these countries, Malaysia has, you know,
Speaker 2 you know, $10 plus billion dollars of like data center, you know, AI data center build outs over the next couple of years, right?
Speaker 2 Like, and you know, go to every country, it's like this stuff is happening, but on the grand scheme of things, the vast majority of the compute is being built in the US and then China and then like Malaysia, Middle East, and like rest of the world.
Speaker 2 And if you're in the, you know, going back to your point, right, like you have synthetic data, you have like this search stuff, you have like
Speaker 2 you have all these post-training techniques,
Speaker 2 you have all this, you know, all this ways to soak up flops, or you just figure out how to train across multiple data centers, which I think they have.
Speaker 2 At least Microsoft and OpenAI have figured it out. OpenAI is fine.
Speaker 1 What makes you think they've figured it out?
Speaker 2 Their actions.
Speaker 2 So
Speaker 2 Microsoft has signed deals north of $10 billion with fiber companies to connect their data centers together. There are some permits already filed to show people are digging
Speaker 2 between certain data centers. So we think with fairly high accuracy,
Speaker 2 we think that there's five data centers, massive, not just five data centers, sorry, five regions that they're connecting together, each of which comprises many data centers, right?
Speaker 1 What will be the total power usage of that?
Speaker 2 It depends on the timing, but easily north of a gigawatt, right?
Speaker 1 Which is like close to a million GPUs.
Speaker 2 Well, each GPU is getting more power, higher power consumption too, right?
Speaker 2 Like, you know, the rule of thumb is, an H100 is like 700 watts, but then total power per GPU all in is like 1,200, 1,300, 1,400 watts. But next-generation NVIDIA GPUs,
Speaker 2 it's 1,200 watts for the GPU, but then it actually ends up being like 2,000 watts all in, right? So there's a little bit of scaling of power per GPU, but you already have 100K clusters, right?
Speaker 2 OpenAI in Arizona, XAI in Memphis, and many others already building 100K clusters of H100s.
Speaker 2 You have multiple, at least five, I believe, GB200, 100K clusters being built by Microsoft slash OpenAI slash partners for them.
Speaker 2 And then potentially even more, 500K GB200s, right, is a gigawatt, right? And that's like online next year, right?
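The gigawatt figure follows directly from the all-in watts per GPU mentioned a moment ago; a quick check, treating those rule-of-thumb power numbers as the inputs:

```python
# Cluster power from per-GPU all-in draw (GPU plus host, networking, cooling overhead).

def cluster_power_gw(num_gpus: int, all_in_watts: float) -> float:
    return num_gpus * all_in_watts / 1e9

print(f"100K H100s  at ~1,400 W all-in: {cluster_power_gw(100_000, 1_400):.2f} GW")  # ~0.14 GW
print(f"500K GB200s at ~2,000 W all-in: {cluster_power_gw(500_000, 2_000):.2f} GW")  # ~1.00 GW
```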
Speaker 2 And like the year after that, if you aggregate all the data center sites and like how much power and you only look at net ads since 2022 instead of like the total capacity at each data center, then you're still like north of multi-gigawatt, right?
Speaker 2 And so they're spending $10-plus billion on these fiber deals with a few fiber companies, Lumen, Zayo, you know, a couple of other companies.
Speaker 2 And then they've got all these data centers that they're clearly building 100K clusters on, right?
Speaker 2 Like old crypto mining site with Core Weave in Texas, or like this Oracle Crusoe in Texas, and then like in Wisconsin and Arizona and a couple other places.
Speaker 2 There's a lot of data centers being built up
Speaker 2 and providers, right? QTS and Cooper and like, you know, you go down the list, there's like so many different providers and self-build, right? Data centers, I'm building myself.
Speaker 1 So, so
Speaker 1 let's just like give the number on like, okay, 2025, Elon's cluster is going to be the big, like, it doesn't matter who it is.
Speaker 2 So, so then there's a definition game, right? Like, Elon claims he has the largest cluster at 100k GPUs because they're all fully connected.
Speaker 1 Other than who it is, like, I just want to know, like,
Speaker 1 how many, like,
Speaker 1 I don't know what it's better to denominate in. 100,000 GPUs this year, right?
Speaker 1 For the biggest cluster. For the biggest cluster.
Speaker 2 Next year. Next year, 300 to 500,000,
Speaker 2 depending on whether it's one site or many, right? 300 to like 700,000, I think, is the upper bound of that. But anyways, like, you know, there's,
Speaker 2 it's about like when they tier it on, when they can connect them, when the fibers connect it together. Anyways,
Speaker 2
300 to, like, 500,000, let's say, but those GPUs are two to 3x faster, right? Versus the 100K cluster. So on an H100-equivalent basis, you're at a million chips next year.
But one cluster.
Speaker 2
By the end of the year, yes. No, no, no.
Well, so one cluster is like, like, but you know what I mean? The wishy-washy definition, right? Multi-site, right? Can you do multi-site?
Speaker 2 What's the efficiency loss when you do a multi-site? Is it possible at all? I truly believe so.
Speaker 2 Whether it's whether what's the efficiency loss is a question, right?
Speaker 1 Okay, would it be like 20% loss, 50% loss?
Speaker 2 Great question. This is where, like, you know, this is where you need like the secrets, right? Of like, and Anthropic's got similar plans with Amazon, and you go down the list, right?
Speaker 2 Like, people are going to be able to do that.
Speaker 1 And then the year after that.
Speaker 2 The year after that is where.
Speaker 1 This is 2026.
Speaker 2 2026, there is a single gigawatt site, and that's just part of the like multiple sites, right?
Speaker 1 For Microsoft, the Microsoft five gigawatt thing happens in 2022.
Speaker 2 One gigawatt one site in 2026, but then you have
Speaker 2 a number of others.
Speaker 2 You have five different locations, each with multiple, some with multiple sites, some with a single site. You're easily north of two, three gigawatts.
Speaker 2 And then the question is, can you start using the old chips with the new chips?
Speaker 2 And like, the scaling, I think, is like you're going to continue to see flop scaling like much faster than people expect. I think, as long as the money pours in, right?
Speaker 2 Like, that's the other thing is like, there's no fucking way you can pay for the scale of clusters that are being planned to be built next year for OpenAI unless they raise like 50 to 100 billion dollars,
Speaker 2
which I think they will raise that like end of this year, early next year. 50 to 100 billion? Yes.
Are you kidding me? No. Oh, my God.
This is like, you know, like Sam has a superpower, no?
Speaker 2 Like, it's like, it's like recruiting and like raising money. That's like what he's like a god at.
Speaker 1 Will chips themselves be a bottleneck to the scaling?
Speaker 2 Not in the near term.
Speaker 2 It's more again back to the concentration versus decentralization point.
Speaker 2 Because, like, the largest cluster is 100,000 GPUs, and NVIDIA has manufactured close to 6 million Hoppers, right, across last year and this year. So that's fucking tiny, right?
Speaker 1 But then, but why is Sam talking about a 7 trillion to build foundries and whatever?
Speaker 2 Well, this is this, you know, like...
Speaker 2 Draw the line, right? Like, log-log lines, number goes up, right? You know, if you do that, right?
Speaker 2 Like, you're going from 100K to 300 to 500K, where the equivalent is a million, so you just 10x'd year on year. Do that again, do that again, or more, right, if you increase the pace.
Speaker 1 What does "do that again" mean? So, like, 2026, like, the number of H100 equivalents?
Speaker 2 If you increase the globally produced flops by like 30x
Speaker 2 year on year, or 10x year on year, and the cluster size grows by, you know, three to five to seven x, and then you get multi-site going better and better and better, you can get to the point where multi-million chip clusters, i.e.
Speaker 2 they're connected, even if they're like regionally not connected right next to each other,
Speaker 2 are right there.
Speaker 1 And in terms of flops, it would be 1E, what?
Speaker 2 1E30. 28?
Speaker 2 1E30 is like very possible, like 28, 29.
Speaker 1 Wow. Yeah.
Speaker 1
And 1E30, you said, by 28, 29. Yeah.
And so that is literally six orders of magnitude.
Speaker 1 That's like 100,000 times more compute than GPT-4.
Speaker 2 The other thing to say is, the way you count flops on a training run is really stupid. You can't just do active parameters times tokens times six, right?
Speaker 2 Like that's that's really dumb because like the paradigm as you mentioned, right, is like, and you've had many great podcasts on this, like synthetic data and like RL stuff, post-training, like verifying data and like all these things, generating and throwing it away, like all sorts of stuff, search, like inference time compute, all these things like aren't counted.
Speaker 2 in the training flops. So you can't like say 1830 is a really stupid number to say because by then
Speaker 2 the actual flops of the pre-training may be X, but the data to generate
Speaker 2 for the pre-training may be
Speaker 2 way bigger, or the search inference time may be way, way bigger, right? Right.
Speaker 1 But also,
Speaker 1 because you're doing the sort of adversarial synthetic data where the thing you're weak is that you can make synthetic data for that, it might be way more sample efficient.
Speaker 1 So, like, even though the pre-training flops will be irrelevant, right?
Speaker 2 Like, I actually don't think pre-training flops will be 1E30. I think more reasonably, it'll be like the total summation of the flops that you deliver to the model
Speaker 2 across pre-training, post-training, synthetic data for that pre-training data and post-training data, as well as like some of the inference time compute efficiencies could be like, it's more like 1E30, right?
Speaker 1 So suppose you really do get to the world where it's worth investing.
Speaker 1 Okay, actually, if you're doing 1E30,
Speaker 1 is that like a trillion-dollar cluster, $100 billion cluster?
Speaker 2 I think it'll be like
Speaker 2 multi-hundred billion dollars.
Speaker 2 But then, I truly believe people are going to be able to use their prior generation clusters alongside their new generation clusters, and obviously, you know, with smaller batch sizes or whatever, right, or use them to generate and verify data, all these sorts of things. And then for 2030, um,
Speaker 1 Right now, I think, like, five percent of TSMC's N5 is NVIDIA, or, like, whatever percentage it is.
Speaker 1 By 2028, what percentage will it be?
Speaker 2 Again, this is a question of how scale-pilled you are, how much money will flow into this, and how you think progress works. Like, will models continue to get better, or does the line slope over? I believe it'll continue to skyrocket in terms of capability. And in that world, why wouldn't it? Of not five nanometer, but of two nanometer, A16, A14, these are the nodes that'll be in that 2028 time frame, used for AI, I could see like 60, 70, 80% of it, yeah, no problem, given the fabs that are currently planned and currently being built.
Speaker 1 Is that enough for the 1E30, or will...
Speaker 2 I think so, yeah.
Speaker 1 So then, like, the chip stuff doesn't make any sense,
Speaker 1 because, like, the chip stuff is about, we don't have enough compute?
Speaker 2 So, no. I think, like,
Speaker 2 the plans of TSMC on two nanometer and such are quite aggressive for a reason, right? Like, to be clear,
Speaker 2
Apple, which has been TSMC's largest customer, does not need as much two-nanometer capacity as TSMC is building. They will not need A16.
They will not need A14, right?
Speaker 2 Like you go down the list, it's like Apple doesn't need this shit, right?
Speaker 2 Although they did just hire one of Google's heads of system design for TPU, so they are going to make an AI accelerator, but that's besides the point. Like, Apple doesn't need this for their business, and they have been 25% or so of TSMC's business for a long time.
Speaker 2 And when you zone in on just the leading edge, they've been, like, more than half of the newest node, or 100% of the
Speaker 2 newest node, almost constantly. That paradigm goes away, right?
Speaker 2 If you believe in scaling and you believe in like that models get better, the new models will generate, you know, infinite, not infinite, but like amazing productivity gains for the world and so on and so forth.
Speaker 2 And if you believe in that world, then like TSMC needs to act accordingly and the amount of silicon that gets delivered needs to be there. So 25, 26, TSMC is definitely there.
Speaker 2 And then on a longer time scale, the
Speaker 2 industry can be ready for it. But it's going to be a constant game of like, you must convince them constantly that they must do this.
Speaker 2 It's not like a simple game of like, oh, you know, if people work silently, it's not going to happen, right?
Speaker 2 Like there has to, they have to see the demonstrated growth over and over and over and over again on across the industry.
Speaker 2 And
Speaker 1 Who needs to see it? Investors or companies, or who?
Speaker 2 More so like TSMC needs to see NVIDIA volumes continue to grow straight up, right? And, oh, and Google's volumes continue to grow straight up and, you know, go down the list.
Speaker 2 Chips in the near term, right? Next year, for example, are less of a constraint than data centers, right?
Speaker 2 And likewise for 2026.
Speaker 2 The question for 27, 28 is like, you know, always when you grow super rapidly, like people want to say,
Speaker 2 that's the one bottleneck, because that's the convenient thing to say. And in 2023, there was a convenient bottleneck, CoWoS, right?
Speaker 2 The picture's gotten much, much, like, cloudier. Not cloudier, but we can see that, like, you know, HBM is a limiter too. CoWoS is as well, CoWoS-L especially, right?
Speaker 2 Data centers, transformers, substations, like all the power generation, batteries, UPSs, CRAHs, water cooling stuff, all of this stuff is now a limitation next year and the year after.
Speaker 2 Fabs are in 26, 27, right? Like, you know, things will get like cloudy because like the moment you unlock one, oh, like only 10% higher, the next one is the thing.
Speaker 2
And only 20% higher, the next one is the thing. So today, data centers are like four to five percent of total U.S. power.
Speaker 2 When you think about that as a percentage of U.S. power, that's not that much, but U.S. power has been flat and now it's ramping, and then on the flip side, you're like, oh, all this coal's been curtailed.
Speaker 2
All these, like, oh, there are so many different things. So power is not that crazy on a national basis.
On a localized basis, it is, because it's about the delivery of it.
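A quick sanity check on the national-versus-local power point, with rounded, assumed figures:

```python
# Rough arithmetic on datacenter power as a share of US electricity.
# Numbers are rounded assumptions for illustration.

us_generation_twh = 4200          # assume ~4,200 TWh/year of US electricity generation
dc_share = 0.045                  # assume datacenters at ~4-5% of that today
dc_twh = us_generation_twh * dc_share

# A single large AI campus: 1 GW of IT plus cooling load running year-round.
campus_gw = 1.0
campus_twh = campus_gw * 8760 / 1000   # GW * hours/year -> TWh

print(f"US datacenters today: ~{dc_twh:.0f} TWh/yr")
print(f"one 1 GW campus:      ~{campus_twh:.1f} TWh/yr "
      f"(~{100*campus_twh/us_generation_twh:.2f}% of US generation)")
# Nationally a 1 GW campus is a rounding error, but locally that gigawatt has
# to be delivered through one set of substations, transformers, and lines,
# which is where the bottleneck shows up.
```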
Speaker 2 Same with the substation and transformer supply chains, right? These companies have operated in an environment where U.S. power is flat or even slightly down, right?
Speaker 2 And it's kind of been like that because of efficiency gains. So anyways, there has been a humongous
Speaker 2 weakening of the industry.
Speaker 2
But now all of a sudden, if you tell that industry, your business will triple next year if you can produce more. Oh, but I can only produce 50% more.
Okay, fine.
Speaker 2 Year after that, now we can produce 3x as much, right? You do that to the industry. The U.S.
Speaker 2 industrial base, as well as the Japanese, as well as like, you know, all across the world can get revitalized much faster than people realize, right? Like.
Speaker 2 I truly believe that people can innovate when given the like need to. It's one thing if it's like, this is a shitty industry where my margins are low and we're not growing really.
Speaker 2 And like, you know, blah, blah, blah. To all of a sudden, oh,
Speaker 2 I'm in power, and I'm like, this is the sexiest time to be alive. And we're going to do all these different plans and projects, and people have all this demand.
Speaker 2 And they're like begging me for another percent of efficiency advantage because that gives them another percent to deliver to the chips.
Speaker 2 Like, all these things, or 10%, or whatever it is, like, you see all these things happen, and innovation is unlocked. And,
Speaker 2 you know, you also bring in like AI tools, you bring in like all these things. Innovation will be unlocked.
Speaker 2 Production capacity can grow not overnight, but it will on six months, 18 months, three-year time scales. It will grow rapidly.
Speaker 2 And you see the revitalization of these industries.
Speaker 2 So, but I think like getting people to understand that, getting people to believe, because, you know, if we pivot to like, you know, I'm telling you that Sam's going to raise 50 to 100 billion dollars because he's telling people he's going to raise this much, right?
Speaker 2 Like literally having discussions with sovereigns and like
Speaker 2 Saudi Arabia and like the Canadian pension fund and like, not these specific people, but like the biggest investors in the world.
Speaker 2 And of course, Microsoft as well, but like he's literally having these discussions because they're going to drop their next model or they're going to show it off to people and raise that money.
Speaker 2 But, but if this is their plan, if these sites are already planned and like they're not there, right? So, how do you plan?
Speaker 1 How do you like plan a site without today?
Speaker 2 Microsoft is taking on immense credit risk, right? Like, they've signed these deals with all these companies to do this stuff, but Microsoft doesn't have, I mean, they could pay for it, right?
Speaker 2 Microsoft could pay for it on the current time scale, right?
Speaker 2 Oh, what's, you know, their capex going from $50 billion to $80 billion of direct capex, and then another $20 billion across, like, Oracle, CoreWeave, you know, and then, like, another $10 billion across their data center partners. They can afford that, right, through next year, right? But then
Speaker 2 that doesn't, you know... This is because Microsoft truly believes in OpenAI. They may have doubts, like, holy shit, we're taking on all this credit risk, and obviously they have to message Wall Street, all these things. But that's affordable for them, and because they believe they're a great partner to OpenAI, they'll take on all this credit risk. Now, obviously, OpenAI has to deliver. They have to make the next model, right?
Speaker 2
That's way better. And they also have to raise the money.
And I think they will, right? I truly believe, from like how amazing GPT-4o is, how small it is relative to GPT-4.
Speaker 2
The cost of it is so insanely cheap. It's much cheaper than the API prices lead you to believe.
And you're like, oh, what if you just make a big one?
Speaker 2 It's very clear to me what's going to happen on the next jump: that they can then raise this money, and they can raise this capital from the world.
Speaker 2 This is intense, Dylan. That's very intense.
Speaker 1 John,
Speaker 1 actually, if he's right, or I don't know, not him, but like in general,
Speaker 1 if the capabilities are there, the revenue is there.
Speaker 2 Revenue doesn't matter.
Speaker 2 Revenue matters.
Speaker 1 Is there any part of that picture that still seems wrong to you in terms of displacing so much of TSMC production, wafers and
Speaker 1 power and so forth? Does any part of that seem wrong to you?
Speaker 2
I can only speak to the semiconductor part, even though I'm not an expert. But I think the thing is, like, TSMC can do it.
Like, they'll do it. I just wonder,
Speaker 2
he's right in that, in a sense, that 24, 25, that's covered. Yeah.
But 26, 27, that's the point where you have to say,
Speaker 2 can the semiconductor industry and the rest of the industry be convinced that this is where the money is? And that means, is there money? Is there money by 24, 25?
Speaker 1 How much, how much revenue do you think the AI industry needs by 25 in order to keep scaling? Doesn't matter.
Speaker 2
Compared to smartphones. Compared to smartphones.
I know he says it doesn't matter. I'll get to that.
Speaker 2 You keep, I know.
Speaker 1 Hey, what are smartphones? Like, Apple's revenue is like $200-something billion. So, like.
Speaker 2 Yeah, it needs to be another smartphone-sized opportunity, right? Like, even the smartphone industry didn't drive this sort of growth. Like, it's kind of crazy, don't you think?
Speaker 2 So, so, today, so far,
Speaker 2 only the only thing I can really perceive, yeah, girlfriend, but like,
Speaker 2
but you know what I mean. It's not there.
I want a real one, Debbie.
Speaker 2 Um, so, so, so, like, a few things, right? The return on invested capital for all of the big tech firms is up since 2022. Yeah.
Speaker 2 Um, and therefore, it's clear as day that them investing in AI has been fruitful so far, right?
Speaker 2 For the big tech firms.
Speaker 2
Return on invested capital. Like financially, you look at metas, you look at Microsoft's, you look at Amazon's, you look at Google's.
The return on invested capital is up since 2022.
Speaker 2
So it's on AI in particular? No, just generally as a company. Now, obviously, there's other factors here.
Like, what is meta's ad efficiency? How much of that is AI, right? Super messy.
Speaker 2
That's a super messy. Super messy thing.
But here's the other thing. This is Pascal's wager, right? This is a matrix of like, do you believe in God? Yes or no?
Speaker 2 If you believe in God, yes or no, like hell or heaven, right? So if you believe in God and God's real and you go to heaven, that's great. That's fine, whatever.
Speaker 2 If you don't believe in God and God is real, then you're going to hell.
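Written out as a toy payoff table, with placeholder outcomes (only the asymmetry matters), the wager being described looks roughly like this:

```python
# Toy payoff matrix for the "Pascal's wager" of AI capex.
# Outcomes are arbitrary placeholders; only their asymmetry is the point.

payoffs = {
    # (invest_heavily, ai_scaling_is_real): outcome for the company
    (True,  True):  "win the next platform",
    (True,  False): "wasted some capex, survivable",
    (False, True):  "competitor wins, existential loss",
    (False, False): "saved some capex",
}

for (invest, real), outcome in payoffs.items():
    print(f"invest={invest!s:<5}  scaling real={real!s:<5}  ->  {outcome}")
# The downside of under-investing (existential) dwarfs the downside of
# over-investing (wasted capex), which is the asymmetry behind
# "the risk of underinvesting is worse than the risk of overinvesting."
```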
Speaker 1 This is the deep technical analysis you'll subscribe to semi-analysis for.
Speaker 2 This should be ripping. Can you imagine what happens to the stock if Satya starts talking about Pascal's wager?
Speaker 2 No, no, but this is psychologically what's happening, right? This is a, if I don't, and Satya said it on his earnings call, the risk of underinvesting is worse than the risk of overinvesting.
Speaker 2
He said this word for word. This is Pascal's wager.
I must believe I am AGI pilled because if I'm not and my competitor does it, I'm absolutely fucked. Okay, other than Zuck, who
Speaker 2 Sundar said this
Speaker 2
on the earnings call. So Zuck said it.
Sundar said it. Satya's actions on credit risk for Microsoft do it.
He's very good at PR and like messaging. So he hasn't like said it so openly, right?
Speaker 2
Sam believes it. Dario believes it.
You look across these tech Titans, they believe it. And then you look at the capital holders.
The UAE believes it. Saudi believes it.
How do you know the UAE
Speaker 2 believes it? Like, all these major companies and capital holders also believe it because they're putting their money here. But
Speaker 2
how can, like, it won't last. It can't last unless there's money coming in somewhere.
Correct, correct. But then the question is,
Speaker 2
the simple truth is, like, GPT-4 cost like $500 million to train. I agree.
And it has generated billions in recurring revenue.
Speaker 2 But in that meantime, OpenAI raised $10 billion or $13 billion and is building a, you know, a model that costs that much effectively, right? Right.
Speaker 2 And so then, obviously, they're not making money. So, what happens when they do it again? They release and show GPT-5
Speaker 2 with whatever capabilities that make everyone in the world like, holy fuck, obviously, the revenue takes time after you release the model to show up.
Speaker 2 You still have only a few billion dollars or $5 billion of revenue run rate.
Speaker 2 You just raise $50 to $100 billion because everyone sees this, like, holy fuck, this is going to generate tens of billions of revenue. But that tens of billions takes time to flow in, right?
Speaker 2 It's not an immediate click, but the time where Sam can convince, and not just Sam, but like people's decisions to spend the money are being made are then, right?
Speaker 2 Like, so therefore, like, you look at the data centers people are building, you don't have to spend most of the money to build the data center.
Speaker 2 Um, most of the money is the chips, but you're already committed to, like, oh, I'm just gonna have so much data center capacity by 2026 or 2027 that I'm never gonna need to build a data center again for, like, three, four, five years if AI is not real, right?
Speaker 2
That's like basically what all their actions are. Or, I can spend over $100 billion on chips in 26 and I can spend over $100 billion on chips in 27.
All right.
Speaker 2 So this is these are the actions people are doing.
Speaker 2 And the lag on revenue versus when you spend the money or raise the money, raise the money, spend the money, build, you know, there's like a lag on this. So this is like,
Speaker 2 you don't necessarily need the revenue in 2025 to support this. You don't need the revenue in 2026 to support this.
Speaker 2 You need the revenue in 25, 26 to support the $10 billion that OpenAI spent in 23, or Microsoft spent in 23 slash early 24, to build the cluster, which they then trained the model on in early 24, mid-24, which they then released at the end of 24, which then started generating revenue in 25, 26.
Speaker 2 I mean, like, the only thing I can say is that you look at a chart with three points on a graph, GPT-1, 2, 3, and then you're like, and even that graph is like, the investment you have to make in GPT-4 over GPT-3 is 100x.
Speaker 1 The investment you had to make in GPT-5 over GPT-4 is 100x. So revenue, like, currently the ROI could be positive, and this very well could be true.
Speaker 1 I think it will be true, but like,
Speaker 1 the revenue has to, like, increase exponentially, not just, yeah, you know.
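To make the timing argument concrete, here is a toy model of capex that steps up roughly an order of magnitude per generation while each generation's revenue arrives a year or two later. Every number is invented; the point is only the lag.

```python
# Toy model of the capex-vs-revenue timing argument. Invented numbers only.

# Capex per model generation, in $B, stepping up roughly an order of magnitude.
capex = {2023: 1, 2024: 10, 2025: 30, 2026: 100}

def revenue_from(spend_year: int, spend_b: float, year: int) -> float:
    """Assume the model ships ~a year after the spend, then revenue ramps for two years."""
    lag = year - (spend_year + 1)
    ramp = {0: 0.5, 1: 1.5, 2: 2.5}        # multiple of that generation's capex
    return spend_b * ramp.get(lag, 2.5 if lag > 2 else 0.0)

for year in range(2023, 2029):
    spent = capex.get(year, 0)
    earned = sum(revenue_from(y, c, year) for y, c in capex.items())
    print(f"{year}: capex ${spent:>4}B, revenue ${earned:>6.1f}B")
# In any single year the spend can dwarf the revenue even if every individual
# generation eventually pays for itself, which is the lag being described.
# Whether that works out depends on revenue actually compounding per generation.
```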
Speaker 2 Of course,
Speaker 2
I agree with you, but I also agree with Dylan that it can be achieved. ROI, like, semiconductors, TSMC does this.
It invests $16 billion. It expects an ROI on that, right? That's, I understand that.
Speaker 2
That's, that's fine. Lag, all that.
The thing, though, is that GPT-5 is not here. It's all dependent on GPT-5 being good.
If GPT-5 sucks, if GPT-5 looks like,
Speaker 2 it doesn't blow people's socks off, this is all void. What kind of socks are you wearing, bro?
Speaker 2 Show them
Speaker 2 AWS.
Speaker 2
GPT-5 is not here. It's late.
We don't know. I don't think it's late.
I think it's late.
Speaker 1 I want to zoom out and go back to the end of the decade picture again. So if you're, if this picture you're related.
Speaker 2 Oh, no, no, no, no, no, we've already lost John.
Speaker 2 We've already accepted GPT-5 would be good. Hello? But yeah, you got it, you know?
Speaker 2 Like, bro, like, life is so much more fun when you just like are delusionally like
Speaker 2 we're just ripping bongs, are we?
Speaker 2 When you feel the AGI, you feel your soul. This is why I don't live in San Francisco.
Speaker 2 I have tremendous belief in, like, the GPT-5
Speaker 2 era because of, like, what we've seen already. Um, I think the public signs all show that this is very much the case, right? Uh, what we see
Speaker 2 beyond that is more questionable, and I'm not sure, because I don't know what... I don't know, right? Like, I don't know.
Speaker 2 We'll see how, like, how much they progress. But if like things continue to improve, life continues to radically get reshaped for many people.
Speaker 2 It's also like every time you increment up the intelligence, the amount of usage of it grows hugely. Every time you increment the cost down of
Speaker 2 that amount of intelligence, the amount of usage increases massively. As you continue to push that curve out, that's what really
Speaker 2 matters, right? And
Speaker 2 it doesn't need to be today, it doesn't need to be a revenue versus like how much CapEx in any time in the next few years.
Speaker 2 It just needs to be, did that last humongous chunk of CapEx make sense for OpenAI or whoever the leader was? Or, and then how does that flow through? Right.
Speaker 2 Or were they able to convince enough people that they can raise this much money, right? Like, you think Elon's tapped out of his network with raising $6 billion? No.
Speaker 2 xAI is going to be able to raise $30 billion plus easily, right? I think so.
Speaker 2 You think Sam's tapped out? You think Anthropic's tapped out? Anthropic's barely even diluted the company, relatively, right? Like, you know, there's a lot of capital to be raised, just from, like,
Speaker 2 call it FOMO if you want, but during the dot-com bubble, the private industry blew through like $150 billion a year. We're nowhere close to that yet, right?
Speaker 2 We're not even close to the dot-com bubble, right? Why would this bubble not be bigger, right?
Speaker 2 And if you go back to the prior bubbles, PC bubble, semiconductor bubble, mechatronics bubble, throughout the US, each bubble is smaller. You know, you call it a bubble or not.
Speaker 2 Why wouldn't this one be bigger?
Speaker 1 How many billions of dollars a year is this bubble right now?
Speaker 2 For private capital? Yeah. It's like 55, 60 billion so far
Speaker 2 for this year.
Speaker 2 It can go much higher, right? And I think it will next year.
Speaker 2 Okay, so
Speaker 2 let me think of.
Speaker 2 Need another bong rip.
Speaker 2 You know, at least like finishing up and looping into the next question was like,
Speaker 2 you know, prior bubbles also didn't have the most profitable companies that humanity has ever created investing, and they were debt financed. This is not debt financed yet, right?
Speaker 2
So that's the last like little point on that one. Whereas the 90s bubble was like very debt financed.
This is like disastrous for those companies. Yeah, sure, but it was
Speaker 2 so many, so much was built, right?
Speaker 2 You know, you got to blow a bubble to get real stuff to be built.
Speaker 1 It is an interesting analogy where like
Speaker 1 with even though the dot-com bubble obviously burst and like a lot of companies went bankrupt, they in fact did lay out the infrastructure that enabled the web and everything.
Speaker 1 So you could imagine in AI, it's like some a lot of the foundation, a lot of companies, or whatever, like a bunch of companies will like go bankrupt, but like they will
Speaker 1 enable the singularity.
Speaker 2 During the 1990s, there was an immense amount of money invested in, like, MEMS and optical technologies, because everyone expected the fiber bubble to continue, right?
Speaker 2 That all ended in 2002, 2003, right? And that started in '94. There hasn't been a revitalization since, right?
Speaker 2 Like, look at Lumen, one of the companies that's doing the fiber buildout for Microsoft. The stock, like, fucking 4x'd last month or this month.
Speaker 2
And then how has it done from 2002 to 2023? Oh, no, horrible, horrible. But, like, we're going to rip, maybe.
You could be in that mode, maybe. AI could be like that for another two decades.
Speaker 1 You, sure, sure, possible.
Speaker 2 Or people can see a badass demo from GPT-5, slash release, raise a fuckload of money. It could even be like a Devin-like demo, right? Where it's like complete bullshit, but like, it's fine, right?
Speaker 2 Like, shit, I should.
Speaker 2 No, it's
Speaker 2 I don't really, I don't really care. Um,
Speaker 2 you know, the capital is gonna flow in right now. Whether it deflates or not is, like, an irrelevant concern in the near term, because you operate in a world where it is happening. And, you know, what is the Warren Buffett quote? I don't even know if it's Warren Buffett. You don't know who's swimming naked until the tide goes out? No, no, no, the one about, like, the market is delusional far longer than you can remain solvent, or something like that. That's not Buffett.
Speaker 2
That's not Buffett, yeah, yeah. That's uh, John Maynard Keynes.
Oh, shit, that's that old? Yeah,
Speaker 2 okay, um, okay, so Keynes said it, right? It's like you can be, yeah, so this is the world you're operating in. Like, it doesn't matter, right? Like, what, what exactly happens.
Speaker 2 There will be ebbs and flows, but like, that's the world you're operating in. Um,
Speaker 2 I reckon that if an AI bubble happens, if the AI bubble pops, each one of these CEOs will lose their jobs,
Speaker 2 sure. Or if you don't invest and you lose, it's, uh, the Pascalian wager, and that's much worse.
Speaker 2 Across decades, the largest company at the end of each decade, like the largest companies, that list changes a lot. And these companies are the most profitable companies ever.
Speaker 2 Are they going to let that list, are they going to let themselves like lose it? Or are they going to go for it?
Speaker 2 They have one shot, one opportunity, you know, to make themselves into, you know, the whole Eminem song, right?
Speaker 1 I want to hear like the story of how both of you started your businesses or your like the thing you're doing now.
Speaker 1 John, like, how like,
Speaker 1 how did it begin? What were you doing when you started the podcast?
Speaker 2 Oh my god, no way, please, please.
Speaker 1 Wait, no, is he joking?
Speaker 2
I guess if he doesn't want to, we'll talk about it later. Okay, sure.
I think, like, I used to, I mean, the story's famous. I've told it a million times.
Speaker 2
It's like Asianometry started off as a tourist channel. Yeah.
So I would go around kind of like, I was, I moved to Taiwan for work and then doing what I was
Speaker 2 working in cameras. And then, like, I told what was the other company you started?
Speaker 2 It tells too much about me. Oh, come on.
Speaker 2 I told
Speaker 2
I worked in cameras. And then basically, I went to Japan with my mom.
And mom was like, hey, you know,
Speaker 2
like, what are you doing in Taiwan? I don't know what you're doing. I was like, all right, mom, I will go back to Taiwan and I'll make stuff for you.
And I made videos.
Speaker 2 I would like go to the Chiang Kai-shek park and be like, hi, mom, this park was this, this. Eventually at some point, you run out of stuff.
Speaker 2 But then it's like a pretty smooth transition from that into, you know, Chinese history, Taiwanese history. And then people started calling me Chinanometry. I didn't like that, so I moved to other parts of Asia. And then, so what year did you start? Like, what year did people start watching your videos, let's say, like, a thousand views per video or something? Oh my gosh. I started the channel in 2017, and it wasn't until like 2018, 2019 that it actually... I labored on for the first three years with, like, no one watching. Like, I'd get like 200 views and I'd be like, oh, this is great.
Speaker 1 And then, were you, were the videos basically like the ones you have?
Speaker 1 By the way, so sorry, backing up for the audience who might not know, I imagine basically everybody knows Asianometry, but if you don't, like, the most popular channel about semiconductors, Asian business history, business history in general, um,
Speaker 1 uh,
Speaker 1 even like geopolitics history and so forth.
Speaker 1 Uh, and yeah, I mean, it's like, honestly, I've done like research for like different AI guests and different like whatever thing I'm trying to basically, I'm trying to understand like
Speaker 1 how does hardware work? How does AI work?
Speaker 2 It's like, this is like my... How does a zipper work? Did you watch that video? No, I haven't watched that one. It was like, I think it was a span of three videos. It was like the Russian oil industry in the 1980s and how it, like, funded everything, and then when it collapsed they were absolutely fucked. Yeah. And then the next video was like the zipper monopoly in Japan. The next video was about ASM. Not a monopoly. Yeah, a strong, strong holding in a mid-tier size. Yeah, it's like the luxury zipper makers. Asianometry is always just kind of, like, stuff I'm interested in, and I'm interested in a whole bunch of different stuff, and I like
Speaker 2
like, and the channel, for some reason, people started watching the stuff I do. And I still have no idea why.
To be honest, I still feel like it's, I still feel like a fraud.
Speaker 2 I sit in front of like Dylan and he's, I feel like a fraud, legit fraud, especially when he starts talking about 60,000 wafers and all that.
Speaker 2 I'm just like, I feel like I should be known, I should know this, but like, you know, in the end, it's,
Speaker 2 but, but that, you know, I just try my best to kind of bring interesting stories out.
Speaker 1 How do you make a video every single week? Because these are like two a week.
Speaker 2 You know how long he had a full-time job? Five years, six years, or sorry, a textile business. And yes, and a full-time job.
Speaker 2 Wait, no, full-time job, textile business, and Asianometry until like for a long, long time. Yeah, I literally just gave up the textile business this year.
Speaker 1
And like, how are you doing research and doing like making a video and like twice a week? I don't know. I like do these fucking, I'm like fucking talking.
This is all I do.
Speaker 1 And I like do these like once every two weeks.
Speaker 2
All right. See, see, the difference is, Dwarkesh, you go to SF Bay Area parties constantly.
And Dwarkesh is just... I mean, John is like locked in.
Speaker 2 He's like locked in 24/7.
Speaker 1 He's got like the TSMC work ethic and I've got like the Intel work ethic.
Speaker 2 I don't, I got the Huawei ethic. If I do not finish this video, my family is, it will be, will be pillaged.
Speaker 2 He actually gets really stressed about it, I think, like not doing something like on his schedule. Yeah.
Speaker 2 It is very much, like, I do two videos per week. I write them both simultaneously.
Speaker 1 And how are you scouting out future topics you want to do research?
Speaker 1 These are just like what, you know, you just like pick up random articles, books, whatever, and then you just, if you find it interesting, you make a video about it.
Speaker 2 Sometimes what I'll do is I'll Google a country and I'll Google an industry and I'll Google
Speaker 2
what a country is exporting now and what it used to export. And I compare that and I say, that's my video.
Or I'll be like, or but then sometimes also just as simple as like,
Speaker 2 I should do a video about YKK.
Speaker 2
And then it's also just, but then it's also just as simple as... Super is nice.
I should do a video about it. I do.
I do. It literally is.
Speaker 1 Do you like keep a list of like,
Speaker 1 here's the next one. Here's the one after that.
Speaker 2 I have a long list of like ideas. Sometimes it's as vague as like
Speaker 2 Japanese whiskey. No idea what Japanese whiskey is about.
Speaker 1 I heard about it before.
Speaker 2 I watched the movie. And then so I was just like, okay, I should do a video about that.
Speaker 2 And then eventually, you know, you get to a, you get to.
Speaker 1 How many research topics do you have in the back burner, basically? Like you're like, I'm kind of reading about it constantly. And then like in a month or so, I'll make a video about it.
Speaker 2
I just finished a video about how IBM lost the PC. Yeah.
So right now I'm de-stressing about that, but then I'll kind of move right on. Some videos do kind of lead into others.
Speaker 2
Like right now, this one is about the IBM PC, how IBM lost the PC. Next is how Compaq collapsed, how the wave destroyed Compaq.
So technically,
Speaker 2
I'll do that. At the same time, I'm dual lining a video about qubits.
I'm dual lining a video about
Speaker 2 directed self-assembly for semiconductor manufacturing, which I'll read a lot of Dylan's work for. But
Speaker 2 a lot of that is kind of like, it's just, it's in the back of my head and I'm like producing it as I, as I go.
Speaker 2 Dylan, how do you work?
Speaker 1 How does one go from Reddit shit poster to like running like a semiconductor research and consulting firm? Yes.
Speaker 1 Let's start with the shit posting.
Speaker 2
It's a long line, right? Like, so, immigrant parents, grew up in rural Georgia. So when I was seven, I begged for an Xbox.
And when I was eight, I got it, a 360, right?
Speaker 2 They had a manufacturing defect called the Red Ring of Death.
Speaker 2
There are a variety of fixes. I tried them, like putting a wet towel around the Xbox, something called the penny trick.
Those all didn't work. My Xbox still didn't work.
Speaker 2
My cousin was coming the next weekend and like, you know, he's like two years older than me. I look up to him.
He's like in between my brother and I, but I'm like, oh, no, no, we're friends.
Speaker 2 You know, you don't like my brother as much as you like me. My brother's more like jockey type, so it didn't matter.
Speaker 2
So like he didn't really care that I broke, that the Xbox is broken. He's like, you better fix it though, right? Otherwise parents will be pissed.
So I figure out how to fix it online.
Speaker 2 It ends up, you know, I tried a variety of fixes, and ended up shorting the temperature sensor.
Speaker 2 And that worked for long enough until Microsoft did the recall. Right.
Speaker 2
But in that, you know, I stayed, I learned how to do it out of necessity on the forms. I was a nerdy kid, so I like games, but whatever.
But then, like, there was no other outlet once.
Speaker 2 I was like, holy shit, this is Pandora's box. Like, what just got opened up? So then I just shit posted on the forums constantly, right?
Speaker 2 And, you know, for many, many years. And then I ended up like moderating all sorts of Reddits when I was like a tween, teenager.
Speaker 2 And then like, you know, as soon as I started making money, you know, you know, grew up in a family business, but didn't get paid for working, right? Of course, like yourself, right?
Speaker 2 But like, as soon as I started making money at like, and like I got my internship and like internships, I was like 18, 19, right? I started making money. I started investing in semiconductors, right?
Speaker 2 Like I was like, of course, this is shit I like, right?
Speaker 2 You know, everything from like,
Speaker 2 and by the way, like the whole way through, like as technology progressed, especially mobile, right? It goes from like very shitty chips and phones to like very advanced.
Speaker 2 Every generation, they'd add something, and I'd read every comment, I'd read every technical post about it, and also all the history around that technology, and then, you know, who's in the supply chain, and it just kept building and building and building. Went to college, did data-sciencey type stuff, went to work on, like, hurricane, earthquake, wildfire simulation and stuff for a financial company. But during college, I was still, like, I wasn't shit posting on the internet as much, I was still posting some, but I was, like, following the stocks and all these sorts of things, the supply chain, all the way from, like, the tool and equipment companies. And the reason I like those is because, like, oh, this technology, oh, it's made by them.
Speaker 2 You know, you kind of.
Speaker 1 Do you have like friends in person who were into this shit? Or was it just
Speaker 2 I made friends on the internet? Right.
Speaker 2 Oh, that's dangerous.
Speaker 2 I've only ever had like literally one bad experience, and that was just because he's drugged out, right?
Speaker 1 Like one bad experience online or like meeting someone from the internet in person.
Speaker 2
Everyone else has been genuine. Like you, you have enough filtering before that point.
You're like, you know, even if they're like hyper mega, like autistic, it's cool, right? Like, I am too, right?
Speaker 2 You know, no, I'm just kidding. But, like, you know, you go through, like, the,
Speaker 2 you know, the layers and you look at the economic angle, you look at the technical angle, you read a bunch of books just out of like, you know, you can just buy engineering textbooks, right?
Speaker 2 And read them, right? Like, what's, what's, what's stopping you, right? And if you bang your head against the wall, you learn it, right?
Speaker 1 And then why you were doing this, was there like, did you expect to work on this at some point or was it just like pure interest?
Speaker 2 No, it was like, it was like obsessive hobby of many years and it pivoted all around, right?
Speaker 2 Like at some point, I really liked gaming, and then I got moved into, like, I really like phones, and like, rooting them, and like, underclocking them, and the chips there, and like screens and cameras, and then back to like gaming, and then to like data center stuff, like, cause that was like where the most advanced stuff was happening.
Speaker 2 So, it's like, I liked all sorts of like telecom stuff for a little bit, like, it was like, it like bounced all around, but generally in like computing hardware, right?
Speaker 2 Um, and I did data science, you know, you could, I, I
Speaker 2 said I did AI when I interviewed, but like, you know, but it was like bullshit, multivariable regression, whatever, right?
Speaker 2 It was simulations of hurricanes, earthquakes, wildfire for like financial reasons, right? Like, anyways,
Speaker 2 you move, I moved up to like,
Speaker 2 you know, I was still, you know, I worked, I had a job for three years after college, and I was posting and like, whatever. I had a blog, anonymous blog for a long time.
Speaker 2 I'd even made like some YouTube videos and stuff. Most of that stuff is scrubbed off the internet, including internet archive because I've asked them to remove it.
Speaker 2 But like,
Speaker 2 You know, in 2020, I, like, quit my job and, like, started shit posting more seriously on the internet. I
Speaker 2 moved out of my apartment and started traveling through the U.S.
Speaker 2 and I went to all the national parks like in my truck slash like tent slash, you know, also stayed in hotels and motels like three, four days a week.
Speaker 2 But I'd like, I started posting more frequently on the internet.
Speaker 2 And I'd already had like some small consulting arrangements in the past, but it really started to pick up in mid-2020, like consulting arrangements from the internet from my persona.
Speaker 2 Like what kinds of people?
Speaker 1 Investors, hardware companies?
Speaker 2 There were, like... it was, like, people who weren't in hardware that wanted to know about hardware. It would be, like, some investors, right? A couple VCs did it, but some public market folks.
Speaker 2 You know, there were times where, like, companies would ask, like, me, about, like, three layers up in the stack, because they saw me write some random posts, and, like, hey, can we... and blah blah blah, right? So there's all sorts of, like, random... it was really small money.
Speaker 2 And then in 2020, like it really picked up and I just like, I was like, why don't I just arbitrarily make the price way higher? And it worked.
Speaker 2 And then I started posting. I made a newsletter as well.
Speaker 2 And I kept posting.
Speaker 2
Quality kept getting better, right? Because people read it. They're like, this is fucking retarded.
Like, you know, here's what's actually right.
Speaker 2 Or, you know, like, you know, over more than a decade, right?
Speaker 2 And then in 2021, towards the end, I made a paid post because someone didn't pay and like, you know, for a report or whatever, right? Ended up, that ended up doing, like, I went to sleep that night.
Speaker 2 It was about photoresist and, like, the developments in that industry, right? Which is the stuff you put on top of the wafer before you put it in the litho tool, the lithography tool.
Speaker 2 Did great, right? Like, I woke up the next day and I had like 40 paid subscriptions. I was like, what?
Speaker 2 Okay, let's keep going, right? And let's post more paid, paid, sort of like partially free, partially paid.
Speaker 2 Did like all sorts of stuff on like advanced packaging and chips and data center stuff and like AI chips, like all sorts of stuff, right? That I like was interested in and thought was interesting.
Speaker 2 And like, I always bridged it economically, because I've read all the companies' earnings since I was, like, 18. I'm 28 now, right?
Speaker 2 You know, all the way through to like, you know, all the technical stuff that I could.
Speaker 2 2022, I also started to just go to every conference I could, right?
Speaker 2 So I go to, like, 40 conferences a year. Not, like, trade show type conferences, but, like, technical conferences, like, uh, chip architecture, photoresist,
Speaker 2
you know, AI, NeurIPS, right? Like, you know, ICML. How many conferences do you go to a year? Like 40.
So you like live at conferences. Yes.
Yeah.
Speaker 2 I mean, I've been a digital nomad since 2020, and I've basically stopped and I moved to SF now, right? But like kind of, kind of, not really.
Speaker 2
You can't say that. The government, government, the California.
No, no, I'm not even, I don't live at SF, come on, but I basically do now, right? California Internal Revenue Service. No,
Speaker 2 do not joke about this, guys.
Speaker 2 Like, do not seriously joke about this. They're going to send you a clip of this podcast, be like, 40%, please.
Speaker 2 I am in San Francisco, like sub-four months a year, contiguously, and you know,
Speaker 1 exactly 100 and whatever.
Speaker 2 Exactly 179 days. Let's go, right? Like, you know,
Speaker 2 over the full course of the year.
Speaker 2 But no, like, you know, go to every conference, make connections at all these very technical things, like the International Electron Devices Meeting, or lithography and advanced patterning, or, like, very large-scale integration, like, you know,
Speaker 2
all the, you know, circuits conference. You just go every single layer of the stack.
It's so siloed. There's tens of millions of people that work in this industry.
Speaker 2 But if you go to every single one, you try and understand the presentations, you do the required reading, you look at the economics of it, you like, are just curious and want to learn, you like, you can start to build up like more and more and the content got better and like, you know, what I followed got better.
Speaker 2 And then like started hiring people in 2020, in early 2022 as well.
Speaker 2 Or might have been, yeah, yeah, like mid, mid-2022, started hiring, got people in different layers of the stack, but now today, like you fast forward now today, right? Like
Speaker 2 almost every hyperscaler is a customer, not for the newsletter, but for like data we sell, right?
Speaker 2 You know, most, many major semiconductor companies, many investors, right? Like all these people are like customers of the data and stuff we sell.
Speaker 2 And the company has people all the way from, like, ex-Cymer, ex-ASML, all the way to, like, ex-Microsoft and, like, an AI company, right?
Speaker 2 Like, you know, and then through the stratification, you know, now there's 14 people at the company, all across the US, Japan, Taiwan, Singapore, France,
Speaker 2 the US, of course, right? Like, you know, all over the world, and across many ranges of, like... and hedge funds as well, right? Ex-hedge funds as well, right?
Speaker 2
So you kind of have, like, this amalgamation of, like, you know, tech and finance expertise. And we just do the best work there, I think.
Are you talking about a monstrosity? Like
Speaker 2 an unholy concoction.
Speaker 2 So, so like, and we sell, we sell, you know, we have data analysis, consulting, et cetera, for anyone who like really wants to like get deeper into this, right?
Speaker 2 Like we can talk about like, oh, people are building big data centers, but like, how many chips is being made in every quarter of what kind for each company?
Speaker 2 What are the subcomponents of these chips, what are the subcomponents of the servers, right? We try and track all of that.
Speaker 2 Follow every server manufacturer, every component manufacturer, every cable manufacturer, just like all the way down the stack tool manufacturer and like know how much is being sold where and how and where things are and project out, right?
Speaker 2 All the way out to like, hey, where is every single data center?
Speaker 2 What is the pace that it's being built out?
Speaker 2 This is like the sort of data we want to have and sell. And
Speaker 2 the validation is that hyperscalers purchase it and they like it a lot.
Speaker 2
And, like, AI companies do, and, like, semiconductor companies do. So I think that's sort of how it got to where it is: just, like, try and do the best.
Right. And try and be the best.
Speaker 1 If you were an entrepreneur who's like, I want to get involved in the hardware chain somewhere, like what is like, what is if you, if you could start a business today somewhere in the stack,
Speaker 1 what would you pick?
Speaker 2 John, tell them about your textile business.
Speaker 2 I think I'd work in memory.
Speaker 2 Something in memory. Because I think, like, if this concept is there, like, you have to hold immense amounts of memory, immense amounts of memory.
Speaker 2
And I think memory already is tapped out, like, technologically. HBM exists because of limitations in DRAM.
I said it correctly. I think, like, it's
Speaker 2 fundamental, and we've forgotten it because it's a commodity, but we shouldn't. I think breaking memory could change the world in that sense.
Speaker 2 I think the context here is that Moore's law was predicted in 1965. Intel was founded in '68 and released their first memory chips in '69 and '70.
Speaker 2 And so Moore's law, a lot of it was about memory, and the memory industry followed Moore's law up until 2012, where it stopped, right? And it's been very incremental gains since then, whereas logic has continued. And, like, people are like, oh, it's dying, it's slowing down, but there's still a little bit coming, right? You know, still more than a 10, 15% a year CAGR of growth in density slash cost improvement. Memory has, like, literally been, since 2012, like, really bad.
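A quick compounding illustration of that memory-versus-logic divergence; the rates below are rough assumptions in the spirit of the "10, 15% a year" figure, not measured data.

```python
# Compounding illustration: logic density/cost improving ~12%/yr since 2012
# versus memory improving only ~3%/yr. Both rates are rough assumptions.

years = 2024 - 2012

logic_cagr = 0.12
memory_cagr = 0.03

logic_gain = (1 + logic_cagr) ** years
memory_gain = (1 + memory_cagr) ** years

print(f"logic improvement over {years} years:  ~{logic_gain:.1f}x")
print(f"memory improvement over {years} years: ~{memory_gain:.1f}x")
# A steady 12% CAGR compounds to roughly 3.9x over 12 years, while ~3%
# compounds to only ~1.4x. That widening gap is what makes HBM and
# memory-accelerator integration such a pressure point.
```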
Speaker 2 So, and when you think about the cost of memory, you know, it's been, it's been considered a commodity, but memory integration with accelerators, like this is like something that I don't know if you can be an entrepreneur here, though.
Speaker 2 That's the real challenge, because you have to manufacture at some really absurdly large scale, or design something custom in an industry that does not allow you to make custom memory devices,
Speaker 2 or use materials that don't work that way. So there's a lot of work there. So I don't necessarily agree with you, but I do agree.
Speaker 2 It's like one of the most important things for people to invest in.
Speaker 2 You know, I think there's, it's, it's really about where is your, where are you good at and where can you vibe and where can you like enjoy your work and be productive in society, right?
Speaker 2 Because there are a thousand different layers of the abstraction stack. Where can you make it more efficient?
Speaker 2 Where can you use, utilize AI to build better and make everything more efficient in the world and produce more bounty and like iterate feedback loop, right?
Speaker 2 And there is more opportunity today than at any other time in human history, in my view, right? And so, like, just go out there and try, right? Like, what engages you?
Speaker 2 Because if you're interested in it, you'll work harder, right? If you like, have a passion for copper wires. I promise to God, if you make the best copper wires, you'll make a shitload of money.
Speaker 2 And if you have a passion for like B2B SAS, I promise to God, you'll make fuckloads of money, right? I don't, I don't like B2B SAS, but whatever, right? It's like, whatever.
Speaker 2 You know, whatever you have a passion for, like, just work your ass off, try and innovate, bring AI into it
Speaker 2 and let it, you try and use AI yourself to like make yourself more efficient and make everything more efficient. And I promise you will like be successful, right?
Speaker 2 I think that's really the view is not necessarily that there's one specific spot because every layer of the supply chain has, you go, you go to the conference there, you go to talk to the experts there.
Speaker 2
It's like, dude, this is the stuff that's breaking and we could innovate in this way. Or like these fiber extraction layers, we could innovate this way.
Yeah, do it.
Speaker 2 There's so many layers where we're not at the Pareto optimum, right? Like, there's so much more to go in terms of innovation and efficiency.
Speaker 1 All right, I think that's a great place to close. Um, Dylan, uh,
Speaker 1 John, thank you so much for coming on the podcast. I'll just give people another reminder: Dylan Patel, semianalysis.com.
Speaker 1 That's where you can find the technical breakdowns that we've been discussing today, Asianometry, uh, YouTube channel.
Speaker 1 Um, everybody will already be aware of Asianometry, but, anyways, um, thanks so much for doing this. It was a lot of fun.
Speaker 2 Thank you. Yeah, thank you.