Chips, Neoclouds, and the Quest for AI Dominance with SemiAnalysis Founder and CEO Dylan Patel

What would it take to challenge Nvidia? SemiAnalysis Founder and CEO Dylan Patel joins Sarah Guo to answer this and other topical questions around the current state of AI infrastructure. Together, they explore why Dylan loves Android products, predictions around OpenAI’s open source model, and what the landscape of neoclouds looks like. They also discuss Dylan’s thoughts on bottlenecks for expanding AI infrastructure and exporting American AI technologies. Plus, we find out what question Dylan would ask Mark Zuckerberg.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @dylan522p | @SemiAnalysis_

Chapters:

00:00 – Dylan Patel Introduction

00:31 – Dylan’s Love for Android Products

02:10 – Predictions About OpenAI’s Open Source Model

06:50 – Implications of an American Open Source Model for the Application Ecosystem

10:48 – Evolution of Neoclouds

17:26 – What It Would Take to Challenge Nvidia

27:43 – What Would an Nvidia Challenger Look Like?

28:18 – Understanding Operational and Power Constraints for Data Centers

34:48 – Dylan’s View on the American Stack

43:01 – What Dylan Would Ask Mark Zuckerberg

44:22 – Poker and AI Entrepreneurship

46:51 – Conclusion


Runtime: 47m

Transcript

Speaker 2 Hi listeners, welcome back to No Priors. Today I'm here with Dylan Patel, the chief analyst at SemiAnalysis, a leading source for anyone interested in chips and AI infrastructure.

Speaker 2 We talk about open source models, the bottlenecks to building a data center the size of Manhattan, geopolitics, and poker as a tell for entrepreneurship. Welcome, Dylan.

Speaker 2 Dylan, thank you so much for being here.

Speaker 1 Thank you for having me.

Speaker 2 I've been really looking forward to this conversation. You're such a deep thinker about this space.
And then also, it's very odd. You clearly have the Samsung watch.

Speaker 1 Yeah, I got the phone. You got the blue.
You got the laptop. The fold, yeah, yeah.
Tell me more.

Speaker 1 So part of the origin story is that I was moderating forums when I was a child, and my dad's first Android phone was the Droid, right? Okay. And for some reason, I was obsessed with like...

Speaker 1 messing with it, like rooting it, underclocking it, improving the battery life, all these things, because when we were on a road trip, there was nothing to do besides mess around on his phone.

Speaker 1 So I posted so much about Android that I became a moderator of r/Android on Reddit, and many other subreddits related to hardware and NVIDIA and Intel and all this stuff.

Speaker 1 But because of that, I've just always had Android.

Speaker 1 Now, I've had work iPhones before, but I just really love Android. And it's like, if you're going to be into technology, I'm not someone who pushes it on people, but get the best stuff.

Speaker 1 So I have the Samsung Ultra watch, which I think looks cool, and the foldy phone, right? It's fun. It's obviously different and weird.
No, no iMessage is attractive. What does it dominate at?

Speaker 1 What is it better at, besides the openness, the hackability? I don't even hack that much stuff anymore, right? It's like, what do you use your phone for? I think the main thing is you can have Slack and an email up on two different parts of your phone. That's probably the main thing. Or you can actually use a spreadsheet on a folding phone; you cannot use a spreadsheet on a regular phone. Okay, and that's not even an Android thing. Apple's folding phone next year will be able to do that just fine, and I'll have no argument then. Yeah, but I just like it, you know. People have their preferences. People are creatures of habit.

Speaker 2 You got to look at the GPU purchasing forecast on a sheet on your phone.

Speaker 1 Yes, I do. I do.
No. It's like someone's telling you numbers.
You're like, wait, this is like slightly different than my number, right?

Speaker 2 Okay, so we have a week of big rumored announcements coming up. Tell me your reaction to the OpenAI open source model.

Speaker 1 In theory, it's going to be amazing, right? Like, I assume this is releasing after it's released? Yes? Okay, so: the open source model is amazing, guys. Like,

Speaker 1 I think the world is going to be really shocked and excited. It's the first time America's had the best open source model in six months, nine months, a year.

Speaker 1 Llama 3.1 405B was the last time we had the best model. And then Mistral took over for a little bit, if I recall correctly.

Speaker 1 And then the Chinese labs have been dominating for the last like six, nine months. Right.
So it'll be interesting. It'll also be funny because like.

Speaker 1 The open source model probably won't be the best for just regular chat, because it's more reasoning focused and all these things, but it'll be really good at code, and so I'm excited for that. Yeah, and tool use, although that's going to be confusing: how do you use the tools if you don't have access to OpenAI's tool use stack, even though the model is trained to do so? That'll be interesting for people to figure out. I think the last thing is the way they're rolling it out is really interesting. They accidentally leaked all the weights, but no one in the open source community has figured out how to actually run inference on it, because there's just some weird stuff in the model architecture, like the 4-bit quantization and the biases and all this other stuff.

Speaker 1 But what's interesting is other companies drop the model weights and say, go make your own inference implementation.

Speaker 1 But OpenAI is actually dropping the model weights along with all these custom kernels for people to use in their inference implementations, so everyone has a very optimized inference stack day one.

Speaker 2 And they work with partners on it too.

Speaker 1 Yeah, working with partners on this.

Speaker 1 But this is very interesting, because when DeepSeek drops, it's like, well, Together and Fireworks, they're like, yeah, we're the best at inference because we have all these people who are really good at low-level coding, whether it be Fireworks with all their former PyTorch Meta people, or Together with, you know, Tri Dao and Dan Fu and all these super cracked kernel people. They have higher performance, right?

Speaker 1 But in this case, OpenAI is releasing a lot of this stuff. So it's interesting for the inference providers too.
Like, how do they differentiate now?

Speaker 2 Yeah. I mean, my premise on this is in the end, a lot of the model optimization performance layer is open source and it's a commodity.

Speaker 2 And it will end up being like a fight at the infrastructure level, actually.

Speaker 2 And so, you know, all of these inference providers, like as you mentioned, you know, fireworks and together and base 10 and such, they compete on both dimensions.

Speaker 2 And the question is, what's going to matter in the long term?

Speaker 1 Why would these model-level software optimizations all be open? They haven't been open so far, and the advancements are so fast, right?

Speaker 2 Well, I think a bunch of them have been partially open. And I think OpenAI is also pushing for them to be open as well, right?

Speaker 2 And so I think there's a lot of force in the ecosystem to open source from both like the NVIDIA level up and from the model providers down.

Speaker 1 Agree.

Speaker 2 And so I think today these providers all fight on that dimension. Yeah.
And they also fight on the infrastructure dimension.

Speaker 2 And I think infrastructure is going to end up being a bigger differentiator.

Speaker 1 That makes sense.

Speaker 2 You can't open source your actual infrastructure, right?

Speaker 1 You just have to have the network and you have to run it. Right.
Yeah, yeah. That makes a lot of sense.
Although, like, I see today the inference providers have such a wide variance, right?

Speaker 1 Like the ones you mentioned are on the leading edge. Especially Together and Fireworks, I think, are on the leading edge with their own custom stacks, all the way down to, like, there are a lot of people who just take the out-of-the-box open source software.

Speaker 2 Yeah, I think there's no market for that.

Speaker 1 But those guys have just, yeah, I agree, there's no market. It's commoditized. They have really, really way worse margins than the people who are very optimized.

Speaker 1 And you see NVIDIA trying to open source all this stuff around Dynamo, and OpenAI and all these other people are trying to open source stuff, but the level of optimization is also really, really large: caching between turns, caching tool use calls, and all these other things.

Speaker 1 And it's not just a single server problem. Like, the DeepSeek implementation of inference is like 160 GPUs or something like that.
Like that's over $10 million of hardware.

Speaker 1 And then that's just one replica. And then you have a lot of replicas and you share the caching servers between them.
So it seems like,

Speaker 1 Just the orchestration of that, but also the infrastructure of that. It's a very large amount of infrastructure.
I don't know.

Speaker 1 That's an interesting thought, that the optimization layer would be completely commoditized.
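To make the "caching between turns" idea Dylan describes concrete, here is a minimal illustrative sketch of multi-turn prefix caching in Python. The class and method names are invented for illustration; production stacks like vLLM, SGLang, or DeepSeek's inference system cache paged KV blocks and share them across GPUs rather than hashing whole prefixes.

```python
import hashlib

class PrefixKVCache:
    """Toy prefix cache: if a new request starts with a prefix we have
    already processed, reuse that KV state instead of recomputing it."""

    def __init__(self):
        self._store = {}  # prefix hash -> opaque KV state

    def _key(self, token_ids):
        return hashlib.sha256(repr(tuple(token_ids)).encode()).hexdigest()

    def longest_cached_prefix(self, token_ids):
        # Walk back from the full sequence to find the longest cached prefix.
        for end in range(len(token_ids), 0, -1):
            state = self._store.get(self._key(token_ids[:end]))
            if state is not None:
                return end, state  # only tokens after `end` need prefill
        return 0, None

    def put(self, token_ids, kv_state):
        self._store[self._key(token_ids)] = kv_state
```

In a multi-turn chat, or an agent making repeated tool calls, every turn resends the growing conversation, so a cache hit on the shared prefix skips most of the prefill compute; the hard part, as Dylan notes, is orchestrating this across many replicas and shared caching servers.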

Speaker 2 Well, I think that there's optimization at the single node level, and then there's like the system software where you can like orchestrate this.

Speaker 2 And owning the abstractions for it, having people and more sophisticated teams use your tools to do that optimization, that is a very ugly distributed systems problem.

Speaker 2 I think that will matter.

Speaker 1 Okay. So I could agree with that. Single node is not necessarily, yeah, I agree.

Speaker 2 Let's move out and a layer down. Like, what does having access to an American open source model, or just more and more powerful open source AI models, mean for the application ecosystem?

Speaker 1 I mean, I know like a lot of people and some enterprises are really iffy about like using like the best open source model. They're like worried.
It's like there's nothing wrong with them today.

Speaker 1 There's nothing in them today, right? You know, there's the worry that one day there might be, and how do you even

Speaker 1 check? I mean, you don't. Like, you can just vibe it out.
Like they're like competing with each other to just release as fast as possible, right?

Speaker 1 Like DeepSeek and Moonshot and all these other, you know, Alibaba, et cetera. Like they're competing to release as fast as they can with each other.

Speaker 1 The Alibaba teams are in Singapore. Like, I don't think that they're putting Trojan horses in these models. Right.

Speaker 1 And like there's some interesting papers that Anthropic did on like, you know, trying to embed some stuff in models. It ended up like being detectable pretty easily.

Speaker 1 Again, like, I don't know how to, you know, I'm not, I'm not too much into that space of interpretability and like evals, but I just don't think that they are, right? It's just a vibes thing.

Speaker 1 But some people are worried that they could be or they're just like iffy, like, oh, I don't want to use a Chinese model.

Speaker 1 It's like, well, fine, but now you're going to go use a service that is backed by a Chinese model. And they're fine with that.

Speaker 1 They just don't want to directly use the model. I don't know.

Speaker 1 I think it's interesting for some enterprises who are still stuck on Llama, but it's mostly just really interesting because it continues to move the commodity bar up.

Speaker 1 Now, with this tier being open source, and sure, it probably won't be drastically better than Kimi, but Kimi is so big, it's so difficult to run, that people aren't running it.

Speaker 1 Whereas the OpenAI model is like relatively small, so you can run it without being like gigabrained at infrastructure. You end up with that commoditizing so much more of the closed source API market.

Speaker 1 And I think that's just going to be great for adoption, right?

Speaker 2 Yeah, one of my hopes is for our companies that are doing more with reasoning; they're still blocked on cost and latency.

Speaker 1 So, something I've found very interesting: we've been trying to build a lot of alternative data sources for token usage.

Speaker 1 Who's using what tokens, what models, where, et cetera, why. And it's very clear that people aren't actually using the reasoning models that much in API.

Speaker 1 Like, Anthropic has eclipsed OpenAI in API revenue, and their API revenue is primarily not thinking. It's Claude 4, but it's not in the thinking mode.

Speaker 1 You know, code being the biggest use case that's skyrocketing.

Speaker 1 And the same applies to like OpenAI and DeepMind

Speaker 1 from what we see, querying big users and other ways of scraping alternative data. Because of the latency issues, and especially because of the cost issues, right? The cost is just ridiculous.

Speaker 2 Exactly. So I guess my view is you're not allowed to have a tech podcast without saying the words Jevons paradox now.
And

Speaker 2 I think the behavior is going to be, like, we see a lot of people use reasoning, because it's so much cheaper to run if you take out a big piece of the margin layer and you make it smaller.

Speaker 2 And so I think like we have a lot of companies that are at scale who are using it, but it's so expensive that they restrain themselves.

Speaker 1 For a long time, OpenAI was charging more per token for the reasoning models, right, o1 and o3,

Speaker 1 than they were for GPT-4o, even though the architecture is basically the same. It's just the weights that are different.

Speaker 1 And there's like some reason for it to be a little bit more expensive per token because the context length is on average longer.

Speaker 1 But in general, it made no sense for it to be like, was it like 4x the cost per token? That didn't make any sense. And then finally, they like cut it.

Speaker 1 But for a long time, not only was it like way more tokens outputted, it was also a way higher price per token, even though they were just taking that as margin. Because they could, right?
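As a toy illustration of why that pricing stung (every number here is invented, not OpenAI's actual rates): suppose a non-reasoning model charges some base price per output token, and the reasoning model charges 4x that while also emitting roughly 10x the tokens per answer.

```python
# Illustrative numbers only, not real prices.
base_price = 10.0 / 1_000_000      # $ per output token, non-reasoning (assumed)
reasoning_price = 4 * base_price   # the ~4x per-token premium described above

chat_tokens = 500        # short direct answer (assumed)
reasoning_tokens = 5000  # answer plus a long reasoning trace (assumed)

print(f"chat:      ${chat_tokens * base_price:.4f}")            # $0.0050
print(f"reasoning: ${reasoning_tokens * reasoning_price:.4f}")  # $0.2000
```

The two multipliers compound, so the same question costs roughly 40x more, which is the margin Dylan is pointing at.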

Speaker 2 Because they had the only thing out there.

Speaker 1 Yeah. And then, you know, DeepSeek dropped and Anthropic and Google and others started releasing models and it like, you know, commoditized quite a bit.

Speaker 1 But this is going to just kneecap everyone, cut everyone off at the hip, right?

Speaker 1 And bring margins down again. So that.

Speaker 2 Who has an API business, you mean?

Speaker 1 Yeah, yeah. For APIs for models that aren't super leading edge.

Speaker 2 What do you think evolves in the sort of neo-cloud layer over time?

Speaker 1 It's funny. Every day we still find a new neo-cloud.
Like we have like 200 now. And still every day we find new ones, right?

Speaker 2 Like should they all exist?

Speaker 1 Obviously not, right? So, to some extent, it depends on what the neocloud business is, right? Like, today, there is quite a bit of differentiation between the neoclouds.

Speaker 1 It's not just like buy a GPU, put it in a data center.

Speaker 1 Otherwise, you wouldn't have some neoclouds with like horrible utilization rate, and you wouldn't have some neoclouds who are like completely sold out on four, five, six-year contracts, right?

Speaker 1 Like, CoreWeave, for example, who doesn't even quote most startups, or they just give them a stupid quote, because they're just like, I don't want your business, or they want a long-term contract, right?

Speaker 1 Which a lot of people don't want to sign.

Speaker 1 And so, like, there's quite a bit of differentiation in the financial performance of these neoclouds: time to deploy, reliability, the software they're putting on top. Like, many of them can't even install Slurm for you. It's like, what are you doing? You should have some sort of very low-level hardware management at least. And to some extent, on the investor side, we see a lot more debt and equity flowing in from the commercial real estate folks. As commercial real estate has been really poor over the last few years, they've been starting to pour money into the cloud space. And obviously their return profile is quite different, because a GPU is a short-lived asset versus a longer-lived asset.

Speaker 1 But at the end of the day, these companies are okay with a 10, 15% return on equity, right? And over time, that's falling. That is not okay for venture capital, right?

Speaker 1 And yet a lot of these neo clouds are backed by venture capital.

Speaker 1 So a lot of these companies will fail, either because it no longer makes sense for them to continue to get venture funding, or because they end up getting outcompeted, because they just can't get their utilization up, unlike, you know, some other clouds, right?

Speaker 1 Like

Speaker 1 like the CoreWeaves and Crusoes of the world, right? So there's sort of a rock and a hard place for 100 of these neoclouds.

Speaker 1 And there's many of them who are like, oh no, I purchased these GPUs, I have a loan, and it costs me this much. And because my utilization is here, I'm like burning cash, right?

Speaker 1 And they should at the very least not be burning cash, right? And so some of them are like, you know, they're desperate to sell the remaining GPUs.

Speaker 1 So they go out to like, you know, companies and give them insanely low deals.

Speaker 1 There's some startups who I really commend because they've like really figured out how to get the desperate neo clouds to give them GPUs.

Speaker 1 But those neo clouds are going to go bankrupt at some point because their cash flow is worse than their debt payment. But at the end of the day, like there's going to be a lot of consolidation.

Speaker 1 There is going to be differentiation, right? There's a lot of software today, but we have this thing called ClusterMAX, where we review all the neoclouds and major clouds. And it's like,

Speaker 1 actually, some of these neoclouds are better than Amazon and Google and Microsoft in terms of software, in terms of uptime and availability, or however you would measure it. Yeah, uptime, availability,

Speaker 1 reliability, network performance. There's just a variety of things; they don't have all the old baggage. But the vast majority are worse. And we measure across, you know, a bunch of different metrics, including the ones I mentioned, and security, and so on and so forth. But our vision for ClusterMAX is that it starts at a really low bar today, which is: does the cloud work, and how long does it take the user to get a workload running?

Speaker 1 Because you have Slurm installed or you have Kubernetes installed, and

Speaker 1 your network performance is good and your reliability is good and it's secure, right? Like, these are like table stakes.

Speaker 1 Like, what we consider gold or platinum tier today will be just like table stakes in like, you know, six months, a year, a couple of years. There'll be a whole layer of like software on top.

Speaker 1 And then it's like, do neoclouds build this software, right? And some of them are, right? Like, Together and Nebius

Speaker 1 are offering inference services on top, right? So they're saying, hey, we actually want to provide an API endpoint, not just rent GPUs.

Speaker 1 And CoreWeave is rumored by The Information to be attempting to buy Fireworks for the same reason, right?

Speaker 1 Like, do you move up or do you just slide down into like, I'm making commercial real estate returns? Or you have to go crazy, right?

Speaker 1 Like Crusoe is like, we're going to build gigawatt data centers, right? Like, okay, there's no competition there. There's like a few companies doing that, right? So it's very different.

Speaker 1 So you either have to go like really, really big or you need to move into the software layer. Or you just make commercial real estate or you go bankrupt, right?

Speaker 1 Like these are the paths for all neoclouds, I think.

Speaker 2 I really have to believe there's a reason for being for these companies.

Speaker 2 And my simple framework for it is, I think the software layer is really hard for people coming from the operations side to try and build. There's actually a lot of very specialized software.

Speaker 2 So I think people will buy or partner into it.

Speaker 2 But if you think about other inputs, it could be like, I'm very good at finding and controlling power agreements.

Speaker 2 It could be like, I build at a scale other people are incapable of.

Speaker 1 Yeah, yeah, which is sort of it. Or, like, NVIDIA wants me to exist. Right.

Speaker 2 I can't like think of like a lot of arguments beyond that. And so I would agree with you.
Eventually, we're going to see consolidation

Speaker 2 either in this layer or commoditization by the inference providers.

Speaker 1 But in the meantime, there is a lot of lunch to eat from Amazon, who continues to charge,

Speaker 1 and Google and Microsoft, who continue to charge absurd margins for their compute because they're just used to doing that in the CPU world. Yeah.
Right.

Speaker 1 And so, like, their ROIC is like extremely high on CPU and storage. And to assume that it can translate over to GPUs is a bit of a fallacy,

Speaker 1 which is why a lot of these companies are moving in. And it's like, okay, in standard cloud, there's a lot more software that people can't just build out of nowhere.

Speaker 1 Yes, EC2 is a product that is pretty simple, but block storage and all these other things are actually quite difficult to do well at scale, like Amazon does.

Speaker 1 And that's what makes them able to charge this absurd margin on standard compute.

Speaker 1 But now it's like, well, the cloud doesn't actually create any software that the end user actually uses, right?

Speaker 1 It's like, sure, I need servers and Kubernetes, but then I'm just using PyTorch, which is open source, and maybe I'm using a bunch of NVIDIA software, which is open source, or I'm using a bunch of open source models.

Speaker 1 I'm using, you know, vLLM and SGLang, which are open source.

Speaker 1 It's like, you just go down the list, and there's actually no software that the cloud can provide to deserve the margins that Amazon's and Google's clouds have today.

Speaker 1 If you're just an infrastructure provider.

Speaker 2 I think that there is software that the cloud can provide,

Speaker 2 but the major clouds have not delivered the software.

Speaker 1 Agree, agree. Okay, same, same thing.
Because it's really hard to do this stuff, right?

Speaker 1 Like, there is no reason that every single startup needs to have like multiple people dedicated to infra and like figuring out how to run models.

Speaker 1 And like their SLA, their reliability is just so low, right?

Speaker 1 Like so many random SaaS providers that are AI, they have GPUs, they have open source models, it works great, except sometimes it fails and then it's down for eight hours. And it's like, why?

Speaker 1 This shouldn't be a problem. It should be something you should just be able to pay to make go away.

Speaker 2 I mean, I feel like the multi-trillion dollar question that you have thought about for perhaps longer than almost anyone else is like, what does it take to actually challenge NVIDIA?

Speaker 2 You know, asking for a friend, what would it take?

Speaker 1 The simple way to put it is, it's a three-headed dragon, right? They're actually just really, really good at engineering hardware and GPUs.

Speaker 1 Like that is difficult. They're really, really good at networking.
And then, third, software.

Speaker 1 I would actually say they're like okay at software, but everyone else is just terrible. No one else is even close on software.

Speaker 1 But, you know, and I guess in that argument, you could say they're great at software, but like actually like, you know, installing NVIDIA drivers is not always easy, right?

Speaker 2 Well, there's great and there's also just like, well, there's like 20 years plus of work in the ecosystem, right?

Speaker 1 Yeah.

Speaker 2 I think today's capability and usability, and there's just a mass of libraries.

Speaker 1 Yeah. So I think NVIDIA is really hard to take down because of those three reasons.
And it's like, okay, as a hardware provider, can I do the same thing as NVIDIA and win?

Speaker 1 No, they're an execution machine and they have these three different pillars, right? Sure, they have a lot of margin, but like you have to do something different, right?

Speaker 1 In the case of the hyperscalers, right? Google with TPUs, Amazon with Trainium, Meta with MTIA, they are making a bet of: I can actually do something pretty similar to NVIDIA.

Speaker 1 If you squint your eyes now, the NVIDIA architecture with Blackwell and the TPU architecture are actually converging: similar memory hierarchies and similar sizes of systolic arrays.

Speaker 1 Like it's actually not that different anymore. It's still quite different, right? But hand wave view, it's like pretty similar.
And Trainium and TPUs are very similar.

Speaker 1 Architecturally, the hyperscalers are not doing anything crazy. But that's okay, because they can just play the margin game. That's fine. But for a chip company to try and compete, they must do something very unique. Now, if you do something unique, it's like, okay, all your energy is focused on that one unique thing, but on every other vector you're going to be worse. Are you going to be there at the latest process node as fast as NVIDIA? No. Okay, that's like 20, 30% on cost slash performance and power, right? Are you going to be on the latest memory technology as fast as NVIDIA? No, you'll be like a year behind. Great, same penalty. Are you going to be the same on networking?

Speaker 1 No. Okay.
You know, you just stack all these penalties up. It's like, oh, wait, your unique thing can't just be like 2 to 4x faster.
It has to be like way faster.
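A back-of-the-envelope way to see that penalty stacking (all the numbers are hand-wavy, taken from the rough percentages Dylan cites, not measured figures):

```python
# Illustrative only: stacked deficits eroding a paper advantage.
paper_advantage = 4.0   # your unique architecture's claimed speedup (assumed)
process_node = 0.75     # trailing the latest process node (~25% penalty, assumed)
memory_tech = 0.75      # a year behind on memory technology (assumed)
networking = 0.75       # behind on networking as well (assumed)

effective = paper_advantage * process_node * memory_tech * networking
print(f"{effective:.2f}x")  # ~1.69x left of the original 4x
```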

Speaker 1 But then the problem is, if you really look at it simplistically, right? Like a flop is a flop, right?

Speaker 1 Again, this is super simple, but there is not a 10x you can get out of a standard von Neumann architecture on compute efficiency.

Speaker 1 In which case, all of these things that NVIDIA will engineer better than you, because they have a team of 50 people working on just memory controllers and HBM, and actually thousands of people working on networking: each of these things cuts you, death by a thousand cuts. And it's like, oh, actually, what would have been 5x faster is now only 2x faster. Plus, if I misstep, I'm six months behind, and now the new chip is there, right? And you're screwed. Or supply chain, or intrinsic challenges with getting other people to deploy it, or rack deployments. There are all these supply chain challenges, right? Like, literally in Amazon's most recent earnings, they said their chip architecture is not aggressive.

Speaker 1 Their rack architecture is very simple. It's not that aggressive.

Speaker 1 They're like, yeah, we have rack integration yield issues. They blamed their AWS miss on Trainium not coming online fast enough because of rack integration issues.

Speaker 1 And when you look at the architecture, like we have an article on it, it's like, it's not like that crazy. Like, it's like what Google was doing like four or five years ago, right?

Speaker 1 It's like, oh, wait, supply chain is hard. And Amazon couldn't get everything in supply chain to work.
And so therefore they missed their AWS revenue by a few percent, right?

Speaker 1 Which caused the whole stock market to freak out. But it's like, there's so many things that could go wrong in hardware and the time scales are so long.

Speaker 1 And then the last thing is that like model architecture is not stagnant. If it was, NVIDIA would optimize for it.
But model architecture and hardware, right?

Speaker 1 Hardware-software co-design is the thing that matters, right? And these two things, you can't just look at one individually, right?

Speaker 1 Like there's a reason why Microsoft's hardware programs suck, right? Because they don't understand models at all, right?

Speaker 1 Meta, their chips actually work for recommendation systems, and they're deployed for recommendation systems, because they can do hardware-software co-design.

Speaker 1 Google is awesome because they do hardware-software co-design.

Speaker 1 Why is AMD not catching up despite being awesome at hardware engineering? Well, yeah, they're bad at networking, but also they suck at software, and they can't do hardware-software co-design.

Speaker 1 You know, there's like much deeper reasons why you can get into this, but you have to understand the hardware and the software, and they move in lockstep. And whatever your optimization is,

Speaker 1 doesn't end up working, right? So, one example is all of the first-wave AI hardware companies, right? Cerebras, Groq, SambaNova,

Speaker 1 Graphcore. All of them made a very similar bet. Now, they were very different, right?

Speaker 2 Some of these are architecturally pretty weird and relatively novel.

Speaker 1 Right. They're architecturally pretty weird, but they made the same bet on memory versus compute, right? We're going to have more on-chip memory and lower off-chip bandwidth, right?

Speaker 1 Because that was the trade-off they decided to make. So all of them had way more on-chip memory than NVIDIA, right? NVIDIA,

Speaker 1 their on-chip memory has not really grown much from A100 to H100 to Blackwell, right? It's up 30% in like three generations. Whereas these guys had like 10x the on-chip memory, right?

Speaker 1 All the way back in like when they were competing with A100 or even the generation before.

Speaker 1 But that ended up being a problem because they were like, oh, yeah, we could just run the model on the chip, right? We can put the whole weight, all the weights on there.

Speaker 1 And then, you know, we'll be so much more efficient. And then the model just got way too big, right? Yeah.
And Cerebras was like, oh, wait, but our chip is huge. Yeah.

Speaker 1 Oh, wait, but still the model's way too big to fit on it. This is like very simple, right? You know, the same thing's happening in the other direction, right?

Speaker 1 Like some companies are like, oh, we're going to make our systolic array, our compute unit, super, super large, because, let's say, Llama 70B has an 8K hidden dimension, and your batch and all that.

Speaker 1 Like, it's a pretty large matmul. Oh, great. Okay, we'll make this chip. But then all of a sudden, all the models get super, super sparse MoEs, right?

Speaker 1 Like, the hidden dimension of DeepSeek's models is really tiny, because they have a lot of experts, right? Instead of one large matmul, it's a bunch of small ones that you route between, right?

Speaker 1 Like, and all of a sudden, like, if I made a really, really large hardware unit, but I have all these small experts, how am I going to run it efficiently?

Speaker 1 You know, no one really predicted that the models would go that way, but then it ended up going that way.
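A minimal PyTorch-style sketch of the mismatch: the dense case is one big matmul that a large fixed compute unit digests happily, while the MoE case routes each token to a few small expert matmuls. All dimensions and the top-k value below are illustrative, not DeepSeek's actual configuration.

```python
import torch

hidden = 8192
x = torch.randn(1, hidden)  # one token's activations

# Dense FFN: a single large matmul, ideal for a huge systolic array.
dense_w = torch.randn(hidden, 4 * hidden)
dense_out = x @ dense_w  # one 8192 x 32768 matmul

# Sparse MoE: many small experts, only a few active per token.
num_experts, top_k, expert_dim = 64, 2, 512  # illustrative sizes
experts = [torch.randn(hidden, expert_dim) for _ in range(num_experts)]
router_w = torch.randn(hidden, num_experts)

chosen = (x @ router_w).topk(top_k, dim=-1).indices[0].tolist()
# Each selected expert is a small matmul; an oversized monolithic
# compute unit sits mostly idle on these, which is the mismatch.
moe_out = sum(x @ experts[i] for i in chosen)
```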

Speaker 1 And this is actually the case with at least two of the AI hardware companies today. I don't want to name them, just because, you know, let's be friendly.

Speaker 1 But this is clearly what's happening, right?

Speaker 1 So you can make a hardware bet that will actually be way better on today's architectures, but then the architecture evolves. And the generality of NVIDIA's GPUs, or even TPUs and Trainium, wins out as an architecture, but then your chip doesn't beat NVIDIA by that much, right?

Speaker 1 In which case, they're just going to destroy you with their six months or a year ahead on every technology because they have more people working on it and their supply chain is better, right? So

Speaker 1 it's kind of really tough to make the architecture bet, have the models not just go in a different direction that no one predicted because no one knows where models are headed, right?

Speaker 1 Even, like, you know, you could get Greg Brockman and he might have a good idea, but I'm sure even he doesn't know what models will look like in two years.

Speaker 1 So there's got to be a level of generality and it's hard to like hit that intersection properly. And so, I'm very hopeful people compete with NVIDIA.

Speaker 1 I think it'll be a lot more fun. There'd be a lot less margin eaten up by the infra.
There'd just be a lot more deployment of AI potentially if someone was able to compete with NVIDIA effectively.

Speaker 1 But NVIDIA charges a lot of money because they're the best. And, like, if there was something better, people would use it, but there isn't.
And it's just really hard to be better than them.

Speaker 2 I mean, you have to give the first-gen AI hardware companies some credit, because they made a secular, correct decision about the workload.

Speaker 2 But then the architectural decisions like ended up being hard to predict correctly, right? Then you have the cycle of NVIDIA innovation, which is really hard to compete with.

Speaker 2 Both hardware and also, as you said, supply chain issues.

Speaker 1 Even just putting together servers is hard.

Speaker 2 Yes. I think the thing that you point out that people oversimplified, maybe with the current generation of AI chip startups, is they're like, we're betting on transformers.
They're like, we're betting on transformers.

Speaker 2 And it's a lot more complicated than that in terms of the workload at scale and the continued evolution in model architecture.

Speaker 2 And it's also not exposed, so if you're not working with the SOTA labs from the beginning, and nobody can make a lot of predictions right now, it's very hard to say, I'm going to be better at the workload two years from now, in a very confident way,

Speaker 1 with no other changes happening.

Speaker 2 Like I can't make that bet right now.

Speaker 1 Yeah. And it's like, one of the interesting things about OpenAI's open source model is it's all their training pipelines, but on a quite boring architecture, right?

Speaker 1 Like, it's not their crazy, cool architecture advantages that they have in their closed source models, which make it better for long context, or more efficient KV cache, or all these other things, right?

Speaker 1 They're doing it on a standard model architecture that's publicly available.

Speaker 1 They like intentionally made the decision to open source a model with a boring architecture that's pretty much open source, right?

Speaker 1 People have already done all these things, and they kept all the secrets internal that they wanted to keep. And it's like, what's in there, right?

Speaker 1 Are they even doing standard scaled dot product attention? Probably, but like, there's probably a lot of weird things they're doing, which don't map directly to hardware.
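For reference, "standard scaled dot-product attention" is just the textbook formula below; this is a generic sketch, not OpenAI's implementation, and the deviations Dylan alludes to are exactly what a specialized chip can fail to anticipate.

```python
import math
import torch

def sdpa(q, k, v):
    # softmax(Q K^T / sqrt(d_head)) V: the baseline every variant tweaks.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    return torch.softmax(scores, dim=-1) @ v

# batch=1, heads=8, seq=128, head_dim=64 (illustrative shapes)
q, k, v = (torch.randn(1, 8, 128, 64) for _ in range(3))
out = sdpa(q, k, v)  # shape (1, 8, 128, 64)
```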

Speaker 1 And like you mentioned, right, a transformer chip architecture, there's a lot more complicated here than just, oh, it's optimized for transformers, because so is an NVIDIA chip and a TPU, and their next generations are more optimized for it.

Speaker 1 Like, they take steps towards it. They don't leap, but as long as they're like close enough to where you are architecturally optimized for workload, they'll beat you because of all the other reasons.

Speaker 2 And I think your description of how a chip startup, or any vendor, might win by specializing, that actually is really hard in this era.

Speaker 2 Like generalization may continue to win to a degree.

Speaker 1 And it happened with all the edge hardware companies too. You know, we talk about the first-gen AI hardware companies for data center.
There were a handful, but for the edge, there were like 40, 50.

Speaker 1 And like none of them are winning, because it turns out the edge is just: take a Qualcomm chip or an Intel chip that's made for PC or smartphone, and deploy it on the edge, right?

Speaker 1 Like that ended up being way more meaningful. So it ends up being that the incumbents can take steps towards what you're going for.

Speaker 1 And if you didn't execute perfectly, or if the models changed their architecture away from what you thought it would be, you end up failing.

Speaker 2 If you had to make a bet that something becomes competitive, what is the configuration or company type that does that?

Speaker 1 I don't want to shill any company that I've invested in or anything like that. And so, therefore. Not investment advice. No, no, no.

Speaker 1 Like, I would just say, I probably think that AMD GPUs or Amazon's Trainium will be more likely to be the best second choice for people, or Google TPU, of course, but I think Google's just more interested in it for internal workloads.

Speaker 1 I just think that those will be much more likely options to succeed than a chip hardware startup. Yeah.
But I mean, I really hope they do because there's some really cool stuff they're doing.

Speaker 2 If we zoom out to

Speaker 2 the macro and we think about just the scale of hardware and data center deployment for these workloads, people talk a lot about the operational constraint on building data centers of this size, the power constraints.

Speaker 2 I think in particular on the power side, it's very interesting how that practically shows up. Is it generation at scale, at cost? Is it grid issues?

Speaker 2 Like, how should more people in technology understand this?

Speaker 1 Yeah, so supply chain is always like fun because like people want to point at one thing as the issue, but it always ends up being these things are so complicated.

Speaker 1 Like if one thing was solved, you could increase production another 20%, and then something else would be the issue.

Speaker 2 You think it's a multi-bottleneck?

Speaker 1 Yeah, or like, hey, for company A, it's actually because their supply chain is this, this is the issue. And for company B, it's this is the issue.
But, you know, that's sort of in generalities.

Speaker 1 But, like, zooming out, right, Noah Smith, Noahpinion, he had a really fun blog about, like, is this AI hardware buildout going to cause a recession?

Speaker 1 I think it's actually funny because you can flip the statement and be like, actually, the U.S. economy would not be growing that much this year if it weren't for all the AI buildouts.

Speaker 1 And as a result, data center infrastructure. As a result, electricians' wages have soared.

Speaker 1 As a result, power deployments and other capital investments, which have 15, 30 year lifespans are being made. And all of this CapEx is in turn actually growing the economy.

Speaker 1 And like, actually, maybe the economy wouldn't even be growing much or at all if it weren't for all of these investments.

Speaker 2 One thing that is perhaps overlooked from the White House AI action plan was the view of, like, we're going to build these AI data centers in the United States.

Speaker 2 We're actually going to need a lot of general investment beyond the GPUs and the power, which are everybody's first two items, into labor, for example, right?

Speaker 2 So if you just, you know, for simplicity's sake, be like, it's the size of Manhattan and we have to run it. And it's a new system with changing topology and like.

Speaker 2 a very high degree of relatively novel hardware with failures. Yeah.

Speaker 2 And like lots of networking. Then it kind of feels like we need to have a bunch of new capacity, like from a labor or robotics standpoint.

Speaker 1 In like '23, it was very simple. It's like, NVIDIA can't make enough chips. Okay, why can't NVIDIA make enough chips? Oh, CoWoS, right? Chip-on-wafer-on-substrate packaging technology. And then it was like, oh, HBM, right? It was very simple in '23, '24: all these tools involved in that supply chain. It was great. But then it very quickly became much more murky, right? Then it was like, oh, data centers are the issue. Oh, okay, we'll just build a lot of data centers. Oh, wait, substation equipment and transformers are the issue. Oh, wait, power generation is the issue. It's not like the other issues went away, right?

Speaker 1 Like actually, you know, CoWoS

Speaker 1 is still a bottleneck and HBM is still a bottleneck. Optical transceivers are still a bottleneck, but so is power generation and data center physical real estate, right?

Speaker 1 Like I mentioned, Meta is literally building these temporary tent structures to put GPUs in, because constructing the building takes too long and it takes too much labor, right?

Speaker 1 As you mentioned, labor, right? That's like one way they were able to remove a part of a constraint.

Speaker 1 They're still constrained on power, and they had to delay the bring-up of some GPUs in Ohio because AEP, the grid operator in Ohio, had some issues, right? The utility, right?

Speaker 1 With like bringing on a generator or something, right? Oh, okay, great. Well, we'll buy our own generators and put them on site.
Oh, wait.

Speaker 1 Now there's an eight-year backlog, or whatever, a four-year backlog, for GE's turbines. Yeah.
Oh, okay, great.

Speaker 1 I'm Elon. I'm going to buy a power plant from overseas that's already existing.
I'm going to move it in. Okay, great.
Now there's like permits and people protesting against me in Memphis.

Speaker 1 Like, you know, there's a bajillion things that could go wrong. And labor is a huge one.

Speaker 1 I've literally had people in pitches be like, no, no, no, we've already booked all the contractors.

Speaker 1 So no one else is going to be able to build a data center in this entire area of this magnitude besides us. Because we took all the people.
We took all the people.

Speaker 1 They're going to have to fly them in, but it's like, okay, fine. Like.

Speaker 1 You can fly them in, but there's just not that many electricians in America. And as a result, we've seen the wages rise a lot for people building data center infra. There's a group of these Russian guys who used to work for Yandex, Russia's search engine, who wire up data centers, who now live in America, and they get paid a ton.

Speaker 1 Like, and they get paid bonuses for being faster, and therefore they do like certain drugs to be able to finish the build outs faster because they get bonuses based on how fast they build it, right?

Speaker 1 Like it's like, there is crazy stuff going on to alleviate bottlenecks, but it's like there's bottlenecks everywhere.

Speaker 1 And it really just takes a really, really hyper-competent organization tackling each of these things and creatively thinking about each of these things.

Speaker 1 Because if you do it the lame, old way, you're going to lose, and you're going to be too slow, right?

Speaker 1 Which is why OpenAI and Microsoft, partially, like, Microsoft is not building Stargate for OpenAI, right, is because it would have just been too slow; they're doing it the lame and old way. You have to go crazy. That's why Microsoft rents from CoreWeave a ton, right? Because, oh wait, we need someone who can do things faster than us, and, oh look, CoreWeave is doing it faster. And now, you know, OpenAI is going to Oracle and CoreWeave and others, right? Nscale in Finland, and all these other companies all around the world. The Middle East, right? G42. Anywhere and everywhere they can get compute, because you put your eggs in many baskets, and whoever executes the best will win. And this infrastructure is very, very hard.

Speaker 1 Software is like fast turnaround times. Like, you know, it's still hard.
Software is not easy, but it's like the cycle time is very fast for like, try something, fail, right? Try something else.

Speaker 1 It is not for infra, right? Like, what has xAI actually done to deserve their prior funding rounds? They haven't released a leading edge model, right?

Speaker 1 And yet their valuation is higher than Anthropic's today, right? At least, you know, Anthropic's raising, but whatever, right?

Speaker 1 Like, it's, A, Elon, and, B, they've tackled a problem creatively and done it way faster than anyone else, which is building Colossus, right?

Speaker 1 And that's commendable, because that is part of the equation of being the best at models, right? Besides the talent. And Elon is known for being able to get talent. So there's so much that's complicated on the infra side that it'd be nice to say there's one thing. The White House action plan lists a lot of things, but I want to know, how do we concretely solve the talent issue? There's not enough people in trade school. The pay will go up and that'll help, but the time scales of that are too slow. Do we somehow import labor, right? That's how the Middle East is building all their data centers.

Speaker 1 They're just importing labor. Or is there something more intelligent we can do? Robotics, right?

Speaker 1 I think I just realized today, you told me just now, like a company I seed or angel invested in, you led the round, right? Like, it's really cool for data center

Speaker 1 automation, right? Like, there's all sorts of interesting problems on the infra layer that could be tackled, and tackled creatively.

Speaker 2 Speaking of like the policy and geopolitics implication here, like what do you think about the

Speaker 2 White House implication that America needs to like export the AI stack or needs to control important components of it. It's better for us to be exporting NVIDIA chips than to foster a new industry.

Speaker 2 It's better for us to have a globally leading open source model, et cetera. What actually makes sense to you there?

Speaker 1 I want to tell a crazy story. I was in Lebanon for a week.

Speaker 2 This is a good start.

Speaker 1 This is completely unrelated, but it just popped in my head. I think it'll be entertaining.
I was in Lebanon. I was with a few of my friends.

Speaker 1 So it was like two Indian people, two Chinese people, and then a Lebanese person, right?

Speaker 1 And these like 12-year-old girls ran up to the Chinese woman that was with us, like my friend, and they were like, oh my God, your skin's so beautiful. Do you like sushi? Right.

Speaker 1 And it's like, fine, you're just ignorant. But what was really interesting is like when they asked where we're from, we're like San Francisco, they're like, do people get shot in the streets?

Speaker 1 Because their entire worldview of politics was built from TikTok.

Speaker 1 And it's like, when you think about the global propaganda machine that is Hollywood, and it's not intentional, it's just American media is pervasive. It built such a positive image of America.

Speaker 1 Now, like with monoculture broken and it's more social media-based, a lot of the world thinks America is like people are getting shot all the time.

Speaker 1 It's like really bad and it's like bad lives and people are working all the time. It's unsafe.
And like, you know, like Europe has a certain view of America and like, I don't think it's accurate.

Speaker 1 And, like, this random Lebanese 12-year-old had a really negative view of some things. Like, they liked America.

Speaker 1 They loved Target for some reason because some influencers posted TikToks about Target, but like they had negative views of America.

Speaker 1 And it's like, from a sense of like, what is important is like the world should still run on American technology, right?

Speaker 1 And they generally do still in terms of the web, although, you know, ByteDance, TikTok has broken that to a large degree.

Speaker 1 But in this next age, do you want them to run on Chinese models, which now have Chinese values, which then spread Chinese values to the world?

Speaker 1 Or do you want them to have American models that have American values? Like you talked to Claude, and it has a worldview, right?

Speaker 1 And it's like, I don't know if you want to call that propaganda or what. There's a worldview that you're pushing.
Right. And so I think it makes sense that we need that worldview espoused.

Speaker 1 Now, how do you do that, right?

Speaker 1 The prior administration and the current administration had different viewpoints on this, right? The prior administration said, yes, we would love for the whole world to use our chips, but it has to be run by American companies. And so it was like, Microsoft, Oracle, we're cool with you building loads of capacity in Malaysia; we don't want random other companies doing it in Malaysia. So the prior diffusion rule had a lot of technical ways in which, you know, you could have these licenses and all this, and it was very hard for random small companies to build large GPU clusters, right?

Speaker 1 But it was very easy for Microsoft and Oracle to do it in Malaysia. Of course, the current administration tore that up and they have their own view on things.

Speaker 1 I mean, I think there was a lot of things wrong with the diffusion rules, right? They were just too complicated. They pissed a lot of people off, et cetera.

Speaker 1 Now they have a different view, which is like, what did they do in the Middle East, right? With the deal they signed?

Speaker 1 Well, actually, most of those GPUs are being operated by American companies or rented to American companies, right? Either or, right?

Speaker 1 Like G42 operating them, but renting them mostly to like OpenAI and such for a large part, or Amazon and Oracle and others are operating the GPUs themselves in the Middle East.

Speaker 1 So it's like, okay, that's effectively the same thing, but in a very different way. That is still, I think, a view, right?

Speaker 1 Which is like, we want America to be as high in the value stack as possible, right? If we can sell tokens or if we can sell services, we should.

Speaker 1 Okay, but if we can't sell the service, let's at least sell them tokens. Okay, if we can't sell them tokens, at least sell them like infra, right?

Speaker 1 Whether it be data centers or renting GPUs or just the GPUs physically.

Speaker 1 And it sort of makes sense, right, in the value chain: give them the highest value, highest margin thing, where we capture most of the value, and squeeze it down until you get to the actual bottom of the stack, right?

Speaker 1 Like the tools to make chips, maybe you shouldn't sell. And so like current export controls and policy dictate that.
Yes, you know, it's better to sell them services, but sell them both, right?

Speaker 1 Like, give the option, let us compete, and don't let anyone else win. I think the challenge here is, how much are you enabling China by selling them our GPUs?

Speaker 1 Like how much fear-mongering around like Huawei's production capacity is there? Like how realistic is it versus not?

Speaker 1 Because of the bottlenecks of like Korea sanctions that America's made Korea put on China for memory, or Taiwan on China for chips, or U.S. equipment on China, right?

Speaker 1 Like, there's a lot of different sanctions.

Speaker 1 Many of these are not well-enforced or have holes, but it's a very difficult argument on how much GPU capacity should be sold to China.

Speaker 1 A lot of people in San Francisco, frankly, say don't sell China any GPUs. But then they cut off rare earth minerals.

Speaker 1 And, you know, ostensibly most people think that the deal was that you get GPUs and also EDA software, because the administration banned EDA software for a little bit, just for a few weeks basically, until China was like, okay, we'll ship rare earth minerals. You can't just ban everything, because China can retaliate. If they banned rare earth minerals and magnets and such, car factories in America would have shut down, and the entire supply chain there would have had hundreds of thousands of people not working, right?

Speaker 1 Like, you know, there is a push and pull.

Speaker 1 There is a push and pull here. So like, do I think China should just have the best NVIDIA GPUs? No, like that, that would suck.
But like, you know, can you give them no GPUs?

Speaker 1 No, they're going to retaliate. Like there is a middle ground.
And like Huawei is eventually going to have a lot of production capacity, but there's ways to slow them down, right?

Speaker 1 Like, properly ban the equipment, because it's not banned properly; there's a lot of loopholes there. Properly ban the subcomponents, like memory and wafers, because Huawei is still getting

Speaker 1 wafers in Taiwan from TSMC through like shell companies, right?

Speaker 1 Like, you know, there's a lot of enforcement challenges, because parts of the government are not funded properly or not competent enough, and have never been, right?

Speaker 1 So it's like, how do you work within this framework?

Speaker 1 Well, like, okay, fine, we should sell them some GPUs, so that, you know, that kind of slows them down from a Huawei standpoint, although not really, right?

Speaker 1 But it also gets us back the rare earth minerals. But don't sell them too many, right? And how do you find that massive gray line? That's what the administration is grappling with, in my view.

Speaker 2 Implied in that opinion is your belief that they are going to be able to build NVIDIA-equivalent GPUs eventually, if forced.

Speaker 1 Maybe not equivalent.

Speaker 2 Sorry, price performance competitive.

Speaker 1 There's like interesting things here, right? Like if China has a chip that consumes 3x the power.

Speaker 2 But they have 4x the power, then yeah, like who cares, right?

Speaker 1 Like, you know, obviously there's a lot of supply chain challenges with building that. And it's like, hey, maybe it's on N minus two technology.

Speaker 1 It's on five-year-old technology or four-year-old technology. Great.

Speaker 1 And it only consumes 3x the power because they were able to do a lot of software optimization, architecture optimization, et cetera. They end up with something that maybe costs a little bit more.

Speaker 1 But like, when you think about the value of a GPU today, right? Like, you know, the GPUs dominate the cost of everything.

Speaker 1 But over time, services will be built out, which are high margin, right? Like, and you can go look at Anthropic or OpenAI fundraising docs and see that their API margins are good.

Speaker 1 API margins are nothing compared to what service margins will be for people who use these APIs to build services.

Speaker 1 And that's nothing compared to the net good to the economy from how much automation can happen and how much increased economic activity there is.

Speaker 1 So this is the argument of, like, okay, even if their chips cost 3x as much, can you subsidize that rationally?

Speaker 1 They can subsidize that rationally because the end goal is like, oh, wait, actually, we can deploy a lot of Chinese AI and make money and gather data because people are sending us their like prompts and all their databases and all this stuff to our models controlled by our companies, et cetera, right?

Speaker 1 Like, plus, we're just making money off of it. And they've done this in other industries, right? They rationally subsidized like solar and now no one can even compete on solar or EV.

Speaker 1 And it's like very close to no one could compete on EVs, even, right? Besides like Tesla, really. And even Tesla is adopting a lot of like Chinese supply chain, right?

Speaker 1 It is rational to say you want America to have more AI prowess around the world, you know, so that a random child in Lebanon doesn't think America is bad, and so they're using American products more than Chinese products.

Speaker 1 But like, how you get there is very difficult. And it's a hard thread to weave.
Thread, you got it. I don't croquet, you know.
Oh my God.

Speaker 1 Crochet. Crochet.
You clearly don't.

Speaker 1 Croquet is the amazing one. Croquet's the game.

Speaker 2 I want to ask you a wild card question to finish out. We're trying to get Mark to do the podcast. Zuck, yes. You can ask him any question. What would you ask Mark?

Speaker 1 You've got to do the podcast. I thought, like, did you read the doc, the page they put up? I thought that was very interesting, that they were like, we want AI to be your companion. So my question to him is not around his infra stuff, because I feel like I know most everything there; you can figure that stuff out from supply chain and satellites and all this stuff. But the interesting thing I'm curious about is, philosophically, what exactly does the world look like if everyone is talking to AIs more than other people?

Speaker 1 Or if they're interacting socially with the AIs more than other people? Do we lose our human element? Do we lose our human connection?

Speaker 1 It's not the same thing as, hey, I'm posting on social media and we're interacting with our social media posts, which that already breaks the brain of a lot of people. What happens when it's like...

Speaker 1 always on your face. Like, Meta, you know, his worldview is that Meta Reality Labs makes these devices that you wear, and they have all this AI on them, and you're talking to the AI companion all the time.

Speaker 1 How does that change the human psyche? This human-machine evolution, what are the negative ramifications of it? What are the positive ramifications?

Speaker 1 How do we, how are you going to make sure that there's more positive ramifications from this than like, you know, the slopification and like complete brain rot of like our youth, right?

Speaker 1 Which I like love my brain rot, right? Like it's like, okay.

Speaker 2 Obviously, the coding wars continue to be very central. And we were talking about Cognition's relevance and how to think about the strategy here.

Speaker 2 But I do think it's really funny what flipped your bit on Cognition. Can you tell the story?

Speaker 1 I thought Cognition was NGMI, right? Like, you know, like OpenAI,

Speaker 1 Anthropic, XAI, et cetera, they're just going to make better code models. Like, you know, they just have way more resources.
General models will win. You know, I hadn't really met too many people.

Speaker 1 There was just like a pure vibes-based thing. And I had, you know, I'd used a little bit of Devin, but it was like, whatever, right?

Speaker 1 Like, I was like, Claude Code seems better and we use that internally. But, like, I went to Coatue's East Meets West event.
It's an awesome event where there's people from Asia.

Speaker 1 Like, there was like, you know, all these like CFOs and CEOs of like major Chinese companies, East Coast of U.S., all these finance bros. Also, West Coast, like a lot of tech people, right?

Speaker 1 So you and I were both there. There were people from governments and major companies.
And Scott was there.

Speaker 1 I spoke with him very briefly. But then what was interesting is, you know, they have a poker night one night and everyone gets blasted. The leader of Coatue is

Speaker 1 very good at poker. These hedge fund guys are just good at poker generally.

Speaker 1 I like poker as well. There's a big poker culture in the bay.
I was playing. I'm okay.
Right.

Speaker 1 But I see, I see, I look over at the super high stakes table. Scott's just dominating everyone.
Right. I'm like, what is going on? Like, how are you?

Speaker 1 Like, you're like taking chips from like CEO of major Chinese company.

Speaker 1 I don't want to name people's names because I think there's like some terms around them like naming who's there, but like, you know, it's like, you're, you're like winning like a lot of chips from a lot of big people.

Speaker 1 And it's like, all of a sudden, my vibes were like, I don't know, maybe like, maybe he can win.

Speaker 1 Maybe he can take from the lion, you know. So I was very excited about that. You know, I thought it was funny. I still have, I have not done much due diligence on their code product, nor have I on, like, Claude Code, besides the fact that we use it. But it's like, you know, cool.

Speaker 2 Well, I think Windsurf acquisition part two is a pretty good hand to play here. And, you know, as somebody who invests a lot at a violently competitive application level...

Speaker 1 Yeah, the poker game is live, man, everybody there. You just imagine live players.

Speaker 1 Exactly. And so I just loved that, you know, that was how he dominated everyone.
It's such a stupid reason, because I pride myself on being analytical and data-driven.

Speaker 1 And it's like, you know, vibes.

Speaker 2 Correct. For any entrepreneurs listening, I think, you know, Dylan might angel invest, or we might back you fully, if you win the Cognition poker game,

Speaker 2 and we'll host it at Conviction. Okay, we got it. Good. Awesome. Yeah, thank you.

Speaker 2 Find us on Twitter at @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces.
Follow the show on Apple Podcasts, Spotify, or wherever you listen.

Speaker 2 That way, you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.