The Shifting Value of Content in the AI Age with Cloudflare CEO Matthew Prince
Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @eastdakota | @Cloudflare
Chapters:
00:00 – Matthew Prince Introduction
00:37 – Cloudflare’s Role in Securing the Internet
02:08 – The Road to Cloudflare’s Dominance
03:20 – The Internet’s Shift from Search to AI
06:34 – Role of Agents and Content on the New Web
09:44 – Reshaping the Content Market Online
13:05 – De-emphasizing Traffic as a Proxy for Value
18:04 – Will We Run Out of Quality Human-Generated Content?
20:01 – Scaling the Value of Content in the AI Age
22:32 – Cloudflare’s Approach to Inference
24:55 – How Cloudflare Responds to Market Demand
26:04 – Open vs. Closed Models
27:21 – Path to the New Marketplace for Content
30:58 – Advice for Content Creators
32:47 – Exploring the Timeline for Running Models Locally
40:07 – The Future of Agentic Infrastructure
44:52 – Conclusion
Press play and read along
Transcript
Speaker 2 Matthew, thanks so much for being here.
Speaker 1 Thanks for having me.
Speaker 2 So I want to get right into the juicy topics, but make sure our listeners understand the scale and current role of Cloudflare first.
Speaker 2 So correct me if I'm wrong in any of this, $66 billion market cap company today, about $1.8 billion in trailing revenue, and then the biggest CDN by far with a bunch of different products now in security in particular.
Speaker 2 Like what else should our audience understand about the role Cloudflare plays?
Speaker 1
Not to nitpick on one thing, but we've never really thought of ourselves as a CDN. We started out very much as a security company.
The whole thesis was could you put a firewall in the cloud?
Speaker 1
We saw that servers were going to the cloud. We saw that software was going to the cloud.
It seemed inevitable to us that the networking equipment would go to the cloud.
Speaker 1 And the big objection that everyone had was you were going to slow things down. And so we worked very hard to figure out how we could not slow anything down.
Speaker 1 And the goal was just to get back to parity. Turned out we were a little too good at our jobs, and everything got a lot faster.
Speaker 1 And so, yes, we've ended up competing in the CDN space, but really, what Cloudflare is, is what the network should have been, what the internet should have been, had we known in the 60s, 70s, 80s how important it was going to be.
Speaker 1 So, how can we make it faster, more reliable, more secure, more efficient, more private? And that fundamentally is what we're working on every day at Cloudflare.
Speaker 2 How long has it been since you guys started the company?
Speaker 1 We launched in September of 2010, so we'll be coming up on our 15th year in September of this year. Amazing.
Speaker 2 And I don't think there's a way to ask this question without like somewhat trivializing the journey, but like, how did you become so dominant?
Speaker 1 I don't know. I mean, I think we just focused on
Speaker 1 how did we do the right thing for our customers? How do we solve the problems that were there?
Speaker 1 And, you know, at some level, the story of Cloudflare is that we have been customer zero along the entire journey. So, everything started from: could we take a firewall and put it in the cloud? How would we get the data to populate that firewall? We had to have a free service.
Speaker 1 Once we had a free service, all of a sudden we had to be able to figure out how to scale enormously across millions of customers in an efficient way. That meant that we had a whole bunch of weird stuff that was using us.
We got attacked from every direction.
Speaker 1 We had to build a public policy team in order to deal with those issues. We had to build our own security.
Speaker 1 Someone almost hacked or stole our domain at some point as a way of hacking into us. So the next thing you know, we built our own registrar.
Speaker 1 And so to some extent, Cloudflare has been about, you know, start with a relatively simple idea, make it as broadly available as possible, and then solve all the problems that become sort of inherent once you've done that.
Speaker 2 Now you're in the position that I believe you've described as like the internet's traffic cop.
Speaker 2 I think a lot of people feel, with the sucking sound of attention toward AI assistants, that the shape of the internet is changing.
Speaker 2 Before we get to your point of view on what to do, what is your prediction for what's happening?
Speaker 1 So no matter what, the dominant kind of value creation model of the last 30 years of the web has been search. Search drove everything.
It drove all of what you did online. Entire industries grew up around that.
Speaker 1 And the real three ways that you could derive value on the web in the past were: you either sold a thing, whether that was a subscription or a product or something else; you sold ads against some content; or, and I didn't say the business model, I said the value creation model, because this third part is really important, a lot of people just created content for the ego of knowing that other people were reading it.
Speaker 1 The old adage is there are only two reasons why people create content: to get rich or to get famous. A lot of people are just doing it to get famous.
Speaker 1 And that's a lot of what drove, I mean, that's what drives Wikipedia. That's what drives a lot of content creation that is on the web. I think that the web is shifting now to a new interface. It's shifting away from search and it's shifting to AI. And we can see that through the trends in terms of Google usage. We can see that in terms of our own usage, where more and more people are turning to these AI systems where they used to turn to Google. Even Google itself is sort of morphing into an AI company, kind of in their interface, before our eyes.
Speaker 1 And as we do that, the natural thing that's going to happen is we're going to consume derivatives rather than consuming the original content itself.
Speaker 1 A study just came out from Pew Research that says that if Google puts an AI overview at the top of search, it is much less likely that people click on the links.
Speaker 1 And that seems sort of like a duh, but the data that we have also substantiates that and shows that compared with 10 years ago, it's become 10 times harder for the same piece of content to get a click from Google than it was before.
Speaker 1 And that's because of the answer box, that's because of AI overviews, that's because of where the search interface has gone. And that's the good news for content creators.
Speaker 1 In the case of someone like OpenAI,
Speaker 1 it's 750 times harder than it was with the Google of old. With the case of Anthropic, it's 30,000 times times harder than the content of old.
Speaker 1 And so what I worry about is that if the value creation model of the web has been all about how do I get traffic, the new interface of the web isn't going to send you traffic.
Speaker 1 And if that's the case, if content creators can't get value from selling a thing or a subscription, selling ads, or just the ego that they get knowing someone is reading their stuff, I worry that people aren't going to create content.
Speaker 1 And that's going to really not only starve the web, but it's actually going to starve even the AI companies that are using that content as effectively the fuel for their engines.
Speaker 3 How do you think that evolves?
Speaker 3 Because if you look forward, the other thing that people are talking about a lot right now is agents and the fact that you're not only going to get information through an AI, it'll actually go into actions on your behalf.
Speaker 3 So the time you actually spend on the web is going to go down effectively, or at least you're going to be dealing with one interface, which is this agent that goes off and does things in the background for you.
Speaker 3 So do you think that ultimately the AI companies will start paying for content? Do you think there'll be other ways to monetize it?
Speaker 3 Do you think a completely different model starts to emerge in terms of how the web works?
Speaker 1 There are going to be different solutions for different pieces of the equation.
Speaker 1 At some level, agent kind of commerce is going to be probably the easiest of these to solve, where there are going to be certain companies that say, listen, we'd love your agent to come and
Speaker 1 buy a widget from us. There are going to be others that right now are aggregators of information that the agents can actually disintermediate or disaggregate that content.
Speaker 1 And they'll be actually quite threatened by that. But ultimately, I think commerce, agents and AI is probably ultimately pretty good.
Speaker 3 You have to separate commerce from content, and content is what you're worried about.
Speaker 1 Content is a different piece, where content, the problem right now, is the default assumption has been that you get content for free.
Speaker 1 And it's actually interesting, a lot of the content creators are looking to the law as the solution to this. And generally, I'm a recovering law professor.
Speaker 1 So, you know, pardon me for going down this weird tangent, but it's actually, I think, really interesting, which is
Speaker 1 in copyright law, the more that you are a derivative as opposed to a direct copy, the safer you actually are, the more likely you are to fall under fair use.
Speaker 1 And we've actually seen a number of court cases, two that happened here in California,
Speaker 1 that happened within a week of each other, one of which basically said AI uses of content is fair use, the other one which said it wasn't. There's going to be a whole bunch of things around that.
Speaker 1 But probably the more sensible one is the more that you're creating derivative content, the less likely it is to be a copyright violation.
Speaker 1 But kind of opposite of that, the more that you're creating derivative content, the more likely it is that someone isn't going to go back to that original source.
Speaker 1 And so I actually worry that a lot of the content creators are focused very much on what the law says today and on copyright law, which may not come out in their favor because
Speaker 1 it is actually protecting those derivative uses. I think what we have to figure out is probably a different business model where content creators get compensated.
Speaker 1 And I think the good news comes as you talk to the big AI companies. 80% of the major AI companies are Cloudflare customers, so we have a good relationship with them.
Speaker 1
We talk to them about that all the time. What they all say is, you're absolutely right.
We should be paying for content.
Speaker 1 The devil is in the details, though, because what they all are desperately worried about is how do you make sure that it's a level playing field? They all believe their technology is the best.
Speaker 1
They all believe that on a level playing field, they're going to win. But they're really worried.
Well, if Google still gets content for free, but we have to pay for it, that doesn't seem fair.
Speaker 1 Or if I'm paying for it, but somebody else gets it for free, that doesn't seem fair. So what we've been really working on is how can we create that really level playing field?
Speaker 1 And we think if that's the case, that AI companies will actually be quite willing to pay for that content.
Speaker 3 What are the approaches you've been taking to try and level the playing field here?
Speaker 1
So in order to have an economy, you have to have a market. In order to have a market, you have to have scarcity.
Like no markets exist without some level of scarcity.
Speaker 1 And the problem right now with content is that there is no scarcity. Creators are giving it away for free.
Speaker 1 And so we spent the last year working not only across Cloudflare's existing customers, but going across the entire publisher ecosystem writ large, not just print publishers but video, audio, music, film, across the entire spectrum, and saying: we think that there's a problem with AI, that it's starting to actually take value and not give you anything back.
Speaker 1 And across the board, for every publisher from the Associated Press to Ziff Davis and everything in between, we've seen just incredible resonance with that message where they're all saying, you're absolutely right.
Speaker 1 Our business is getting astronomically harder over just the last six months, and we're seeing less and less of our existing business model working. So we need to do something about that.
Speaker 1 And so what we did on July 1st was we announced what we called the Content Independence Day, where you could actually have independence from these AI companies.
Speaker 1 And we, for free, across all of our customers, whether they paid us or they didn't pay us, started blocking by default any training crawling that was being done by any AI company.
Speaker 1 And it was really important that we focused on that because that meant that we could treat Google the same as everyone else.
Speaker 1 Now what we're doing is we're working with the IETF and other standards organizations to say, let's define how you have to announce what your crawler is doing as it behaves online.
Speaker 1 And we're really encouraged by the early work that's there.
Speaker 1 As that happens, we think we'll be able to set in place really fine-grained permissions for content creators or anyone else to say, here, you know, humans can get my content for free, but robots have to pay for it, and then figure that out.
Speaker 1 That first step of creating scarcity is what you have to do in order to figure out what the market is. And then after that, I think figuring out the market, that's going to be what takes some time.
Speaker 1 And I think we're still experimenting with this.
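A minimal sketch of what such a "humans free, robots pay" exchange could look like at the HTTP level. This is illustrative only: the bot name, the crawler-price header, and the publisher URL are invented for the example, not a published Cloudflare or IETF standard.

```python
import requests

# Hypothetical: a crawler identifies itself and requests a page gated for bots.
resp = requests.get(
    "https://publisher.example/article/123",
    headers={"User-Agent": "ExampleAIBot/1.0"},  # invented bot name
)

if resp.status_code == 402:  # HTTP 402 Payment Required
    # Hypothetical header carrying a per-request price the bot could agree to.
    price = resp.headers.get("crawler-price")
    print(f"Content is gated for bots; quoted rate: {price}")
elif resp.ok:
    print("Access granted")
```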
Speaker 3 It's super interesting, because if you look in the sort of search precedent, we had a robots.txt file, and that's where you kind of specify whether a search engine could come and crawl the content.
Speaker 3 And it sounds like you're really extending some of those concepts on through to the AI layer.
Speaker 1 Yeah, that's right. And I think robots.txt was a relatively simplistic and blunt tool, where today,
Speaker 1 it basically says you can either allow something or disallow something. And you can basically do it either, you can do it on a directory on your site or to the entire site.
Speaker 1 But there's not that sort of fine-grained control. And so we think robots.txt is sort of like the street signs that are on the road.
Speaker 1 A lot of people don't necessarily follow the the speed limit, though. And we actually see plenty of examples.
Speaker 1 In fact, some really prominent companies that do some very, very, very shady things where they basically say, absolutely, we follow the rules of the road.
Speaker 1 But when push comes to shove, if it turns out they're blocked, then all of a sudden they're doing a bunch of things that look not dissimilar to what we see Russian hackers or Iranian hackers do in order to try and get around those blocks.
Speaker 1 At Cloudflare, we're really good at identifying that and stopping it. And we're also really good at embarrassing those companies that do that.
Speaker 1 So, you know, watch our blog, and I have a feeling that some prominent AI companies that are misbehaving are going to get called out pretty soon.
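For reference, the bluntness he's describing: robots.txt only really expresses allow and disallow, per crawler, at the directory or site level. A minimal example (the crawler name is made up):

```
User-agent: ExampleAIBot
Disallow: /            # block this one crawler from the entire site

User-agent: *
Disallow: /drafts/     # block everyone from one directory
Allow: /               # everything else is allowed
```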
Speaker 2 So, you know, should the idea of a marketplace for contributing to training work out, what do you think that does to the landscape of the types of content companies that win or lose?
Speaker 2 Like, I can't imagine it's going to look like it does today because there's some notion of like incrementality.
Speaker 1 I mean, this is going to take us down a little bit of a tangent, but I think a lot of the things that are wrong with the world today are ultimately Google's fault. They're not the worst actor.
Speaker 2 I'm going to start with we're all friends here and then it's all Google's fault.
Speaker 1
Yeah, it's all Google's fault. I think Google has been a net force for good in the world.
I think that they actually believe in ecosystems.
Speaker 1 I think they're trying to do the right thing, but they taught everyone who's a content creator to worship sort of a deity, which is traffic.
Speaker 1 And that was the proxy for value. It was how do you generate the most traffic?
Speaker 1 And that led to Facebook as the next iteration, led to TikTok, led to folks like The Huffington Post, which would literally write a piece of content and then A-B test headlines trying to figure out which one generated the largest cortisol response to get the most clicks.
Speaker 2 Or if you guys remember Demand Media.
Speaker 1 Demand Media. I mean, BuzzFeed.
Speaker 1 I mean, there's a whole bunch of folks that were just trying to figure out, how do we actually stimulate rage and get people stirred up so they'll click on the thing, so that I can either sell them a subscription or sell ads against a piece of content.
Speaker 1 And again, I think that that led to a lot of me-tooism, that led to a lot of people writing the same story with sort of a slightly different bent.
Speaker 1 I don't think it led to a lot of us actually figuring out how to advance human knowledge.
Speaker 1 And so what I think is interesting is if you think about the AI companies en masse, they're a relatively good approximation for the sum of human knowledge.
Speaker 1 Not perfect, but probably the best we've ever had, right? Where they come together. And the reality is that,
in aggregate, they're like a giant block of Swiss cheese, where, yeah, there's a whole bunch of cheese there, but there are holes in the cheese as well.
Speaker 1 And their very algorithms, as they come across a piece of content, prune off the content that is already part of the meaty part of the cheese.
Speaker 1 Whereas the parts that are in holes are actually super valuable to them.
Speaker 1 And so I actually think that if we could create a market where you're rewarding content creators not for who stimulates the most cortisol, but who fills in the holes in the cheese, and you actually pay people for that, that that is a better outcome.
Speaker 1 And that's actually advancing human knowledge. And that's really amazing if we can.
Speaker 3 Isn't that kind of arguably companies like Mercor or Surge or Scale, as they do data labeling and they hire human experts to basically fill out content areas for AI companies?
Speaker 3 So do you basically view this as like a distributed model of that or sort of a web-based?
Speaker 1 I've spent the last year talking to a lot of people. One of the more interesting conversations that I had
Speaker 1 was with Daniel Ek. I flew to Stockholm and saw Daniel. And I think there's really nobody who has compensated content creators at scale like Daniel has.
And it's amazing.
Speaker 1 The day before iTunes launched, the music industry was about an $8 to $9 billion industry. Spotify on its own today pays out over $10 billion a year to the music industry.
And so done right, these can be very much pie expanding.
Speaker 1 There's plenty of cheese to go around
Speaker 1 if we do this correctly.
Speaker 1 And the thing, I remember he was telling me a story which I thought was sort of in the same vein, which was: Spotify actually looks at queries that people have run that they don't have good answers for. Where, you know, somebody searches for, I don't know, a disco song about how fun it is to dance with your dog, right?
Speaker 1 I don't know. And if somebody searches for that and it doesn't get it, they actually publish that list back to content creators and musicians.
Speaker 1 And there are several musicians that are making literally tens of millions of dollars a year just writing songs for what people care about listening to
Speaker 1 that are unmet demand. And so I think that it's not exactly the same as data labeling.
Speaker 1 I think it's actually saying, like for the first time in human history, we can actually very accurately identify where there are holes.
Speaker 1 That's the very nature of the pruning algorithms that these LLM models are trained on.
Speaker 1 And that we could then, if we sort of basically resurface that and say, hey, we don't have enough articles about, you know, the wing-toed ferret, people will actually go out and do that.
Speaker 1 And that, if we can then compensate people on that, that actually is much better than yet another article about what's happening in Washington, D.C., yet another article about, you know, how much San Francisco is in decline or on the rise.
Speaker 1 I mean, again, that's not actually adding to human knowledge. That's just
Speaker 1 rage bait, effectively.
Speaker 3 How do you think that plays out?
Speaker 3 So, if you look at some of these labeling companies that also hire in experts to provide some of the expert content that you mentioned, and if you look at some of the models like Med-PaLM 2 from Google, which is a couple years old now, it outperformed the average human physician in terms of output.
Speaker 3 So if you rated its output against people, at what point do you think we've run out of good content from people? In other words, there is some limit.
Speaker 1 I don't think that's true. I mean, I do think that there will be some,
Speaker 1 like there's always going to be people running new experiments and new tests and finding new things and new discoveries.
Speaker 1 And yeah, maybe we can imagine some distant future where it's all robots that are doing this in the labs, but that's a long ways off.
Speaker 1 And so in the meantime,
Speaker 1
I think we can do that. My Black Mirror kind of version of the future, though, is actually one where we're not going to get rid of journalists.
We're not going to get rid of scientists.
Speaker 1 We're not going to get rid of researchers. You're going to still need that work.
Speaker 1 What I worry about is, if we don't figure out how to compensate independent content creators broadly, that we actually go back almost to the time of the Medicis. The web had historically been this incredible sort of distributor of value creation and knowledge creation.
Speaker 1 You could imagine a world in which all of a sudden you have five big AI companies. You have the conservative one and you have the liberal one, you have the European one and the Chinese one.
Speaker 1 And they all actually hire and run their own team of journalists, researchers, academics, the experts that fill in the
Speaker 1 sort of holes in their cheese. And again, that's not too hard to imagine that in some not so distant future that becomes a thing.
Speaker 1 My hope is that we figure out a way to compensate independent content creators and share that knowledge across all of them, as opposed to creating these silos of knowledge behind each variation of
Speaker 1 an LLM.
Speaker 2 Do you feel like the large labs agree with you on how much can be paid out to creators to fill those holes?
Speaker 2 Because you look at the scale of ad revenue, I mean even ignoring things like commerce and whatever from the open web, but
Speaker 2 in aggregate, what's been paid to labeling companies is, like, $10 billion or less? We're really far off, if people are starting from a base that is very large and free today.
Speaker 1 Well, again, I'm not sure labeling companies is the right, I mean, is labeling companies the right model, or is it, you know, GPU spend, or is it employee spend? You know, I actually think, first of all, the amount that's paid to labeling companies will go up. The amount that's paid to employees, and then GPUs, is continuing to go up. And so the question is, how much value is content actually giving you? And the answer is, you know, somewhere between zero and a hundred percent, right?
Speaker 1 And is it more or less than another unit of GPU time?
Speaker 1 I mean, there's a market that can, that can figure that out.
Speaker 2 From that first-principles view, I see it.
Speaker 1 Yeah. And so there is some value which is there.
Speaker 1 I think the mistake that a lot of content creators made was they actually did deals that don't scale as the business models of the AI companies scale with them.
Speaker 1 So if you do a deal that's like $20 million and you get all my content, I mean, that's an incredibly naive deal, right?
Speaker 1 It might seem like a great deal to the content providers day one, but it's exactly the opposite of what you want to do.
Speaker 1 What you really want to do is say, okay, you know, if you imagine that there were a way to, for all of the content that is available, you say, here's how much that is creating value.
Speaker 1 That's going to be some percentage of whatever the subscription fee is for your AI model. Or if you're an ad-supported AI in the future, it's going to be some percentage of that.
Speaker 1 And then as the AI companies grow,
Speaker 1 which will inherently then mean the ad revenue shrinks, you share in that upside as your downside gets diluted. And that,
Speaker 1 I think that, you know, there's still going to be advertising out there. There's still going to be subscriptions.
Speaker 1 There's still going to be tentpole content that people just have to consume, even if they're AIs.
Speaker 1 But what you also want to do is allow that content to get into the AI systems and the content creators should get compensated for that.
Speaker 1 And again, a market will determine, if there's scarcity, a market's going to determine how valuable that actually is.
Speaker 3 One of the things that Cloudflare is known for, to your point, is really speeding up web pages and the internet.
Speaker 3 And as we shift from serving pages to sort of models being run, you're kind of shifting from a world of like caching and serving pages to like inference.
Speaker 3 How do you think about that in the context of Cloudflare, or some of the directions that you all are going?
Speaker 1 Well, I mean, I think we leaned in heavily. I mean, nobody remembers this, but back in 2020, we partnered with this, you know, graphics chip company in order to put GPUs at the edge of our network, in order to allow people to do inference. And by the way, it was crickets.
Like, we launched this product, no one responded. There wasn't a single sales inquiry to use it.
Speaker 1 And so we apologized to the partner who happened to be NVIDIA and kind of went on our way. Four years later, the market was ready for it.
Speaker 1 We basically just took out the same press release and issued it again. And then, you know, it's taken off like gangbusters.
Speaker 1 I think that we've leaned in heavily to, you know, we believe that a lot of inference is going to happen on your end device, but there will always be some model which is too big or too resource intensive.
Speaker 1
And in that case, the next best place to run it is going to be inside the network, at the edge. And that's what we're delivering.
More importantly, I think that
Speaker 1 if you look at whether it's MCP or whatever the next protocol that connects agents to
Speaker 1 services and allows these things to connect, inherently, because of how much of the internet we sit in front of, they have to pass through us.
Speaker 1 And so we're investing heavily behind those protocols, making sure that they have all of the security, the underlying rails and payments infrastructure and everything else that you need.
Speaker 1 And my hunch is that what we solve in the content space, and the rails that we create for the payments there, very naturally then become one of the models for agent-to-agent payment infrastructure, over MCP or whatever the final protocol becomes, to be able to handle that as well.
Speaker 1 So Cloudflare fundamentally is a network.
Speaker 1 And I remember when cryptocurrency and blockchains and everything were getting big, people are like, what are you,
Speaker 1 aren't you worried about this? And I'm like, they still need a network.
Speaker 1 As AI gets big, it still needs a network. And so I think we sit in the center of this.
Speaker 1 And as you especially have more agent-to-agent communication, I think the network actually becomes more and more important.
Speaker 2 Is there a bet you're making on like what changes in terms of models or compound systems that drive more model traffic to the architecture you described, where it's in network versus in large data center today?
Speaker 1 I wish I could say we were that strategic. I mean, I think we go to wherever the market demands that we go. And so, including building a neocloud.
I don't even know what a neocloud is, but sure.
Speaker 1 So I think we're fundamentally
Speaker 1 always just trying to say, how do we respond to whatever either our own team needs as customer zero or what our customers need? And the fact that, again, 80% of the AI companies are using us, they are constantly pushing us: can you do this, can you do that? And I think our team has been uniquely good at being able to execute and innovate and stay at least, you know, up with whatever the trends are. And that's, again, I think
Speaker 1 I'm proud of the fact that we have ended up in a lot of these conversations, and that so much of the internet, you know, does flow through us, that one way or another, I think we end up being in the center of a lot of these transactions.
Speaker 2 Does that imply any particular belief around open or closed models as people continue to develop capabilities?
Speaker 1 We have closed models that run on us.
Speaker 1 We have a lot more open models that run on us.
Speaker 1 I tend to,
Speaker 1 we have historically been a company that believes very much in open source. And most of the things that we build internally,
Speaker 1 as long as we can, we try to open source all of that technology. And so I tend to be in the pro open models.
Speaker 1
We work very closely with the Meta team and Llama and everything that they're doing. But again, I think there's going to be different flavors of this.
And
Speaker 1 again,
Speaker 1 we're happy to have customers in either end of that spectrum. I'm a little bit
Speaker 1
quite skeptical of the, if we allow open source models, the world is going to end arguments. That seems histrionic to me.
There are things we should worry about.
Speaker 1 Like, you know, I think some of the synthetic pathogens and other things that can be created.
Speaker 1 But it seems to me like the place to regulate that and control that is in the machine that can actually print the pathogens, not in the AI model that can come up with
Speaker 1 what it is. That seems like a pretty flimsy argument for why we shouldn't have open source.
Speaker 2 What needs to happen, sorry, I'm still going back to, like, the shape of the web, what needs to happen for your view of a marketplace for content to emerge, right?
Speaker 2 Like what are the next signs that this is actually like happening?
Speaker 1 Well, I think the very tactical next step is we've got to get Google to not
Speaker 1 be a special snowflake. Because Google has had such a dominant position in search,
Speaker 1 they almost believe that it is their right to have access to content without having to pay for it. And so the conversation that we have with them is, we get it.
Speaker 1 The deal that you made with content creators in the past was they give you their content and you send them a certain amount of traffic.
Speaker 1 Over time you have taken just as much of their content, but you've sent them one-tenth of the traffic that they used to get.
Speaker 1 And if we just plot those trends out going forward, it's going to become, you know, a smaller and smaller picture. At some point,
Speaker 1 the content creators will say, we're just going to block Google, right? Now, that was unthinkable 10 years ago. It seemed radical six months ago.
Speaker 1 It is what people are talking about today. Why Google is so important is that, when you talk to all of the other AI companies, Google is the one company that they're the most afraid of. And the reason they're most afraid of them is because they think Google has privileged access to content in a way that is much more difficult for them to get. And so what I think we have to be able to do, first of all, is to say to Google: listen, you can still do search indexing. But if your bot is taking content and then transforming it in some way, making it into the answer box, making it into AI overviews, turning it into Gemini, that's a different action.
Speaker 1
That's a different deal. And you have to be in the same bucket as everyone else.
And Google's going to resist that. Now, I think the good news is they really do believe in the ecosystem.
Speaker 1 I think they are trying to do the right thing that's out there.
Speaker 1 And, you know, maybe this is not the good news for Google, but it's the good news, I think, for the web, which is that they have a ton of regulatory and legal and legislative pressure coming down on them.
Speaker 1 So one way or another, I think we will flatten that out. Once that happens, I think that's when we can actually start to say, we're going to shut off access to content unless you pay for it.
Speaker 1 In the beginning, most of the deals that are done, the actual money being changed will be between large content producers and large AI companies.
Speaker 1 That's happening right now, where Condé Nast or Dotdash Meredith or the New York Times or Reddit is doing a direct deal with a large model company. That'll happen a bunch.
Speaker 1 Where I think we can play a role is when you have either a large content provider trying to make a deal with all of the AI startups that are out there, which they really do want to do and they want to do it in a way that scales, but they can't do one-off deals in those cases.
Speaker 1 Or you have the long tail of content with all of the different AI companies. In both of those cases, I think Cloudflare can play a role in helping set what are sort of basic rates that are there.
Speaker 1 And how that model looks, you know, I'm not sure.
Speaker 1 It might be that we negotiate basically on behalf of a number of the content providers with all of the different AI companies, basically a pool of capital, much like how Spotify does it, and then distribute that out.
Speaker 1 It might be micropayments every time you access a piece of content. It might be that training is actually a different payment rate than search.
Speaker 1 That's, I think, something that we'll have to figure out. But step one is we've got to get Google to play by the same rules that every startup, every other company is playing by. And the minute we do that, I think the rest of the marketplace will actually happen a lot faster than you think.
Speaker 2 For any content company or individual provider, since that used to be a big part of the web, there's no business model for them today to make money off of content going into these AI experiences.
Speaker 2 And it's not easy for them to predict what is incremental to models. What advice would you have for them?
Speaker 1 So I think the first thing is you've got to get back to controlling your content. So you have to create scarcity from the beginning.
So how do you make sure that you're not just giving your content away for free?
So how do you make sure that you're not just giving your content away for free?
Speaker 1
And again, we've made that easy. There are other companies that are working to try and make that easy as well.
And so one way or another, create scarcity and then start to have conversations.
Speaker 1 You can see which AI companies are the most likely to deal with it.
Speaker 1 So just today there was news that Google is starting a pilot project to start to pay news providers, something they swore they would never do. But again, I think that they can see,
Speaker 1 and because they do believe in the ecosystem, they can see that this has to happen, because otherwise the incentives for creating content go away.
Speaker 1 If you can no longer sell something, if you can no longer sell ads against something, if you can no longer even get the ego hit.
Speaker 1 Because if people aren't going to the original source, you don't even know.
Speaker 1 If you write some incredibly influential piece that ends up in millions of AI responses, you don't actually ever even know that happened. You're yelling into the void.
Speaker 1
We've got to figure something out around that piece.
So I think the first step for content creators is recognize that
Speaker 1 the business model of the web is changing. Second, recognize there is something you can do about it, right? You can actually create this scarcity.
Speaker 1 And then third, actually participate, start to go out and say, based on the data, hey, you keep trying to crawl my stuff.
Speaker 1 Let's figure out a way that we can have some fair exchange of value for that.
Speaker 3 One other thing that you mentioned, sort of
Speaker 3 a little bit as a side note when we were talking earlier, was around how you felt that a lot of the models would actually be running on device and running locally.
Speaker 3 And then obviously there'd be things on the edge or in the cloud that would be the bigger models, perhaps doing more complex tasks. When do you think that'll happen?
Speaker 3 Do you think that's based on when the device is advanced in certain ways? Is it model size? Is it something else?
Speaker 1 Well, I think a lot of it's happening today. You know, on your phone, there's a lot that your phone is doing locally without it having to go out.
Speaker 1 And there are certain places, certain applications, where it has to be local.
Speaker 1 If you have a driverless car and there's a red ball bouncing through a yard with a little girl running after it, whether to hit the brakes or not can't be dependent on network conditions, right?
Speaker 1 So that has to run locally. I think that the big place where there's going to be exciting innovation, which does not feel like it today, is really in how do you make things, especially on the inference side, significantly more power efficient.
Speaker 1 That ends up being the biggest limiting factor. Apple has shown that it's possible and that you can actually have relatively power efficient GPUs and TPUs that are out there.
Speaker 1 When we talk with the folks at NVIDIA, it feels a little bit like talking to Intel back, you know, in our case in 2010 or Apple's case in 2005, where they were like, you're doing it wrong if you care about power efficiency.
Speaker 1 I remember sitting in Intel's research lab outside of Portland in 2011. We were a tiny little startup, but we were doing interesting, innovative things, and so we were using their chips. And so they brought me in. And I was like, the only thing we care about is cores per watt.
Speaker 1
And we just need as many cores per watt as you can possibly deliver. And they just kept saying, you're doing it wrong.
You should be water cooling your systems or doing these things.
Speaker 1 And we kept trying to explain, like, we don't have that luxury.
Speaker 1 We have to go into what are oftentimes the oldest, most legacy data centers in the world where there is a relatively limited power envelope. And we've got to fit within that.
Speaker 1
I think the same thing is going to happen in the AI space. And I'm very hopeful.
NVIDIA has been a terrific partner to us, but it's been sometimes frustrating to see how, you know,
Speaker 1 more GPU capacity comes along with having to stand up your own mothballed nuclear power facility. Like, that can't be the solution.
And there's no physics reason why it needs to be.
Speaker 1 And so I think that we're actively looking around the ecosystem, trying to figure out who can deliver the most
Speaker 1 tensor units per watt or
Speaker 1 whatever the sort of GPU equivalent is.
Speaker 1 And that, I think, is going to be the big unlock that allows you to have more running on your device, whether that's your phone or your driverless car, or frankly, at the edge of the network.
Speaker 1 Because again,
Speaker 1 we also have to live within a power envelope, which is not the same as if we were standing up a
Speaker 1 100-megawatt data center.
Speaker 3 Yeah, I was kind of thinking of it, I guess, from three perspectives. I mean, to your point, there's the actual chips.
Speaker 3 And in the context of mobile, obviously there was ARM and then Qualcomm as other approaches to basically get to some of the things that you're mentioning for devices.
Speaker 3 Separate from that, there's actual model size and inference time and a few other things that are kind of overlapping but different. Yeah.
Speaker 3 And so to some extent, it's how large and how performant of a model can you actually load on a device, and when does that happen, and how well can it run?
Speaker 3 And so I was just a little bit curious how you thought about all those different pieces because there's also the model component or side of that that seems to matter quite a bit. And it's all coming.
Speaker 3 I mean, it's all inevitable. I'm just sort of wondering about that.
Speaker 1 I think that
Speaker 1 most of the AI companies are still relatively inefficient in terms of their utilization, from everything that we can see. And so,
Speaker 1 it is in our interest, based on just how our business model works and everything else, to make inference, to make anything we do as efficient as possible.
Speaker 1 Because we only charge customers based on the actual work that we do.
Speaker 1 We're different than the hyperscalers. Like the hyperscalers, you go out and you rent a GPU.
Speaker 1 They don't care if you use it or you don't use it. There's no incentive actually for them to make.
Speaker 1 In fact, if tomorrow someone announced that they had made inference 100 times more efficient, that's great news for us.
Speaker 1 It's terrible news for the hyperscalers because our business models are very different.
Speaker 1 I think that the great lesson of DeepSeek, I just wish that it had been a group of students out of like Hungary that had designed it, not out of China, because there were some really significant, innovative steps that they took to make essentially training and inference significantly more efficient.
Speaker 1 We all got distracted in the U.S. by the fact that it was China and, you know, was it real or not real or anything else? It was real.
Speaker 1 There was really great science that was done there to be more efficient.
Speaker 1 I think we've just barely scratched the surface on what we can do around model compression, what we can do around really just, you know, much more efficient pruning.
Speaker 1 There's a ton in these models, branches of the tree where, just literally,
Speaker 1 the probabilities of going down them are so incredibly low that you can prune those branches off fairly efficiently and still get incredible performance.
Speaker 1 So I think we have an enormous amount of the kind of hardcore computer science, which is different than kind of the hardcore AI science to do to just say, how can we now take these things and make them massively more efficient?
Speaker 1 And my hunch is that, you know, it is not particularly long before you're running something which is the equivalent of kind of the current generation of ChatGPT on your iPhone or your Android device.
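To make the pruning idea concrete, here is a minimal magnitude-pruning sketch in Python. It's an illustration of the general compression technique under simple assumptions, not how Cloudflare or any particular lab does it; real systems typically prune structurally and fine-tune afterward.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the lowest-magnitude fraction of weights (the 'low-probability branches')."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

# Toy example: a random "layer" where 90% of the weights are pruned away.
rng = np.random.default_rng(0)
layer = rng.normal(size=(512, 512))
pruned = magnitude_prune(layer, sparsity=0.9)
print(f"nonzero weights: {np.count_nonzero(pruned)} of {layer.size}")
```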
Speaker 2 Does that advancement
Speaker 2 happen at the model providers, in open source, at an infrastructure provider?
Speaker 1 I mean, it's hard to predict where it is. I mean, I know that, like, we're not investing in how we build a frontier model.
Speaker 1
That's not our job. We are investing on how we take any models that we're running and run them significantly more efficiently.
So that's how we think about it.
Speaker 1 I think the question is, who's going to be the VMware of AI, right? And who's going to create kind of that ability, because we haven't even done that basic work. Like, today,
Speaker 1 for the most part, if you want to, you know, spin up a GPU, you're taking an entire VM, there's not even a container, because you've got to get that low level, or you're taking an entire machine that is running. And that's extremely expensive. And in most cases, with most of the hyperscalers, to get anything close to attractive pricing, you have to commit to a year of that.
Speaker 1 And now it's up to you as a customer to figure out all that efficiency. Someone will come along and figure out a way to say, here's how we can slice these things up.
Speaker 1 Here's how we can make them more efficient.
Speaker 1 And we're just going to speed-run essentially the CPU efficiency gains, including a lot of the security things, like Spectre and the other speculative attacks that you had in CPUs. Those will come to GPUs. All that stuff is coming.
We're going to speed-run the last 30 years of CPU efficiency gains in the next five to 10 years in GPUs.
Speaker 3 I think fundamentally, there's a lot of really interesting next-gen models around physics and materials and efficient things that may actually be interesting.
Speaker 3 Obviously, a few really odd future-looking things on the infrastructure side, I think, will be important.
Speaker 3 You've probably been following a lot of the sort of agentic-related infrastructure that actually is necessary for multi-step agents.
Speaker 3 So, I think there'll be a few big companies there, and then there's all the vertical app things.
Speaker 1 We will probably compete in that agentic infrastructure space.
Speaker 1 It will probably not be one provider. It will be something where you're going to have to have a whole bunch that actually work together in some way.
Speaker 1 And so figuring out the standards behind that, I think, is going to be important.
Speaker 1 And, you know, I think whether it's MCP, or Google did their own flavor of it, which sort of felt very Microsoft-y, kind of embrace-and-extend. But
Speaker 1 all of that space is going to be important.
Speaker 3 Sort of like Temporal, LangGraph. All those things are kind of early indicators of new agentic infrastructure that's going to be
Speaker 2 like an entire, very large set of domains
Speaker 2 where a lot of the architectures probably apply.
Speaker 2 And like the data collection, like efficient data collection is the question because we're not going to get robots from Common Crawl, but we are probably going to get them.
Speaker 2 We just have to figure out how to pay for the data.
Speaker 2 And so I think figuring out if there are interesting models to get to generalization in these other domains is just something I'm looking at.
Speaker 1 The other thing that's going to be really interesting is, you know, I was actually pretty much a
Speaker 1 skeptic around, like, blockchain and cryptocurrency.
Speaker 1 It may be that this shift is the thing, because now we're looking at this and we're thinking, okay, let's say we got to a place where it was actually micropayments for every page view.
Speaker 1 I mean, we do
Speaker 1 something like 15 trillion requests every day.
Speaker 3 It's an obvious use case.
Speaker 1 But how do you then scale these things to be able to work that way? I mean, you can't do that with Bitcoin, right?
Speaker 1 And you can't even do that with Solana or other things.
Speaker 1 I think it's going to be interesting how all of these things that sort of have developed over the last 10 years, how they kind of come back together in interesting ways to invent whatever that next future is going to be.
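Rough arithmetic on why that scale rules out today's chains, taking the 15-trillion-per-day figure above at face value; the chain throughputs below are commonly cited ballparks, not benchmarks:

```python
requests_per_day = 15e12
requests_per_sec = requests_per_day / 86_400   # ~1.7e8 requests per second

bitcoin_tps = 7          # rough, commonly cited ceiling
solana_tps = 65_000      # rough theoretical peak

print(f"{requests_per_sec:.1e} req/s")
print(f"~{requests_per_sec / bitcoin_tps:,.0f}x Bitcoin's throughput")
print(f"~{requests_per_sec / solana_tps:,.0f}x Solana's throughput")
```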
Speaker 3
That's super interesting. Yeah, I think a lot of people also talked about agentic permissioning and identity.
That's right. It's part of that too.
Speaker 3 Like, how do you actually embed identity on the blockchain?
Speaker 1 Well, and also, I mean, the whole question of identity is going to be really, really interesting because there are times where I might want you, the human, to be there and be okay with that.
Speaker 1 There might be times where I'm willing to let you, the agent connected with a human, be there. And then there might be times I'm willing to let some sort of agent that is
Speaker 1 self-directed be there. And setting up kind of the differences in those permissions is going to be really interesting. Browsers are another place. I mean, everybody is building a browser right now. And that's an interesting question for us.
Speaker 1 Like, how much do we lean into supporting that versus how much do we say, okay, that's actually just a way of leaking data that's back out?
Speaker 1 And so I think getting to some sort of way of saying, here's who I am
Speaker 1 as an agent, a bot, a browser.
Speaker 1 Here is what I have agreed I will do with whatever it is that I'm taking from you. And,
Speaker 1 you know, cryptographically signed, like
Speaker 1 I agree to these things.
Speaker 1 I think that
Speaker 1 it'll probably be the same fundamental infrastructure that regulates how bots access the web, that ends up regulating how browsers access the web.
Speaker 1
And a browser that takes data and then immediately feeds it back to an LLM might have more restricted access than one that doesn't. And that's going to be an interesting market.
Speaker 3
That's super interesting. Yeah.
And I guess to the identity side,
Speaker 3 ZK is a very natural way to actually do a lot of really interesting things: prove you have the right to a credential but not show the credential, do other things that are complicated.
Speaker 3 And so the blockchain is perfect for that.
Speaker 1 It feels like all the building blocks for this have been coming for quite some time.
Speaker 1 And it wasn't until fairly recently where it felt like, oh, that starts to be the shape of how these blocks come together.
Speaker 1 But it feels like, both from kind of the need to change the business model, and from the fact that the technologies have matured to the point that they're starting to be able to handle these volumes and
Speaker 1 have the broad adoption to actually take off. Again,
Speaker 1 all the work of the last 10 years. And if you'd asked me even six months ago, how is this all going to come together?
Speaker 1 In the last few months, it's felt like, oh, now you can start to see the shape of what this future might look like.
Speaker 2 I think that's what we have time for. Thanks, Matthew.
Speaker 3 Thank you so much for joining us. Thanks for having me.
Speaker 2
Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces.
Follow the show on Apple Podcasts, Spotify, or wherever you listen.
Speaker 2 That way, you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.