The Best of 2025 (So Far) with Sarah Guo and Elad Gil

2025 has thus far been a year of great leaps and advances in AI technology. And Sarah and Elad have spoken with some of the most enterprising founders and scientific minds in the field of AI today. So we’re revisiting a few of our favorite conversations on No Priors so far in 2025 – Winston Weinberg (Harvey), Dr. Fei-Fei Li (World Labs), Brendan Foody (Mercor), Dan Hendrycks (Center for AI Safety), Noubar Afeyan (Flagship Pioneering), Brandon McKinzie and Eric Mitchell (OpenAI o3), Isa Fulford (OpenAI), Arvind Jain (Glean), and Dr. Shiv Rao (Abridge).

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil

Chapters:

00:00 – Episode Introduction

00:21 – Winston Weinberg on Leaning into New Capabilities

02:01 – Dr. Fei-Fei Li on Spatial Intelligence

04:13 – Brendan Foody on AI Disruption in the Workforce

06:10 – Dan Hendrycks on the Geopolitics of Superintelligence

08:06 – Noubar Afeyan on Entrepreneurship

10:38 – Brandon McKinzie and Eric Mitchell on Reasoning Models

12:41 – Isa Fulford on Training Deep Research

13:49 – Arvind Jain on Innovating Enterprise Search

16:21 – Dr. Shiv Rao on AI’s Human Impact

18:58 – Conclusion


Runtime: 18m

Transcript

Speaker 1 2025 has been another remarkable year in AI. This week on No Priors, we're sharing our favorite moments from the podcast from the year so far.

Speaker 1 We've talked to visionary leaders at Harvey, OpenAI, Glean, Abridge, and more. We also talked to legends of science like Dr. Fei-Fei Li and Noubar Afeyan.

Speaker 1 But first, let's start with a moment that captures the magic of leaning into new capabilities at the right time.

Speaker 1 Harvey CEO Winston Weinberg discovered an extraordinary opportunity hidden in plain sight.

Speaker 2 Gabe and I actually had met a couple years before and I definitely didn't know anything about the startup world and didn't have a plan of doing a startup.

Speaker 2 And what had happened was he showed me GPT-3, which at the time was public.

Speaker 2 And I was, first of all, just incredibly surprised that no one was talking about GPT-3 and no one was using it in any way, shape, or form.

Speaker 2 And he showed me that, and I showed him kind of my legal workflows.

Speaker 2 And the kind of aha moment was we went on r/legaladvice, which is basically, you know, a subreddit where people ask a bunch of legal questions.

Speaker 2 And almost every single answer is, so who do I sue?

Speaker 4 Almost every single time.

Speaker 2 And we took about 100 landlord-tenant questions and we came up with kind of some chain of thought prompts.

Speaker 2 And this is before, you know, anyone was talking about chain of thought or anything like that. And we applied it to those landlord-tenant questions and we gave it to three landlord-tenant attorneys.

Speaker 2 And we just said, nothing about AI. We just said, here is a question that a potential client asked, and here is an answer.
Would you send this answer without any edits to that client?

Speaker 4 Would you be fine with that? You know, is that ethical?

Speaker 2 Is it a good enough answer to send? And 86 out of 100 were yes.

Speaker 2 And actually, we cold emailed the general counsel of OpenAI and we sent him these results. And his response basically was, oh, I had no idea the models were this good at legal.

Speaker 2 And we met with the C-suite of OpenAI a couple weeks after.

Speaker 1 Now, from legal reasoning to spatial intelligence. The legendary Dr. Fei-Fei Li opened our eyes to an entirely different dimension of AI capability.

Speaker 6 I think from a neural and cognitive science point of view, that spatial intelligence is a really hard problem that evolution has to solve for animals.

Speaker 6 And what's really interesting is I think animals have solved it to an extent, but not fully solved it. It's one of the hardest problems because, um, what is the problem an animal has to solve?

Speaker 6 Animals have to evolve the capability of collecting light in something,

Speaker 6 which we call eyes, mostly. And then, with that collection of eyes, they have to reconstruct a 3D world in their minds somehow so that they can navigate and they can do things.

Speaker 6 And of course, they can interact.

Speaker 6 For humans, we're the most capable animal in terms of manipulation. We can do a lot of things, and all this is spatial intelligence to me. That's just rooted in our intelligence. What is interesting is that it's not a fully solved problem, even in animals. For example, for humans, right?

Speaker 6 If I ask you to close your eyes right now and draw out or build a 3D model of the environment around you, it's not that easy.

Speaker 6 We don't have that much capability to generate extremely complicated 3D models until we get trained.

Speaker 6 You know, there are some of us, whether they're architects or designers or just people with a lot of training and a lot of talent. And

Speaker 6 that's a hard thing to do. And imagine you could do it at your fingertips much more easily and allow much more fluid interactivity and editability. That would just be a whole different world for people, no pun intended.

Speaker 1 Data is the beast feeding the AI train. And thus Mercor's CEO, Brendan Foody, is working with major AI labs on how to build what's next.

Speaker 1 He gives a clear prediction about what's coming for the workforce.

Speaker 8 I think displacement in a lot of roles is going to happen very quickly and it's going to be very painful

Speaker 8 and a large political problem. Like I think we're going to have a big populist movement around this and all the displacement that's going to happen.

Speaker 8 But one of the most important problems in the economy is figuring out how to respond to that, right?

Speaker 8 Like how do we figure out what everyone who's working in customer support or recruiting should be doing in a few years? How do we reallocate wealth

Speaker 8 once we approach super intelligence,

Speaker 8 especially if the value and gains of that are more of a power law distribution?

Speaker 8 And so I spend a lot of time thinking about like how that's going to play out.

Speaker 8 And I think it's really at the heart of it.

Speaker 3 What do you think happens eventually?

Speaker 9 X percent of people get displaced from, like, white-collar work.

Speaker 10 What do you think they do?

Speaker 8 I think there's going to be a lot more of the physical world. I think that there's also going to be a lot of, like, niche...

Speaker 3 What does the physical world mean?

Speaker 8 Well, it could be everything ranging from people that are creating robotics data, to people that are waiters at restaurants, or are just, like, therapists, because people want human interaction, whatever that looks like. I think that automation in the physical world is going to happen a lot slower than what's happening in the digital world, just because of so many of the, like, self-reinforcing

Speaker 8 gains and a lot of self-improvement that can happen in the virtual world, but not the physical one.

Speaker 1 Which brings us to one of the biggest questions of our time. How do we navigate the geopolitical implications of superintelligence?

Speaker 1 Dan Hendrycks, the director of the Center for AI Safety, has an answer.

Speaker 5 Let's think of what happened in nuclear strategy. Basically,

Speaker 5 a lot of states deterred each other from doing a first strike because they could then retaliate. So they had a shared vulnerability.

Speaker 5 So they were like, we're not going to do this really aggressive action of trying to make a bid to wipe you out because that will end up causing us to be damaged.

Speaker 5 And we have a somewhat similar situation later on when AI is more salient, when it is viewed as pivotal to the future of a nation, when people are on the verge of making a superintelligence, more or less, when they can, say, automate pretty much all AI research.

Speaker 5 I think states would try to deter each other from trying to leverage that to develop it into something like a super weapon that would allow the other countries to be crushed or use those AIs to do

Speaker 5 some really rapid, automated AI research and development loop that could have it bootstrapped from its current levels to something that's super intelligent, vastly more capable than any other system out there.

Speaker 5 I think that later on, it becomes so destabilizing that China just says, we're going to do something preemptive, like do a cyber attack on your data center. And the U.S.
might do that to China.

Speaker 5 And Russia, coming out of Ukraine, will reassess the situation,

Speaker 5 get situational awareness, think, oh, what's going on with the U.S. and China?

Speaker 5 Oh, my goodness, they're so ahead on AI. AI is looking like a big deal.
Let's say it's later in the year when a big chunk of software engineering is starting to be impacted by AI.

Speaker 5 Oh, wow, this is looking pretty relevant. Hey, if you try and use this to crush us, we will prevent that by doing a cyber attack on you.

Speaker 5 And we will keep tabs on your projects because it's pretty easy for them to do that espionage.

Speaker 1 Noubar Afeyan has been thinking about how biotech gets built and how to change the game for three decades. His breakthroughs have impacted global health.

Speaker 1 He's the founder and CEO of Flagship Pioneering and the co-founder of Moderna. He wants to make entrepreneurship a scientific effort, not a random one.
And he thinks AI can help.

Speaker 11 The motivation for Flagship stems from what I was doing before, which was that I started a company in 1987, when 24-year-old immigrants didn't start companies in this country. Instead, it was kind of like former Merck senior executives or IBM senior executives who were the only ones entrusted with the massive amounts of venture capital, namely the $2 to 3 million per round that used to go into ventures.

Speaker 11 So this was very early days. And I had the kind of chance opportunity to start a company right out of my graduate school and ended up raising quite a bit of venture money and eventually

Speaker 11 kind of went down a path of entrepreneurship.

Speaker 11 Along the way, one of the things that interested me was why it is that kind of the entrepreneurial process was supposed to be random, improvisational, kind of idiosyncratic, almost emotional, gamey.

Speaker 11 All of those things I kind of thought were a bit of a put-off

Speaker 11 when it comes to actually doing things in a serious professional way. And I kind of used to go around in the very early 90s saying, why isn't entrepreneurship a profession?

Speaker 11 And if it was going to be a profession, how could it be a profession?

Speaker 6 What do you mean by gamey?

Speaker 11 Because it's like supposed to fail most of the time. And once in a while, you win and then you celebrate the win.
And what I mean is like it, it's random.

Speaker 11 But not only random, but there's like winners and losers and keeping score. I don't know.
It's maybe the wrong word, but I just mean like people even call gamification in the

Speaker 11 software space. There is a version of this.
Like I don't mind being playful because if you're overly serious, sometimes you miss things, but it can't just all be play.

Speaker 11 We take hard-earned money, we deploy it to do things that are damn near impossible. Once in a while, we reduce them to practice so they become not only possible, but valuable.

Speaker 11 And yet, people treat it like, oh, well, you know, it didn't work. There's 20 different things we tried.
One of them worked.

Speaker 11 That, I don't know, as an engineer by background, as a scientist, I just thought that what we do, especially, listen, in healthcare, especially in climate, especially in kind of like agriculture, food security, you can't think of this as, you know, like shots on goal and this and that.

Speaker 11 You've got to kind of say, hey, we can get better at this.

Speaker 1 Reasoning is the biggest paradigm shift in AI architecture since the Transformer. Brandon McKinzie and Eric Mitchell from OpenAI explained a crucial insight about reasoning models.

Speaker 3 I can give maybe very concrete cases for like the visual reasoning side of things.

Speaker 3 There's a lot of cases where, and back to also the model being able to estimate its own uncertainty, you'll give it some kind of question about an image and the model will very transparently tell you in its chain of thought, like, I don't know, I can't really see the thing you're talking about very well.

Speaker 3 Or like, it almost knows that its vision is not very good.

Speaker 3 And what's kind of magical is, like, when you give it access to a tool, it's like, okay, well, I got to figure something out.

Speaker 3 Let's see if I can manipulate the image or crop around here or something like this. And what that means is that

Speaker 3 it's like much more productive use of tokens as it's doing that. And so your test-time scaling slope goes from something like this to something much steeper.

Speaker 3 And we've seen exactly that.

Speaker 3 The test time scaling slopes without tool use and with tool use for visual reasoning specifically are very noticeably different.

Speaker 12 Yeah, I was also saying for like writing code for something like

Speaker 12 there are a lot of things that an LLM could try to figure out on its own, but would require a lot of

Speaker 12 attempts and self-verification that you could write a very simple program to do in like a verifiable and

Speaker 12 much faster way. So

Speaker 12 I do some research on this company and use this type of valuation model to tell me

Speaker 12 what the valuation should be.

Speaker 12 You could have the model try to crank through that and fit those coefficients or whatever in its context. Or you could literally just have it write the code to just do it.

Speaker 12 the right way and just know what the actual answer is. And so

Speaker 12 yeah, I think like part of this is you can just allocate compute a lot more efficiently because you can defer stuff that the model doesn't have comparative advantage to doing to a tool that is like really well suited to doing that thing.

Speaker 1 Sometimes the most profound moments in AI development aren't the grand theoretical breakthroughs. They're based on taste, data generation, and grinding work.

Speaker 1 The visceral experience of watching something you hoped would work actually come to life. Isa Fulford from OpenAI captures that moment perfectly.

Speaker 1 Here, she's describing the training that went into deep research.

Speaker 7 It really was one of those things where we thought that, you know, training on browsing tasks would work. You know, we felt like we had good conviction in it.

Speaker 4 But actually,

Speaker 7 the first time you train a model on a new data set using this algorithm and seeing it actually working and playing with the model was pretty incredible, even though we thought it would work.

Speaker 7 So honestly, just that it worked.

Speaker 7 so well was pretty surprising

Speaker 7 even though we thought it would, if that makes sense. Yeah, yeah, it's the visceral experience of, like, oh, the path is paved with strawberries or whatever.

Speaker 13 Exactly.

Speaker 7 But then sometimes some of the things that it fails at are also surprising.

Speaker 7 Like sometimes it will make a mistake where it will do such smart things and then make a mistake where I'm just thinking, why are you doing that?

Speaker 13 Like, stop. So I think there's definitely a lot of room for improvement.
But yeah, we've been impressed with the model so far.

Speaker 1 One of the biggest surprises of AI and a core principle for us here at Conviction is how it can make bad markets suddenly good ones. The right technology can meet the right moment in unexpected ways.

Speaker 1 Arvind Jain built Glean in what everyone said was a graveyard market: enterprise search.

Speaker 9 It was like a graveyard, like, you know, of all these companies that tried to solve the problem and couldn't.

Speaker 4 Part of it was just that I think search is a hard problem.

Speaker 9 In an enterprise, like even getting access to all the data that you want to search was such a big problem. In the pre-SaaS world,

Speaker 9 there was no way to sort of go into those data centers, figure out where the servers were, where the storage systems were, try to connect with information in them.

Speaker 9 It was a big challenge. So SaaS actually solved that issue.
So like search products, like most of them, most of those companies started in the pre-SaaS world.

Speaker 9 They failed because you just couldn't build a turnkey product. But SaaS actually allowed you to actually build something,

Speaker 9 which is my insight was that like, look, the enterprise world has changed. We have these SaaS systems now.
And SaaS systems don't have versions. Like everybody, all customers have the same version.

Speaker 9 And, you know, they are open, they're interoperable, you can actually hit them with APIs and get all the content. I felt that the biggest problem was actually solved, which was that I could actually easily go and bring all the enterprise information and data in one place and build this unified search system on top.

Speaker 3 So that was actually a big unlock.

Speaker 9 And by the way, the origins of Glean is, so at Rubrik, you know, we had this problem. Like, you know, we grew fast.

Speaker 9 We had a lot of information across 300 different SaaS systems and nobody could find anything in the company. And people were complaining about it in our pulse surveys.

Speaker 9 And, you know, I always run IT in my startups.

Speaker 9 And so there was a complaint that, you know, came to me. Like, I had to solve it, so I tried to buy a search product, and I realized there was nothing to buy. I mean, that's really the origins of how Glean got started as a company. And so that was, like, you know, one big issue: SaaS made it easy to actually connect, you know, your enterprise data and knowledge to a search system, so that actually made it possible for us, for the very first time, to build a turnkey product. But there are a lot of other advances as well. You know, one is, look, businesses have so much information and data. One interesting fact: one of our largest customers, they have more than 1 billion documents inside their company.

Speaker 9 Now hear this: when Elad and I were working on search at Google in 2004, the entire internet was actually 1 billion documents. There's a massive explosion of content inside businesses.

Speaker 9 So you have to build scalable systems and you couldn't build like a system like that before in the pre-cloud era.

Speaker 1 Perhaps no story captures the human impact of this AI moment and its potential better than what's happening in healthcare. Here's Shiv Rao, CEO and founder of Abridge.

Speaker 10 It's pretty heroic in general for a doctor to give you feedback like, hey, this sucked and you got to do better. Or like,

Speaker 10 you didn't recognize the way I said this medication. Or I'm a gastroenterologist and I would never, you know, sequence my problems in my assessment and plan section of my note this way.

Speaker 10 It doesn't serve me well and makes me look terrible as a doctor or whatever. We get that feedback.
We love it. It's oxygen.
But then we also get the feedback that's like, hey, this is amazing.

Speaker 10 And I'm not going to retire anymore. And I've got like years, decades left in my career now, thanks to this technology.

Speaker 10 But in this channel, love stories, all of that feedback, that positive feedback, we just get it like programmatically funneled.

Speaker 10 So any one of our people inside of the company can always go into that channel. And it's like purpose, you know, it's like fulfillment immediately.

Speaker 10 Like you immediately understand why we're all working so hard and why it makes sense.

Speaker 10 Because, like, being on this very telephone-pole-like journey these last couple years is obviously, like, it's new for so many of us.

Speaker 10 And we're all kind of building new muscles, but it's a lot of pressure. But this is my favorite bit of feedback.

Speaker 10 So, this love story comes from a doctor at Tanner Health, which is a rural health system.

Speaker 10 And she wrote to us, she wrote, I was sitting at dinner last week, and my son asked me, Mommy, why aren't you working right now?

Speaker 10 I literally took my phone out and explained to him that Abridge is a new tool that lets mommy come home early and eat dinner with her family.

Speaker 10 I started to tear up and looked over at my husband, who then said, Mommy's going to be able to eat dinner with us every night now. And we get feedback like that, like every day, you know?

Speaker 10 And so like there's, there's dopamine hits, you know, in hypergrowth. And like those are awesome.
But I think that they get us through like sprints. But I think it's the oxytocin hits like this.

Speaker 10 It's the purpose. It's the fulfillment.
It's like, that's, I think, what I think we're really after in this company. And so like everybody's mission driven out there, but I think this mission,

Speaker 10 like it hits me at least a little bit differently.

Speaker 1 These conversations remind us that we're living through a hinge moment in history. Stay tuned as we have more conversations with the builders and thinkers leading the way for the rest of the year.

Speaker 1 If you like what we're doing, leave us a review on Apple Podcasts or Spotify. Comment on YouTube or let us know who we should have as a guest.
Thanks for listening.

Speaker 1 Find us on Twitter at nopriors pod. Subscribe to our YouTube channel if you want to see our faces.

Speaker 1 Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week.
And sign up for emails or find transcripts for every episode at no-priors.com.