How Agentic AI is Transforming The Startup Landscape with Andrew Ng

Andrew Ng has always been at the bleeding edge of fast-evolving AI technologies, founding companies and projects like Google Brain, AI Fund, and DeepLearning.AI. So he knows better than anyone that founders who operate the same way in 2025 as they did in 2022 are doing it wrong. Sarah Guo and Elad Gil sit down with Andrew Ng, the godfather of the AI revolution, to discuss the rise of agentic AI, and how the technology has changed everything from what makes a successful founder to the value of small teams. They talk about where future capability growth may come from, the potential for models to bootstrap themselves, and why Andrew doesn’t like the term “vibe coding.” Also, Andrew makes the case for why everybody in an organization—not just the engineers—should learn to code.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @AndrewYNg

Chapters:

00:00 – Andrew Ng Introduction

00:32 – The Next Frontier for Capability Growth

01:29 – Andrew’s Definition of Agentic AI

02:44 – Obstacles to Building True Agents

06:09 – The Bleeding Edge of Agentic AI

08:12 – Will Models Bootstrap Themselves?

09:05 – Vibe Coding vs. AI Assisted Coding

09:56 – Is Vibe Coding Changing the Nature of Startups?

11:35 – Speeding Up Project Management

12:55 – The Evolution of the Successful Founder Profile

19:23 – Finding Great Product People

21:14 – Building for One User Profile vs. Many

22:47 – Requisites for Leaders and Teams in the AI Age

28:21 – The Value of Keeping Teams Small

32:13 – The Next Industry Transformations

34:04 – Future of Automation in Investing Firms and Incubators

37:39 – Technical People as First Time Founders

41:08 – Broad Impact of AI Over the Next 5 Years

41:49 – Conclusion


Runtime: 42m

Transcript

Speaker 1 Hi listeners, welcome back to No Priors. Today Elad and I are here with Andrew Ng.
Andrew is one of the godfathers of the AI revolution.

Speaker 1 He was the co-founder of Google Brain, Coursera, and the Venture Studio AI Fund. More recently, he coined the term agentic AI and joined the board of Amazon.

Speaker 1 Also, he was one of the very first people a decade ago to convince me that deep learning was the future. Welcome, Andrew.
Andrew, thank you so much for being with us.

Speaker 3 Always great to see you.

Speaker 1 I'm not sure where we should begin because you have such a broad view of these topics, but I feel like we should start with the biggest question, which is,

Speaker 1 you know, if you look forward at capability growth from here, where does it come from? Does it come from more scale? Does it come from data work?

Speaker 3 Multiple vectors of progress. So I think there is probably a little bit more juice to be squeezed out of the scalability lemon.

Speaker 3 So hopefully some more power is there, but it's getting really, really difficult.

Speaker 3 Society's perception of AI has been very skewed by the PR machinery of a handful of companies with amazing PR capabilities.

Speaker 3 And because that handful of companies drove the scaling narrative, people think of scale first as the vector of progress.

Speaker 3 But I think, you know, agentic workflows, the way we build multimodal models, we have a lot of work to build concrete applications.

Speaker 3 I think there are multiple vectors of progress, as well as wildcards, like brand new technologies, like diffusion models, which are used to generate images for the most part.

Speaker 3 Will that also work for generating text? I think that's exciting. So I think there'll be multiple ways of AI to make progress.

Speaker 1 You actually came up with the term agentic AI. What did you mean then?

Speaker 3 So when I decided to start talking about agentic AI, which wasn't a thing when I started to use the term, my team was slightly annoyed at me.

Speaker 3 One of my team members I won't name, he actually said, Andrew, the world does not need you to make up another term. But I decided to do it anyway.
And for whatever reason, it stuck.

Speaker 3 And the reason I started to talk about agentic AI was because

Speaker 3 like a couple of years ago, I saw people would spend a lot of time debating, is this an agent? Is this not an agent? What is an agent? And I thought there's a lot of good work.

Speaker 3 And there was a spectrum of degrees of agency, where there are highly autonomous agents that could plan, take multiple steps, use tools, do a lot of stuff by themselves.

Speaker 3 And then things with lower degrees of agency, where you would prompt an LLM to reflect on its output.

Speaker 3 And I felt like rather than debating is this an agent or not, let's just acknowledge the degrees of agency, say it's all agentic, and spend our time actually building this.

Speaker 3 So I started to push the term agentic AI.

Speaker 3 What I did not expect was that several months later, a bunch of marketers would get a hold of this term and use it as a sticker to stick on everything in sight.

Speaker 3 And so I think the term agentic AI really took off. I feel like the marketing hype has gone up insanely fast.

Speaker 3 But the real business progress has also been rapidly growing, but maybe not as fast as the marketing.
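To make the lower-agency end of that spectrum concrete, here is a minimal sketch of the reflection pattern Andrew describes: prompt a model, ask it to critique its own output, then revise. It is written in Python against a hypothetical call_llm helper; nothing here is a specific product's API.

```python
# Minimal reflection loop: the lower-agency end of the agentic spectrum.
# call_llm() is a hypothetical stand-in for whatever chat-completion API you use.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

def draft_and_reflect(task: str, rounds: int = 2) -> str:
    draft = call_llm(f"Complete this task:\n{task}")
    for _ in range(rounds):
        critique = call_llm(
            f"Task:\n{task}\n\nDraft answer:\n{draft}\n\n"
            "List concrete problems with the draft, or reply OK if there are none."
        )
        if critique.strip().upper().startswith("OK"):
            break
        draft = call_llm(
            f"Task:\n{task}\n\nDraft:\n{draft}\n\nCritique:\n{critique}\n\n"
            "Rewrite the draft, fixing the problems listed."
        )
    return draft
```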

Speaker 2 What do you think are the biggest obstacles right now to true agents actually being implemented as AI applications? Because to your point, I think we've been talking about it for a little while now.

Speaker 2 There are certain things that were missing initially that are now in place in terms of everything from certain forms of inference time compute on through to forms of memory and other things that allow you to maintain some sort of state against what you're doing.

Speaker 2 What do you view are the things that are still missing or need to get built or what will sort of foment progress on that end?

Speaker 3 I think at the technology component level, there's stuff that I hope will improve. For example, computer use kind of works, but often doesn't work.
I think the guardrails, the evals, are a huge problem.

Speaker 3 How do we quickly evaluate these things and drive evals? So I think the components, there's room for improvement.

Speaker 3 But what I see is the single biggest barrier to getting more agentic AI workflows implemented is actually talent.

Speaker 3 So when I look at the way many teams build agents, the single biggest differentiator that I see in the market is, does the team know how to drive a systematic error analysis process with evals?

Speaker 3 So you're building the agents by analyzing at any moment in time what's working, what's not working, what do you improve, as opposed to less experienced teams kind of try things in a more random way.

Speaker 3 This just takes a long time. And when I look at a huge range of businesses, small and large, it feels like there's so much work that can be automated through agentic workflows.

Speaker 3 But the talent and skills, and maybe the software tooling, I don't know, just isn't there to drive that disciplined engineering process to get this stuff built. How much of that engineering process could you imagine being automated with AI? You know, it turns out that a lot of this process of building agentic workflows requires ingesting external knowledge, which is often locked up in the heads of people. So until and unless we build, you know, AI avatars that can interview the employees doing the work, and better visual AI that can look at the computer monitor... I think maybe eventually, you know. But at least right now, for the next year or two, I think there's a lot of work for human engineers to do to build more agentic workflows.
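One minimal way to picture the systematic error analysis Andrew is describing: run the agent over a small labeled eval set, attribute each failure to the step where it went wrong, and fix the worst-offending step first. The run_agent helper and the trace format below are assumptions for illustration, not any particular framework's interface.

```python
# Sketch of an error-analysis loop for an agentic workflow (not a framework).
# run_agent() is assumed to return the final output plus a per-step trace.
from collections import Counter

def run_agent(case: dict) -> dict:
    """Hypothetical: returns {"output": ..., "trace": [(step_name, ok_bool), ...]}."""
    raise NotImplementedError

def evaluate(cases: list[dict]) -> None:
    failures = Counter()
    correct = 0
    for case in cases:
        result = run_agent(case)
        if result["output"] == case["expected"]:
            correct += 1
            continue
        # Attribute the failure to the first step that went wrong in the trace.
        bad_steps = [name for name, ok in result["trace"] if not ok]
        failures[bad_steps[0] if bad_steps else "unknown"] += 1
    print(f"accuracy: {correct}/{len(cases)}")
    for step, count in failures.most_common():
        print(f"  {count} failures attributed to step: {step}")
    # Improve the top-failing step (prompt, tool, or model), then re-run the evals.
```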

Speaker 2 So is that more the collection of data, feedback, et cetera, for certain loops that people are doing? Or is it other things? I'm sort of curious what that translates into tangibly.

Speaker 3 Yeah, so maybe one example.

Speaker 3 So I see a lot of workflows like, you know, maybe a customer emails you a document, you're going to convert the document to text, then maybe do a web search for some compliance reason to see if you're working with a vendor you're not supposed to, and then look over database records, see if the pricing's right, save it somewhere else, and so on.

Speaker 3 So multi-step agentic workflows, kind of next-gen robotic process automation. So you implement this and it doesn't work.
You know, is it a problem?

Speaker 3 If you got the invoice date wrong, is that a problem or not? Or have you routed a message to the wrong person for verification? So

Speaker 3 When you implement these things, you know, almost always it doesn't work the first time. But then you need to know what's important for your business process: is it okay that, I don't know, I bothered the CFO of the company too many times, or does the CFO not mind verifying some invoices?

Speaker 3 So all that external contextual knowledge, often, at least right now, I see thoughtful human product managers or human engineers having to just think through this and make these decisions.

Speaker 3 So can an AI agent do that someday? I don't know. Seems pretty difficult right now.
Maybe someday.

Speaker 1 But it's not in the internet pre-training data set and it's not in a manual that we can automatically extract.

Speaker 3 I feel like for a lot of work to be done building agentic workflows, that data set is proprietary.

Speaker 3 It's not general knowledge on the internet. So figuring that out, it's still exciting work to do.
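The multi-step workflow Andrew sketched earlier (read a document, run a compliance check, compare against records, route anything doubtful to a human) can be written down as a small pipeline. Every helper below is a hypothetical stub standing in for your own OCR, search, and database services, and the tolerance threshold is made up for illustration.

```python
# Sketch of the multi-step, "next-gen RPA" workflow described above.
# All helpers are hypothetical stubs; the point is the shape of the pipeline
# and the explicit human-verification branches.

def extract_text(document: bytes) -> str: ...
def vendor_blocklisted(vendor: str) -> bool: ...          # e.g. a compliance web search
def lookup_agreed_price(vendor: str, item: str) -> float: ...
def route_for_verification(invoice: dict, reason: str) -> None: ...
def save_record(invoice: dict) -> None: ...

def parse_invoice(text: str) -> dict:
    """Hypothetical LLM-backed extraction of vendor, item, price, and date."""
    ...

def process_invoice(document: bytes) -> None:
    invoice = parse_invoice(extract_text(document))

    if vendor_blocklisted(invoice["vendor"]):
        route_for_verification(invoice, "vendor failed compliance check")
        return

    agreed = lookup_agreed_price(invoice["vendor"], invoice["item"])
    if abs(invoice["price"] - agreed) > 0.01 * agreed:     # made-up 1% tolerance
        route_for_verification(invoice, "price differs from agreed rate")
        return

    save_record(invoice)
```

The business-specific judgments Andrew mentions, like how often it is acceptable to bother someone for verification, live in thresholds and routing rules like these, which is exactly the context that has to come from humans today.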

Speaker 1 If you just look at the spectrum of agentic AI, what's the strongest example of agency you've seen?

Speaker 3 I feel like, at the leading edge of agentic AI, I've been really impressed by some of the AI coding agents.

Speaker 3 So I think in terms of economic value, I feel like there are two very clear and very apparent buckets. One is answering people's questions.

Speaker 3 Probably, you know, OpenAI's ChatGPT seems to remain the leader of that, with

Speaker 3 real takeoff, liftoff velocity. The second massive bucket of economic value is coding agents, where, like, my personal favorite right now is Claude Code.

Speaker 3 Maybe it'll change at some point, but

Speaker 3 I just use it, love it. Highly autonomous in terms of planning out what to do to build the software, building a checklist, going through it one at a time.

Speaker 3 So this ability to plan a multi-step thing, execute the multiple steps of the plan is one of the most highly autonomous agents out there being used that actually works.

Speaker 3 There's other stuff that I think doesn't work, like some of the computer use stuff, like, you know, go shop for something for me and browse online.

Speaker 3 Some of those things are really nice demos, but not yet production.
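The behavior Andrew praises in coding agents, planning a checklist and then working through it one item at a time, is itself a simple control loop. A minimal sketch, again with a hypothetical call_llm helper and with no claim that this is how Claude Code is actually implemented:

```python
# Plan-then-execute loop: draft a checklist, then work through it item by item.
# call_llm() is hypothetical; this is an illustration, not Claude Code's internals.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

def plan_and_execute(goal: str) -> list[str]:
    plan = call_llm(f"Break this goal into a short numbered checklist:\n{goal}")
    steps = [line.strip() for line in plan.splitlines() if line.strip()]
    results = []
    for i, step in enumerate(steps, start=1):
        done_so_far = "\n".join(results) or "(nothing yet)"
        results.append(call_llm(
            f"Goal: {goal}\n\nCompleted so far:\n{done_so_far}\n\n"
            f"Now carry out step {i}: {step}\nReport exactly what you did."
        ))
    return results
```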

Speaker 2 Do you think that's because of some sort of criteria in terms of what needs to be done and more variability around actions, or do you think there's a better training set or set of outputs for coding?

Speaker 2 I'm sort of curious, like, why does one work so well or almost feels magical at times and the others are, you know, really struggling as use cases so far?

Speaker 3 I think, you know, engineers are

Speaker 3 really good at getting all sorts of stuff to work, but

Speaker 3 the economic value of coding is just clear and apparent and massive.

Speaker 3 So I think the sheer amount of resources dedicated to this has led to a lot of smart people, for whom they themselves are the user, so they also have good instincts for the product, building really amazing coding agents.

Speaker 3 And then I think, I don't know.

Speaker 1 So you don't think it's a fundamental research challenge? You think it's capitalism at work plus domain knowledge in a lab? Oh, I think capitalism is great at solving fundamental research problems. Yeah.

Speaker 2 At what point do you think models will effectively be bootstrapping themselves, in terms of, you know, 99% of the code of a model being written by agentic coding agents, or the error analysis?

Speaker 3 I feel like we're,

Speaker 3 I sort of suspect we're slowly getting there. So some of the leading foundation model companies are clearly, well, they've said publicly, they're using AI to write a lot of code.

Speaker 3 One thing I find exciting is

Speaker 3 AI models using agentic workflows to generate data for the next generation of models.

Speaker 3 So I think the Llama research paper talked about this, where an older version of Llama would be used to think for a long time to generate puzzles that you then train the next generation of the model to solve really quickly, without needing to think as long.

Speaker 3 So I find that exciting too.

Speaker 3 Yeah, multiple vectors of progress. It feels like there's not just one way for AI to make progress.
There's so many smart people pushing forward in so many different ways.
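The Llama-style idea Andrew mentions, letting the current model think for a long time to generate problems and solutions that the next model is trained to answer quickly, reduces to a small data-generation loop. This sketch assumes a hypothetical slow_model call and a downstream fine-tuning step that are not specified here.

```python
# Sketch: using the current model generation to create training data for the next,
# per the description above. slow_model() is a hypothetical, high-thinking-budget call.
import json

def slow_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for the current model run with a large reasoning budget")

def generate_training_pairs(topics: list[str], path: str) -> None:
    with open(path, "w") as f:
        for topic in topics:
            puzzle = slow_model(f"Invent a hard, self-contained problem about {topic}.")
            solution = slow_model(f"Solve this problem, showing full reasoning:\n{puzzle}")
            # The next-generation model is later fine-tuned to map puzzle -> solution
            # directly, so it answers quickly without the long thinking step.
            f.write(json.dumps({"prompt": puzzle, "completion": solution}) + "\n")
```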

Speaker 1 I think you have rejected the term vibe coding in favor of AI-assisted coding. Like, what's the difference?

Speaker 3 You know,

Speaker 1 I'm assuming you do the latter. You're not vibing.

Speaker 3 Yeah. Vibe coding leads people to think, you know, like, I'm just going to go with the vibes and accept all the changes that Cursor has suggested or whatever.

Speaker 3 And it's fine that sometimes you could do that and it works. But I wish it was that easy.

Speaker 3 So when I'm coding for a day or for an afternoon, I'm not just going with the vibes. It's a deeply intellectual exercise, and I think the term vibe coding makes people think it's easier than it is. So frankly, after a day of using AI-assisted coding, I'm exhausted mentally, right? So I think of it as rapid engineering, where AI is letting us build serious systems, build products, much faster than ever before. But it is, you know, engineering, just done really rapidly. Do you think that's changing the nature of startups? How many people you need, how you build things, how you approach things? Or do you think it's still the same old kind of approach, but you just have people that get more leverage because they have these tools now?

Speaker 3 So, you know, at AI Fund, we build startups, and it's really exciting to see how rapid engineering, AI-assisted coding

Speaker 3 is changing the way we build startups. So there's so many things that, you know, would have taken a team of six engineers, like three months to build.

Speaker 3 That now, today, one of my friends or I can just build in a weekend. And the fascinating thing I'm seeing is

Speaker 3 if we think about building a startup, the core loop of what we do, right? I want to build a product that users love.

Speaker 3 So the core iteration loop is: write software, that's the software engineering work, and then the product managers maybe go do user testing, look at it, go by gut, whatever, to decide how to improve the product.

Speaker 3 So when we go look at this loop, the speed of coding is accelerating, the cost is falling. And so increasingly, the bottleneck is actually product management.

Speaker 3 So the product management bottleneck is: now we can build what we want much faster, but the bottleneck is deciding what we actually want to build.

Speaker 3 And previously, if it took you, say, three weeks to build a prototype, if you need a week to get user feedback, it's fine.

Speaker 3 But if you can now build a prototype in a day, then boy, if you have to wait a week for user feedback, that's really painful. So I find my teams, frankly, increasingly relying on gut because

Speaker 3 we go and collect a lot of data that informs our very human mental model, our brain's mental model of what the user wants.

Speaker 3 And then we often, you know, have to have deep customer empathies and just make product decisions like that, right? Really, really fast in order to drive progress.

Speaker 2 Have you seen anything that actually automates some aspects of that? I know that there have been

Speaker 2 some versions of things where people, for example, are trying to generate market research by having a series of bots kind of react in real time.

Speaker 2 And that almost forms your market or your user base as a simulated environment of users. Have you seen any tool like that work or take off? Or do you think that's coming?

Speaker 2 Or do you think that's too hard to do?

Speaker 3 Yeah, so there's a bunch of tools to try to speed up product management.

Speaker 3 I feel like,

Speaker 3 well, the recent Figma IPO is one great example of design and AI, and Dylan did a great job.

Speaker 3 Then there are these tools that are trying to use AI to help interview prospective users. And as you say, we looked at some of the scientific papers on using a flock of AI agents to simulate

Speaker 3 your group of users, and how to calibrate that. It all feels promising and early and hopefully wildly exciting in the future.

Speaker 3 I don't think those tools are accelerating product managers nearly as much as coding tools are accelerating software engineers.

Speaker 3 So this does create more of the bottleneck on the product management side.

Speaker 1 That does make sense to me. My partner Mike has this idea, which I think is broadly applicable in a couple different ways, that computers can now interrogate humans at scale.

Speaker 1 And so there's companies like Listen Labs working on this for like consumer research type tasks, right? But you could also use it to

Speaker 1 understand tasks for training, or for

Speaker 1 the data collection piece that you described. When you think about your teams that are in this iteration loop, has like the founder profile that makes sense changed over time?

Speaker 3 To me, there are so many things that the world used to do in 2022 that just do not work in 2025. So in fact, often

Speaker 3 I ask myself, is there anything we're doing today that we were also doing in 2022?

Speaker 3 And if so, let's take a look and see if it still even makes sense today, because a lot of stuff, a lot of workflows from 2022, don't make sense today. So today, the technology is moving so fast.

Speaker 3 Founders that are on top of gen AI technology, that is, you know, tech-oriented product leaders, I think are much more likely to succeed than someone that maybe is more business-oriented, more business-savvy, but doesn't have a good feel for where AI is going.

Speaker 3 I think unless you have a good feel for what the technology can and cannot do, it's really difficult to think about strategy, or where to lead the company. We believe this too.
Yeah, cool. Yeah, yeah, yeah.

Speaker 2 I think that's like old school Silicon Valley even.

Speaker 2 Like if you, if you look at Gates or Steve Jobs slash Wozniak or a lot of the really early pioneers of the semiconductor, computer, early internet era, they were all highly technical.

Speaker 2 And so I almost feel like we kind of lost that for a little bit of time. And now it's very clear that you need technical leaders for technology companies.

Speaker 3 I think we used to think, oh, you know, they've had one exit before, so, or two exits even, so let's just back that founder again.

Speaker 3 But I think if that founder has stayed on top of AI, then that's fantastic.

Speaker 3 But if not... And I think part of it is, in moments of technological disruption, with AI rapidly changing, that's the rare knowledge. So actually, take mobile technology.

Speaker 3 You know, like everyone kind of knows what a mobile phone can and cannot do, right? What a mobile app is, GPS, all that. Everyone kind of knows that.

Speaker 3 So you don't need to be very technical to have a gut for, can I build a mobile app for that? But AI is changing so rapidly. What could you do with voice? What can agentic workflows do?

Speaker 3 How rapidly are foundation models improving? What can a reasoning model do? So having that knowledge is a much bigger differentiator than knowing what a mobile app can do was for building a mobile app.

Speaker 2 It's an interesting point because when I look at the biggest mobile apps, they were all started by engineers. So WhatsApp was started by an engineer.
Instagram was started by an engineer.

Speaker 2 I think Travis at Uber was technical-ish.

Speaker 1 Technically adjacent.

Speaker 2 Technically adjacent. Instacart's Apoorva was an engineer at Amazon.

Speaker 3 Yeah, and Travis had the insight that... GPS enabled a new thing.
But so you had to be one of the people that saw GPS on mobile coming early to go and do that.

Speaker 1 Yeah, you have to be like really aware of the capabilities.

Speaker 2 Yeah,

Speaker 2 the technology. Yeah, it's super interesting.
What other characteristics do you think are common? I know people have been talking about, for example,

Speaker 2 it almost felt like there was an era where being hardworking was kind of poo-pooed. Or do you think founders have to work hard?

Speaker 3 Do you mean, do people who succeed work hard?

Speaker 2 I'm just sort of curious, like aggression, hours worked, like what else may or may not correlate in your mind?

Speaker 3 You know, I work very hard. There are periods in my life where...
you know, I encourage others that want to have a great career, have an impact, to work hard.

Speaker 3 But even now, I feel a little bit nervous saying that, because in some parts of society it's considered not politically correct to say that working hard probably correlates with your personal success. I think it's just a reality. Yeah, I know that not everyone at every point in their life is in a time where they can work hard. You know, when my kids were first born, that week I did not work very hard. It was fine, right? So, acknowledging that not everyone is in circumstances to work hard, it's just a factual reality that people that work hard accomplish a lot more.

Speaker 3 But of course, you need to respect people that aren't in a phase where they...

Speaker 1 Yeah, I'd say something maybe a little less correct, which is

Speaker 1 less politically correct, which is like, I think there was an era where people thought like there was a statement that startups are for everyone. And I do not believe that's true.
Right.

Speaker 1 I think like, you know, you're trying to do a very unreasonable thing, which is like create a lot of value impacting people very quickly.

Speaker 1 And when you're trying to do an unreasonable thing, you probably have to work pretty hard. Right.
And so I think people, I think that got very,

Speaker 1 the sort of work ethic required to like move the needle in the world very quickly disappeared.

Speaker 3 Yeah.

Speaker 3 There's a quote, I forget who said it...

Speaker 3 I wish I remembered who said this, but isn't it that the only people who change the world are the ones crazy enough to think they can?

Speaker 3 I think it does take someone with the boldness, the decisiveness, to go and say: you know what, this is the state of the world, and I'm going to take a shot at changing it.

Speaker 3 And it's only people with that conviction

Speaker 3 that I think can do this.

Speaker 2 It strikes me as being true in any endeavor. You know, I used to work as a biologist and I think it's true in biology.
I think it's true in technology.

Speaker 2 I think it's true in almost every field that I've seen: it's the people who work really hard who do very well.

Speaker 2 And then in startups, at least, the thing I tended to forget for a while was just how much competitiveness mattered, people who really wanted to compete and win.

Speaker 2 And sometimes people come across as really low-key, but they still have that drive and that urge. And they want to be the ones who are the winners.
And so I think that matters.

Speaker 2 And similarly, that was kind of put aside for a little bit, at least

Speaker 2 from a societal perspective relative to companies.

Speaker 3 Actually, I feel like I've seen two types. One is they really want their business to win.
That's fine. Some do great.
The other type, they really want their customers to win.

Speaker 3 And they're so obsessed with serving the customer that that works out.

Speaker 3 I used to say, in the early days of Coursera, you know, yes, I knew about competition, blah, blah, blah.

Speaker 3 But I was really obsessed with, you know, learners, with the customers, and that drove a lot of my behaviors.

Speaker 2 No, that's a really good framework.

Speaker 2 And when I say competition, I don't mean necessarily with other companies, but it's almost like with whatever metric you set for yourself or whatever thing you want to win at or be the best at.

Speaker 3 What I found is in a software environment, you just got to make so many decisions every day. You just have to go by gut a lot of the time, right?

Speaker 3 I feel like, you know, building a startup feels more like playing tennis than solving calculus problems. Like you just don't have the time to think, just make a decision.
And I feel like,

Speaker 3 so this is why people that obsess day and night with the customer, with the company, think really deeply and have that conceptual knowledge that when someone says, do I ship product feature A or feature B, like you just got to know a lot of the time, not always.

Speaker 3 And it turns out there are so many, to use Jeff Bezos's term, two-way doors in startups, because frankly, you know, you have very little to lose. So just make a decision.

Speaker 3 If it's wrong, change it a week later. It's fine.

Speaker 3 So I find that to be really decisive and move really fast, you need to have obsessed, usually about the customer, maybe the technology, to have that depth of knowledge to make really rapid decisions and still be right most of the time.

Speaker 2 How do you think about that bottleneck in terms of product management that you mentioned or people who have good product instincts?

Speaker 2 Because I was talking to one of the best known sort of tech public company CEOs and his view was that in all of Silicon Valley or in all of tech kind of globally, there's probably a few hundred at most great product people.

Speaker 2 Do you think that's true? Or do you think there's a broader swath of people who are very capable at it? And then how do you find those people?

Speaker 2 Because I think that's actually a very rare skill set in terms of the people who are, you know, just like there's a 10x engineer, there's 10x product insights, it feels.

Speaker 3 Boy, that's a great question.

Speaker 3 I feel it's got to be more than a few hundred great product people. Maybe just as I think there are way more than a few hundred great AI people.

Speaker 3 Well, I think there are. But one thing I find very difficult is that user empathy, or that customer empathy, because, you know, to form a model of the user or the customer, there are so many sources of data.

Speaker 3 You know, you run surveys, you talk to a handful of people, you read market reports, you look at people's behavior on other parallel or competing apps or whatever. But there's so many sources of data.

Speaker 3 But to take all this data and then get out of your own head to form a mental model of what your, maybe, ideal customer profile, or some user you want to serve, would think and do, so you can very quickly make decisions to serve them better, that human empathy is hard. One of my failures, one of the things I did not do well at an early phase of my career: for some dumb reason, I tried to make a bunch of engineers product managers. I gave them product manager training, and I found that I just foolishly made a bunch of really good engineers feel bad for not being good product managers, right?

Speaker 3 But I found that one correlate for whether someone would have good product instincts is that very high human empathy where you can synthesize lots of signals to really put yourself in the person's shoes to then very rapidly make product decisions on how to serve them.

Speaker 1 You know, going back to coding assistants, it's really interesting. I think it is like reasonably well known that the

Speaker 1 Cursor team, like they make their decisions actually very instinctively, versus spending a lot of time talking to users.

Speaker 1 And I think that makes sense if you are the user and then like your mental model of yourself and what you want is actually applicable to a lot of people. And similarly,

Speaker 1 I think

Speaker 1 these things change all the time, but I don't think Claude Code incorporates, despite its scale of usage, feedback data today

Speaker 1 from a training loop perspective. And I think that surprises people because it is really just what do we think the product should be at this stage.

Speaker 3 So it turns out one advantage that startups have is

Speaker 3 while you're early, you can serve kind of one user profile.

Speaker 3 Today, if you're, I don't know, like Google, right?

Speaker 3 Google serves such a diverse set of user personas, you really have to think about a lot of different user personas, and that adds complexity as the product changes.

Speaker 3 But when you're a startup trying to get your initial wedge in the market, you know, if you pick even one human that is representative enough of a broad set of users, and you just build a product for that one user, or one ideal customer profile, one hypothetical person, then you can actually go quite far.

Speaker 3 And I think that for some of these businesses, be it Cursor or Claude Code or something, if they have internally a mental picture of a user that's close enough to a very large set of prospective users,

Speaker 3 then you actually go really far that way.

Speaker 1 The other thing that I've observed, and I'm curious if you guys see this in some of our companies, is just like the floor is lava, right? The ground is changing in terms of capability all the time.

Speaker 1 And the competition is also very fierce in the categories that are already obviously important and have multiple players.

Speaker 1 So leaders who were really effective in companies a generation ago are not necessarily that effective when recruited

Speaker 1 to these companies as they're scaling, because of the

Speaker 1 velocity of operation, or the pace of change. It's interesting to hear you say, I'm looking at what I'm doing today versus in 2022 and asking, is that still right?

Speaker 1 Versus if you're an engineering leader or a go-to-market leader and you've built your career being really great at how that's done, that may not be applicable anymore.

Speaker 3 I think it's a challenge for a lot of people. I know many great leaders in lots of different functions still doing things the way they were in 2022.
And I think

Speaker 3 it's just got to change. When new technology comes... I mean, you know, once upon a time, there was no such thing as web search.

Speaker 3 Would you hire anyone for any role that doesn't know how to search the web?

Speaker 3 And I think we're well past the point that, for a lot of job roles, if you can't use LLMs in an effective way, you're just much less effective than someone that can. And it turns out everyone on my team at AI Fund knows how to code; everyone has a GitHub account. And I see, for a lot of my team members, you know, when my, I don't know, assistant general counsel or my CFO or my front desk operator, when they learn how to code, they're not software engineers, but they do their job function better, because by learning the language of computers they can now tell a computer more precisely what they want it to do for them, and the computer does it for them. And this makes them more effective at their job function. I think the rapid pace of change is disconcerting to a lot of people, but I guess, I don't know, I feel like when the world is moving at this pace, we just have to change at the pace of the world.

Speaker 2 Yeah, I've seen that, to your point, show up in hires, particularly around product, or product and design. So one sort of later-stage AI company I'm involved with, they were doing a search for somebody to run product and somebody to run design.

Speaker 2 And in both cases, they selected for people who really understood how to use some of the vibe coding slash AI-assisted coding tools. Because, to your point, you can prototype something so rapidly, and if you can't even just mock it up really quickly to show what it could look like or feel like or do in a very simple way, you're wasting an enormous amount of time talking and writing up the product requirements document and everything else. And so I do think there's a shift in terms of how you even think about what processes you use to develop a product, or even pitch it, right? Like, what should you show up with to a meeting when you're talking about a product? The whole thing

Speaker 3 completely changed. Yeah, no, you should have a prototype in some cases. Actually, just to give an example: I was recently interviewing engineers for a role, and interviewed someone with about 10 years of experience, you know, full stack, very good resume. I also interviewed a fresh college grad. But the difference was the person with 10 years of experience had not used AI tools much at all; the fresh college grad had. And my assessment was the fresh college grad who knew AI would be much more productive, and I decided to hire them instead. It turned out to be a great decision. Now, the flip side of this is the best engineers I work with today are not fresh college grads. They're people with, you know, 10, 15 or more years of experience, but they're also really on top of AI tools.

Speaker 3 And those engineers are just completely in a class of their own.

Speaker 3 So I feel like I actually think software engineering is a harbinger of what will happen in other disciplines because the tools are most advanced in software engineering.

Speaker 2 It's interesting. One company that I guess both of us are involved with is called Harvey.
And I led their Series B. And when I did that, I called a bunch of their customers.

Speaker 2 And the thing that was most interesting to me about some of those customer calls was, because the legal is notorious as being a tough profession for adopting new technology, right?

Speaker 2 There aren't a dozen great legal software companies. Those customers that I called, which were big law firms or people who were, you know, quite far along in terms of adopting Harvey,

Speaker 2 they all thought this was the future. They all thought that AI was really going to matter for their vertical.

Speaker 2 And the main thing they would raise is questions like, in a world where this is ubiquitous, suddenly instead of hiring 100 associates, I only hire 10.

Speaker 2 And how do I think about future partners and who to promote if I don't have a big pool? And so I thought that mindset shift was really interesting.

Speaker 2 And to your point, I feel like it's percolating into all these markets or industries and it's sort of slowly happening.

Speaker 2 But as industry by industry, people are starting to rethink aspects of their business in really interesting ways. And it'll take a decade, two decades for this transformation to happen.

Speaker 2 But it's compelling to kind of see how the earliest-adopting verticals are the ones where people were thinking deepest about it.

Speaker 3 That should be really interesting. I think, yeah, we also have a legal startup, Callus AI, that AI Fund helped build.
It's doing very well as well.

Speaker 3 I think

Speaker 3 the nature of work in the future would be very interesting. So I feel like a lot of teams wound up outsourcing a lot of work, partly because of the costs.

Speaker 3 But with AI and AI assistance, part of me wonders, is a really small, really skilled team

Speaker 3 with lots of AI tools, is that going to outperform a much larger

Speaker 3 and maybe lower-cost team? It may or may not.

Speaker 1 And they have less coordination cost. Yeah.

Speaker 3 So actually, some of the most productive teams, you know, that I'm a part of now are some of the smallest teams:

Speaker 3 very small teams of really good engineers with lots of AI enablement and very low coordination costs because we're all together in person. So we'll see how the world evolves.

Speaker 3 Too early to make a call, but you can see where I'm maybe thinking the world may or may not be headed.

Speaker 1 I work with several teams now,

Speaker 1 one of which is called OpenEvidence and has pretty good penetration, like 50% of doctors in the US now, where it's an explicit objective in the company to try to stay as small as possible as they grow impact.

Speaker 1 And, you know, we'll see where these companies land because, you know, there's lots of functions that need to grow in a company over time. But that certainly wasn't an objective for like.

Speaker 3 I've heard that objective a lot.

Speaker 2 I've actually heard that objective a lot in the 2010s. And there's a bunch of companies that I actually think underhired

pretty dramatically, or stayed profitable and would brag about being profitable, but growth wasn't as strong as it could be. So I actually feel like that's a trap.

Speaker 1 How would you calibrate this?

Speaker 3 Yeah.

Speaker 2 It's basically really,

Speaker 2 it's almost are you being

Speaker 2 lackadaisical or too accepting of the progress that your company's making? Because it's going just fine. It could be going much better, but it's still going great on a relative basis.

Speaker 2 And so you're like, oh, I'll keep the team small. I'll be super lean.
I won't spend any money. Look at me, how profitable I am.
And sometimes it's amazing, right?

Speaker 2 Capital efficiency is great, but sometimes you're actually missing the opportunity. You're not going as fast as you can.

Speaker 2 And usually, I think what happens is, in the early stage of a startup's life, you're competing with other startups. And if you're way ahead, it feels great.

Speaker 2 But eventually, if there are incumbents in your market, they come in.

Speaker 2 And the faster you capture the market and move up market, the less time you give them to sort of realize what's going on and catch on.

Speaker 2 And so often five, six, seven years in the life of a startup, you're actually competing with incumbents suddenly and they just kill you with distribution or other things.

Speaker 2 And so I think people really missed the mark. And you could argue that was kind of Slack versus Teams.

Speaker 2 There's a few companies I won't name, but I feel like they were so proud of their profitability and they kind of blew up. I guess on the design side, that was Sketch, right?

Speaker 3 Remember?

Speaker 2 Bohemian Coding, yeah. You know, they were based in the Netherlands, they were super happy, they were profitable, they were doing great, and then the Figma wave kind of came. Do you think your companies stay this small?

Speaker 1 Do you think your teams stay this small? Do I think my teams stay this small?

Speaker 1 What do you mean? In terms of just efficiency, like, can you actually get to, you know, affect millions and billions of people with 10, 50, 100-person teams? I think teams can definitely be smaller now than they used to be. But are we over-investing or under-investing? And then also, I think, to your point,

Speaker 3 the analysis of market dynamics, right? If

Speaker 3 there's like a winner-take-all market, then the incentive is just to go. Yeah, you've got to move.

Speaker 2 Minecraft, I think, when it sold to Microsoft was how many people, like five people or something?

Speaker 2 And it sold for a few billion dollars and it was massively used. I think people forget all these examples, right? It's just this, oh, suddenly you can do things really lean.

Speaker 2 You could always do things lean before. The real question is: how much leverage did you have in headcount? How did you distribute? What did you actually need to invest money behind?

Speaker 2 And then I would almost argue that one of the reasons small teams are so efficient with AI is because small teams are efficient in general. You didn't hire 30 extra crufty people who get in the way.

Speaker 2 And I think often people do that. If you look at the big tech companies, for example, right now, many, not all of them, but many of them could probably shrink by 70% and be more effective.

Speaker 2 And so I do think people also forget the fact that A, there's AI efficiency, B, there's sort of

Speaker 2 high-value human capital being arbitraged into markets that normally wouldn't have it. Legal is a good example. Great engineers didn't want to work in legal. Now they do because of things like Harvey, and
Great engineers didn't want to work in legal. Now they do because of things like Harvey and

Speaker 2 healthcare, which again, suddenly you have these great people showing up. But I think also the other part of it is just small teams tend to be more effective.

Speaker 2 And AI gives you other reasons to keep teams small and performant, which I think is kind of under-discussed.

Speaker 3 I feel like one of the

Speaker 3 reasons why that AI instinct is so important. I remember one week I had two conversations with two different team members.

Speaker 3 One person came to me to say, hey, Andrew, I'm going to do this. Can you give me some more headcount to do this? I said no.

Speaker 3 Later that week, I think independently, someone else, very similar, said, hey, Andrew, can you give me some budget to hire AI to do this?

Speaker 3 I said yes. And so the realization is, you hire AI, not a lot more humans, for this.
You've just got to have those instincts. Yeah, that's very interesting.

Speaker 1 If you think of what's happening in software engineering as the harbinger for the next industry transformations: you spend a lot of time investing at the application level, or building things there.

Speaker 1 What do you think is next? Or what do you want to be next?

Speaker 3 I feel like there's a lot at the tooling level. I'd actually prefer a ranked list

Speaker 3 for all the investing in this stuff. You know,

Speaker 3 that's actually one thing I find really interesting, which is, there are economists doing all these studies on what jobs are, you know, at highest risk of AI disruption.

Speaker 3 I think you're skeptical. I actually look at them sometimes for inspiration for where we should find ideas to build projects.
One of my friends, Erik Brynjolfsson, right,

Speaker 3 he and his company Workhelix, which we're involved with. Very insightful on the nature of the work.
Yeah, I like him. Yeah, good.
Good.

Speaker 3 So I find talking to them sometimes useful. Although, actually, one of the lessons I've learned is, in lieu of top-down market analysis, I think with AI we're in a target-rich environment; there are just so many ideas that no one's worked on yet because the technology is so new. So one thing I've learned is, at AI Fund we have an obsession with speed. All my life I've always had an obsession with speed, but now we have tools to go even faster than we could before. And so one of the lessons I've learned is we really like concrete ideas.

Speaker 3 So if someone says, I did a market analysis, AI will transform healthcare. That's true, but I don't know what to do with that.

Speaker 3 But if someone, a subject matter expert or an engineer, comes and says, I have an idea, look at this part of healthcare operations and drive efficiency and all this, then we go, okay, great.

Speaker 3 That's a concrete idea. I don't know if it's a good idea or a bad idea, but it's concrete.
At least we could very efficiently figure out do customers want this?

Speaker 3 Is this technically feasible? And get going. So I find that at AI Fund,

Speaker 3 when we're trying to decide what to build, we spin up a long list of ideas to try to select a small number that we want to go forward on.

Speaker 3 We don't like looking at ideas that are not concrete.

Speaker 1 What do you think investing firms or incubation studios like yours will not do two years from now? Like not do manually, sorry?

Speaker 3 I think a lot could be automated, but the question is, what are the tasks we should be automating? So for example, you know, we don't make follow-on decisions that often, right?

Speaker 3 Because our portfolio is some dozens of companies. So do we need to fully automate that? Probably not, because when we really look at it, it's very hard to automate.

Speaker 3 I feel like doing deep research on individual companies, and competitive research, that seems ripe for automation. So

Speaker 3 I don't know. I personally use, whether it's OpenAI's deep research or other deep research types of tools, a lot, to just do at least the cursory market research things.

Speaker 3 LP reporting, that is a massive amount of paperwork that maybe we could simplify.

Speaker 1 Yeah, I'm taking the strategy of general avoidance.

Speaker 1 Besides, you know, basic compliance.

Speaker 1 You know, one of my partners, Bella, she worked at Bridgewater before, where they had an internal effort to take a chunk of capital and then try to disrupt what Bridgewater was doing with AI. And, you know, macro investing, it's a very different style. But I think it probably gives us some indications. The human judgment piece of our business, I think, is not obvious: like, does an entrepreneur have the qualities that we're looking for, when, you know, your resume on paper, or your GitHub, or what minor work history you have when you're a new grad, is not very indicative.

Speaker 1 And so people have other ideas of doing this. I know investors that are like, you know, looking at recordings of meetings with entrepreneurs and seeing if they can get some signal out of like

Speaker 1 communication style, for example. But I think that part is very hard.
I do think you can be like programmatic about looking at materials, for example, and like ranking

Speaker 1 quality of teams overall.

Speaker 3 There's actually one thing. I feel like our AI models are getting really intelligent, but in the places where humans still have a huge advantage over AI,

Speaker 3 it is often because the human has additional context that, for whatever reason, the AI model can't get at. And it could be things like...

Speaker 3 meeting the founder and sussing out their, you know, just how they are as a person and the leadership qualities, the communication or whatever.

Speaker 3 And those things, maybe reviewing video, maybe eventually we can get that context in the AI model.

Speaker 3 But I find that for all these things, as humans, you know, we do a background reference check, and someone makes an offhand comment that we catch, and that affects a decision. How does the AI model get this information, especially when, you know, a friend will talk to me but they don't really talk to my AI model? So I find that there are a lot of these tasks where humans have a huge information advantage still, because we've not figured out the plumbing or whatever is needed to get that information to the AI model.

Speaker 1 The other thing I think is very durable is things that rely on a relationship advantage, right?

Speaker 1 If I'm convincing somebody to work at one of my companies and they worked at a previous company and they trust me because of it or whatever reason, like, you know, all the information in the world about why this is a good opportunity isn't the same thing as me being like, Sally, you got to do this.

Speaker 1 It's going to work. It remains to be seen whether or not company building is actually that correlated with investment returns.
But I do think that that side of it feels harder to fully automate.

Speaker 3 Yeah, yeah. No, yeah.
I think, like, trust, because people know you, and, you know, people do trust you.
I trust you, right? Because you can say so many things, but it's very easy to lose trust.

Speaker 3 So that makes sense.

Speaker 3 Yeah. Actually, one thing I'd love to get your take on is,

Speaker 3 we increasingly see

Speaker 3 highly technical people trying to be first-time founders. You know, how do you set up the processes to set first-time founders up to learn all the hard lessons and all the craziness needed to be a successful founder?

Speaker 3 I spend a lot of time thinking through that: how to set up founders for success when they have 80% of the skills needed to be really great, but there's just a little bit more that we can help them with.

Speaker 1 That's a very manual process.

Speaker 2 I don't sweat it. You don't sweat it.
I just view it as like a mix of peer groups.

Speaker 2 Like, can you surround people with other people who are either similar or one or two steps ahead of them on the founding journey? And then the second thing is complementary hire.

Speaker 2 I think in general, one of my big learnings is, I feel like early in careers, people try to complement or try to build out the skills that they don't have.

Speaker 2 In late in careers, they lean into what they're really good at and then they hire people to do the rest.

Speaker 2 And so if the company's working, I think you just hire people. Like, Bill Gates would famously talk about how his COO was always the person he'd learn the most from.

Speaker 2 And then once he had a certain level of scale, he'd hire his next COO. I see.
And so I think about it through that lens for founders.

Speaker 3 Yeah. Complementary hires make sense.

Speaker 2 But I think the best way to learn something is to do it. And so that, therefore, just go, you know, you'll screw it up.
It's fine. As long as it's not existential to the business, who cares?

Speaker 2 So I tend to be very lackadaisical.

Speaker 1 I probably think too many things are existential for companies.

Speaker 2 Yeah, it's something. It's like, do you have customers and are you building product?

Speaker 3 Most of them, yeah. Are you building a product that users love, right?

Speaker 3 And of course, go-to-market is important, and all that is important, but you solve for the product first, then usually sometimes you can figure out the rest too.

Speaker 2 I agree with that most of the time, but not always. Yeah, I think there's lots of counterexamples, but yeah, I generally agree with you.

Speaker 3 No, yeah, sometimes you can build a sucky product and have a sales channel you can force it through, but I'd rather not; that's not my default. I don't want that either.

Speaker 3 It does work.

Speaker 2 There's a lot of really bad technology that goes on these big companies right now.

Speaker 1 Okay, if you have these,

Speaker 1 you know, first-time, very technical founders with gaps in their knowledge or skill set

Speaker 1 being like the core profile of folks you're backing, again, do you augment them somehow? Like, what helps them when they begin?

Speaker 3 I think a lot of things.

Speaker 3 Essentially, one thing I realized is that, you know, at venture firms, venture studios, we do so many reps that we just see a lot of things that even repeat founders have only done once or twice in their life.

Speaker 3 So I find that when my firm sits alongside the founders and shares our instincts on, you know, how do we get customer feedback faster? Are you really on top of the latest technology trends?

Speaker 3 How do you just speed things up?

Speaker 3 How do you fundraise? Most people don't fundraise that much in their lives, right? Most founders just do it a handful of times.

Speaker 3 That helps even very good founders with things that because of what we do, we've had more reps at. And then I think

Speaker 3 hiring others around them, peer group. I know these are things that you guys do.

Speaker 3 I think there's a lot we could do. It turns out even the best founders need help.

Speaker 3 So hopefully, you know, VCs, venture studios, can provide that to great founders.

Speaker 1 Elad's wiser about this than I am. I mean, I can't help myself but want to specifically try to upskill founders on a few things they have to be able to do, like recruiting, right? But

Speaker 1 I would agree that the higher leverage path is absolutely like you can put people around yourself to do this and to learn it on the job. Last question for you.

Speaker 1 What do you believe about the broad impact of AI over the next five years that you think most people don't?

Speaker 3 I think many people will be much more empowered and much more capable in a few years than they are today.

Speaker 3 And the capability of individuals, of those that embrace AI, will probably be far greater than most people realize.

Speaker 3 Two years ago, who would have realized that software engineers would be as productive as they are today when they embrace AI?

Speaker 3 I think in the future, people in all sorts of job functions, and also for personal tasks,

Speaker 3 people who embrace AI will just be so much more powerful and so much more capable than they probably even imagine.

Speaker 3 Awesome.

Speaker 1 Thanks, Andrew. Thanks for joining us.

Speaker 3 Thanks a lot. Thanks, Aaron.

Speaker 1 Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces.
Follow the show on Apple Podcasts, Spotify, or wherever you listen.

Speaker 1 That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.