First Time Founders with Ed Elson – This Physicist Is Building AI Droids

Ed speaks with Matan Grinberg, co-founder and CEO of Factory, an AI company focused on bringing autonomy to software engineering. They discuss the long-term future of AI, the role of regulation, and whether or not he’s concerned about an AI bubble.
Learn more about your ad choices. Visit podcastchoices.com/adchoices


Runtime: 58m

Transcript

Speaker 1 Support for the show comes from Saks Fifth Avenue.

Speaker 2 Saks Fifth Avenue makes it easy to holiday your way.

Speaker 2 Whether it's finding the right gift or the right outfit, Saks is where you can find everything from a stunning David Yurman bracelet for her or a sleek pair of Ferragamo loafers to wear to a fancy holiday dinner.

Speaker 2 And if you don't know where to start, Saks.com is customized to your personal style so you can save time shopping and spend more time just enjoying the holidays.

Speaker 2 Make shopping fun and easy this season, and find gifts and inspiration to suit your holiday style at Saks Fifth Avenue.

Speaker 3 So give them the power to make standout content with Adobe Express.

Speaker 3 Brand kits make following design rules a breeze, and Adobe quality templates make it easy to create pro-looking flyers, social posts, presentations, and more.

Speaker 3 You don't have to be a designer to edit campaigns, resize ads, and translate content. Anyone can in a click.
And collaboration tools put feedback right where you need it.

Speaker 3 See how you can turn your team into a content machine with Adobe Express, the quick and easy app to create on-brand content. Learn more at adobe.com/express/business.

Speaker 4 What do walking 10,000 steps every day, eating five servings of fruits and veggies, and getting eight hours of sleep have in common?

Speaker 5 They're all healthy choices. But do all healthier choices really pay off?

Speaker 4 With prescription plans from CVS CareMark, they do. Their plan designs give your members more choice, which gives your members more ways to get on, stay on, and manage their meds.

Speaker 4 And that helps your business control your costs because healthier members are better for business. Go to cmk.co/access to learn more about helping your members stay adherent.

Speaker 4 That's cmk.co/access.

Speaker 7 Welcome to First Time Founders. I'm Ed Elson.
$7.5 billion.

Speaker 7 That is how much money has poured into AI coding startups in just the past three months. And it's not that hard to see why.

Speaker 7 Across the industry, developers are embracing generative AI to speed up their work. It's efficient, it's impressive, but it's still under the careful watch of human engineers.

Speaker 7 Well, my next guest wondered if AI could do more. What if it could handle routine tasks like debugging or migrations on its own? What if it could be autonomous?

Speaker 7 To turn that idea into reality, he launched an AI startup, which uses agents to handle the mundane work that developers would rather skip.

Speaker 7 With a $50 million investment from Sequoia, JP Morgan, and Nvidia, his company is reshaping the future of software development.

Speaker 7 This is my conversation with Matan Grinberg, co-founder and CEO of Factory. All right, Matan Grinberg, thank you for joining me.

Speaker 8 Thank you for having me. How are you?

Speaker 7 I'm good. We should probably start off by saying

Speaker 7 we go way back.

Speaker 8 We do indeed. We're friends from college.

Speaker 7 I knew you back in college. I knew you when you were studying physics.
You were a budding physicist.

Speaker 7 I mean, just for those listening, Matan was basically the smartest guy I knew

Speaker 7 in college. And then you go on and you're, I know you were getting your PhD in physics.
And then eventually you tell me,

Speaker 7 no, I'm actually starting an AI company. And now here you are and you're running one of these top AI agentic startups,

Speaker 7 figuring out how to automate coding.

Speaker 7 Let's just start with like, how do we get here? How do we go from Princeton, physics, going to be a physicist? And then now you're an AI person.

Speaker 8 Yeah. So obviously that was not the arc that I think I was expecting either.

Speaker 8 Probably goes back to eighth grade, which is why I got into physics in the first place.

Speaker 8 Spite is a very big motivator for me. And in eighth grade, my geometry teacher told me to retake geometry in high school.
And I was like, screw that. Like, what? Like, I'm good at math.

Speaker 8 I don't need to do that.

Speaker 8 And so in the summer between eighth and ninth grade, my first order on Amazon ever was textbooks for algebra two, trigonometry, pre-calculus, calculus one, two, three, differential equations.

Speaker 8 A true nerd.

Speaker 7 Yeah, exactly.

Speaker 8 And so I spent the whole summer studying those textbooks.

Speaker 8 And going into freshman year of high school, I took an exam to pass out of every single one of those classes. So I had credit for all of them.

Speaker 8 And then I went to my dad and I was like, what's the hardest math? And

Speaker 8 he said, string theory, which is actually physics. It's not math.
And I was like, okay, I'm going to be a string theorist.

Speaker 8 And then basically for the next like 10 years of my life, that was all I really cared about.

Speaker 8 I didn't really pay attention much to anything about like finance, entrepreneurship, like anything like that.

Speaker 8 Went to Princeton because it was great for physics, then did a master's in the UK, came to Berkeley to do the PhD.

Speaker 8 And at Berkeley, it finally dawned on me: wait a minute, I was just studying for 10 years, like 10-dimensional black holes and quantum field theory and all this stuff.

Speaker 8 Originally, because of this like spite, and obviously I came to love it, but I realized that I didn't really want to spend my entire life doing that.

Speaker 8 Taking 10 years to realize that is a little bit slow, but

Speaker 8 I had a bit of an existential crisis of, you know, like, what is it? What should I do?

Speaker 8 Almost joined Jane Street in a classic, like, ex-physicist move. Like, what should I do? Decided not to because I feel like that's the thing.

Speaker 8 Like, you know, once you go there, you kind of don't move on from that. So ended up taking some classes at Berkeley in AI.
Really fell in love in particular with what was called program synthesis.

Speaker 8 Now they call it code generation.

Speaker 8 And the math from physics made it such that like jumping into the AI side was relatively straightforward.

Speaker 8 Did that for about a year and then realized that the best way to pursue code generation was not through academic research, but through starting a company.

Speaker 8 And so then the question was like, okay, well, I know nothing about entrepreneurship. I've been a physicist for 10 years.
What should I do?

Speaker 8 And

Speaker 8 this was

Speaker 8 just after COVID, but I remember on YouTube in my recommended algorithm, I saw a podcast on Zoom with this guy whose name I remembered from a paper that I wrote at Princeton.

Speaker 8 This guy used to be a string theorist, but it was a podcast and it was like

Speaker 8 a Sequoia investor talking about, you know, everything from, like, crypto to physics. And I was like, what the hell is this?

Speaker 8 And I remember watching the interview, and the guy seemed relatively normal, like had social skills, which is rare for someone who had published in string theory.

Speaker 7 The other interesting thing about you is you're kind of a social person who's also this physics genius, which again is quite rare.

Speaker 7 So you found someone in common.

Speaker 8 Yeah. So, I found someone, and I was like, okay, you know, maybe there was someone else who has this similar background.

Speaker 8 And I remembered the name correctly.

Speaker 8 And so I looked him up and saw that he was a string theorist who ended up, you know, getting his degree, then joining Google Ventures, being one of the first checks into Stripe, then one of the first checks into like SpaceX.

Speaker 8 On the way, he had built and sold a company for a billion dollars to Palo Alto Networks. And I was just like, this is an insane trajectory.
So sent him a cold email.

Speaker 8 And I was just like, hey, I'm Matan. I studied physics at Princeton, wrote a paper with this guy named Juan Maldacena, who's like a very famous string theorist.

Speaker 8 And I was like, would love to talk.

Speaker 8 And

Speaker 8 that day, he immediately replied and was like, hey, come down to our office in Menlo Park. Let's chat.
What was supposed to be a 30-minute meeting ends up being a three-hour walk.

Speaker 8 And we walked from Sand Hill all the way to Stanford campus and then back. And funny enough, on the...

Speaker 8 On the walk, so we realized that we had a lot of like very similar reasons for getting into physics in the first place, similar reasons for wanting to leave as well.

Speaker 8 And this was in April of 2023. So just after the Silicon Valley Bank crisis and also very soon after the Elon Twitter acquisition.
And after the conversation,

Speaker 8 he was basically like, Matan, you should 100% drop out of your PhD and you should either join Twitter right now, because if you voluntarily go to Twitter of all times now, that's just badass.

Speaker 8 It looks great, you know, on a resume, or you should start a company. And I knew what the answer was, but didn't want to like corrupt what was an incredible meeting.

Speaker 8 So I was like, okay, thank you so much. I'm going to go think about it.

Speaker 7 That's good advice for meetings: don't give your answer right away.

Speaker 8 Yeah.

Speaker 7 Take some time, come back.

Speaker 8 Yeah.

Speaker 8 And so crazy thing, the next day I go to a hackathon in San Francisco.

Speaker 8 In this hackathon, I run into Eno. We recognized each other at this hackathon.
We're like, oh, hey, like, you know, I remember you.

Speaker 8 We ended up chatting and realizing that we were both really obsessed with coding for AI.
And then that day we started working on what would become Factory. He had a job at the time.

Speaker 8 I was a PhD student, so I could spend whatever time I wanted on it.

Speaker 8 And over the next 48 hours, we built the demo for what would become Factory.

Speaker 8 Called up Shaun and I was like, hey, I was thinking about what you said. I have a demo I want to show you.
And so we got on a Zoom. I showed it to him.
He was like, this is all right.

Speaker 8 And I was like, all right. Like, I think this is pretty sick.
Like, I don't know. And he's like, okay, would you work on it full time? And I was like, yeah, 100%.

Speaker 8 And he was like, okay, drop out of your PhD and send me a screenshot.

Speaker 8 And I was just like, fuck it. Okay.
So I go to the, like, Berkeley portal and fully unenroll and withdraw.

Speaker 8 Didn't tell my parents, obviously.

Speaker 8 Send him a screenshot. And he's like, okay, you have a meeting with the Sequoia Partnership tomorrow morning.
Like be ready to present. Wow.

Speaker 7 So now backed by Sequoia, you just raised your Series B.

Speaker 7 You are one of the top AI coding startups, but there are a lot of AI coding companies. We spoke with one a while ago, which was Codeium, which eventually became Windsurf.

Speaker 7 It got folded into Google in this kind of controversial situation. Point being, there are people who are doing this.

Speaker 7 What makes Factory different? What made it different from the get-go and what makes it different now?

Speaker 8 Our mission from when we first started is actually the exact same as it is today, which is to bring autonomy to software engineering.

Speaker 8 I think when we first started in April of 2023, we were very early.

Speaker 8 And what I've come to realize is that, and this is kind of a little bit of a trite statement, but being early is the same as being wrong.

Speaker 8 And we were wrong early on in that the foundation models were not good enough to fully have autonomous software development agents.

Speaker 8 And so in the early days, I think the important things that we were doing were building out an absolutely killer team, which we do have.

Speaker 8 And everyone that we started with is still here, which has been incredible. And having a deeper sense of how developers are going to adopt these tools.
So that was kind of in the early days.

Speaker 8 And I think something that we learned that still to this day, I don't really see any other companies focus on is the fact that coding is not the most important part of software development.

Speaker 8 In fact, as a company gets larger and as the number of engineers in a company grows, the amount of time that any given engineer spends on coding goes down because there's all this organizational molasses of like needing to do documentation and design reviews and meetings and approvals and code review and testing.

Speaker 8 And so the stuff that developers actually enjoy doing, namely the coding, is actually what you get to spend less time on.

Speaker 8 And then these companies emerge, saying, hey, we're going to automate that one little thing that you sometimes get to do that you enjoy. You don't get to do that anymore.

Speaker 8 So your life as a developer is just going to be reviewing code or documenting code, which it just, I think, really misses the mark on what developers in the enterprise actually care about.

Speaker 8 And I think the reason why this happens is because a lot of these companies have, like in their composition,

Speaker 8 the best engineers in the world graduating from, you know, the greatest schools and they join startups. And at a startup, if you're an engineer, all you do is code.

Speaker 8 And so there's kind of this mismatch in terms of empathy of what the life of a developer is.

Speaker 8 Because if you're a developer at one of these hot startups, yes, coding, speed that up, great. But if you're a developer at some 50,000 engineer org, coding is not your bottleneck.

Speaker 8 Your bottlenecks are all these other things. And with us focusing on that full end-to-end spectrum of software development, we end up kind of hitting more closely to what developers actually want.

Speaker 7 Microsoft, I know, I think Satya Nadella said something like 30% of code at Microsoft is being written by AI right now.

Speaker 8 I think Zuckerberg said that he's shooting for, I think, half of the code at Meta to be written by AI.

Speaker 7 You're basically saying what software developers want is not for someone to be doing the creative part, but they want someone or an agent or an AI to be doing the boring drudge work.

Speaker 7 What does that drudge work actually look like? You said sort of reviewing code, documenting code. In what sense is Factory

Speaker 7 addressing that issue?

Speaker 8 Even the idea of, like, 30% of code being AI-written, I think, is a very non-trivial metric to calculate, because if you have AI generate, like, 10 lines, and you manually go adjust two of them, do those two count as AI-generated or not?

Speaker 8 So there's some gray there.

Speaker 7 You think that they're kind of just throwing numbers out there a little bit?

Speaker 8 It's just very hard, it's hard to calculate. And so even if you were trying to be as rigorous as possible, I don't know how you come up with a very concrete number there.

Speaker 8 But regardless, I think that directionally it's correct that the number of lines of code that's AI generated is strictly increasing.

Speaker 8 The way the factory helps, so I guess generally like a software development lifecycle, very high level looks like.

Speaker 8 First understanding, right? So you're trying to figure out what is the like lay of the land of our current code base, let's say, or our current product.

Speaker 8 Then you're going to have some planning of whether it's like a migration that we want to do or a feature or some customer issue that we want to fix.

Speaker 8 Then you're going to plan it out, create some design doc. You're going to go and implement it.
So you're going to write the code for it.

Speaker 8 Then you're going to generate some tests to verify that it is passing some criteria that you have. There's going to be some human review.
So they're going to check to make sure that this looks good.

Speaker 8 And then you might update your documentation and then you kind of push it into production and monitor to make sure that it doesn't break.

Speaker 8 In an enterprise, all of those steps take a really, really long time because there's, you know, the larger your org, if it's 30 years old, there are all these different interdependencies.

Speaker 8 And like, imagine you're a bank and you want to ship a new feature to your like mobile app. There are so many different interdependencies that any given change will affect.

Speaker 8 So then you need to have meetings and you need to have

Speaker 8 approvals from this person and this person needs to find the subject matter expert for this part of the code base and it ends up taking months and months.

Speaker 8 And so where Factory helps is a lot of the things that don't seem like the highest leverage are what they spend a lot of time on.

Speaker 8 So like that testing part or the review process or the documentation or even the initial understanding.

Speaker 8 I cannot tell you how many customers of ours have a situation where there was like one expert who's been there for 30 years who just retired.

Speaker 8 And so now there's like literally no one who understands a certain part of their code base.

Speaker 8 And so getting some new engineer to go in and do that, there's no documentation.

Speaker 8 So now that engineer has to spend six months writing out docs for this like legacy code base, which is, you know, engineers spend years of their lives becoming experts.

Speaker 8 The highest leverage use of their time is not writing documentation on existing parts of the code base.

Speaker 8 In this world where, like, an org has Factory fully deployed, that engineer can just send off an agent. Our agents are called droids.
So they send off a droid to go and generate those docs,

Speaker 8 ask it questions, get the insight as if it was a subject matter expert that's been there for 20 years so that he can go and say, okay, here's how we're going to design a solution.

Speaker 8 Here's how we're going to fix whatever issue is at hand.

Speaker 7 These droids, your agents that you call droids, I think one of the big differentiators that I've seen is that they are fully autonomous. They're doing basically everything on their own.

Speaker 7 In contrast to something like Copilot, which is by definition working alongside you to help you figure things out, you guys are saying, no, these things can be completely on their own, totally autonomous.

Speaker 7 Literally, you've got robots just doing the work for you. Why is that the way to go with AI?

Speaker 8 At a high level, so this is true for code, but I would also say for knowledge work more broadly. But for code in particular, we're going from a world where developers wrote 100% of their code to a world where developers will eventually write 0% of it. And we're basically changing the primitive of software development from, like, writing lines of code, writing functions, writing files, to the new primitive being a delegation, like delegating a task to an agent. And so the new kind of

Speaker 8 important thing to consider is, you know, you can delegate a task, but if it's very poorly scoped, the agent will probably not satisfy whatever criteria you had in your head.

Speaker 8 And so if this new primitive is delegation, your job as a developer is to get good at: how can I very clearly define what success looks like, what I need to get done, what testing it should do, like

Speaker 8 what our organization's contributing guidelines are, let's say.

Speaker 8 And so with this as the new primitive, the job of the developer is now: okay, if I set up the right guidelines and I tell this agent to go, it now has the information it needs to succeed on its own.

Speaker 8 And this is very similar to like human engineer onboarding. Like when you onboard a human engineer into your organization, what do you do? You don't just throw them into the code base.

Speaker 8 You'll say, hey, here's what we've built so far. Here's how we build things going forward.
Here's our process for deciding on what features to build. Here's our coding standards.

Speaker 8 So you have like a long onboarding process. Then you give them a laptop.
So they can actually go and write the code and test it and run it and mess around with it before they actually submit it.

Speaker 8 And so we need to do similar things with agents where we give them this thorough onboarding process.

Speaker 8 You give it an environment where it can actually test the code and mess around with the code to see if it's working.

Speaker 8 And having that laptop, now it has this like autonomous loop that it can go through where it tries out some code, runs it. Oh, that failed.
Let me go iterate based on that.

Speaker 8 Now, we don't have, like, fully autonomous droids, but the point is that, giving people access to this, they can set up droids to fully generate all of their docs for them.

Speaker 8 So now as an engineer, that's just something you don't need to worry about because that's not the highest leverage use of your time.

Speaker 8 Thinking about instead this behavior change towards delegation, that's like the kind of biggest thing that we work with enterprises on.

Speaker 7 I think delegation is the right word, but it's also kind of a scary word because delegation

Speaker 7 implies, I mean, the way that we work today, you delegate to other people

Speaker 7 whose jobs are to do all of the things that you're describing. There are some companies that say AI is going to be your partner and work alongside you.

Speaker 7 You're saying, actually, no, this is just going to do the work, i.e.,

Speaker 7 it would replace people. And this is obviously a big debate in AI, the automation debate.
What happens to the four and a half million software engineers?

Speaker 7 What is your viewpoint on this automation debate and the idea that AI is going to take your job?

Speaker 8 At a high level, I will say AI will not replace human engineers. Human engineers who know how to use AI will replace human engineers who don't.

Speaker 8 And I think the reason AI will not replace human engineers is because basically

Speaker 8 there's like a bar for how big a problem needs to be in order for it to like be economically viable for someone to implement a software solution to it.

Speaker 8 And suppose it used to be a billion dollars and then slowly it's gone down to $100 million or $10 million.

Speaker 8 Like, these are like the TAMs of the problems that make it economically worthwhile to build up a team of software engineers to work on a problem. What AI does is it lowers that bar.

Speaker 8 So now, in a world where before you could only economically viably solve a problem that's worth $10 million, now maybe it's $100,000.

Speaker 8 Now maybe it's like large enterprises can actually make a lot of custom software for any given customer of theirs. It means that the leverage of each software developer goes up.

Speaker 8 It does not mean that the number of software engineers goes down. It would mean that if there was only one company in the world that had access to AI.

Speaker 8 Because then they have access to AI, they can use AI while no one else does. And now they have way more leverage.
So they can beat their competitors while having fewer humans.

Speaker 8 But the reality now is, if there are two companies and they're competing, one has a thousand engineers, the other has a thousand engineers, they both get AI.

Speaker 8 So now they have the equivalent output of 100,000 engineers. They're not going to start firing engineers because now one company is going to be way more productive than the other.

Speaker 8 They'll deliver a better product, better solution, lower cost to their customers, and then they're going to succeed.

Speaker 8 So then this other company is going to be incentivized to have more engineers, right? Yes.

Speaker 8 So I think that's one side of it. I think the other is like

Speaker 8 we have

Speaker 8 really bad predictions about what we can do with these tools. Because right now, like, humanity has only seen what loosely 100,000 software engineers working together can build.

Speaker 8 That might be like, let's say, the cloud providers. Those are some of like the largest engineering orgs, something that took, let's say, 100,000 engineers to build.

Speaker 8 We don't even know what the equivalent of 10 million human software engineers could build.

Speaker 8 Like we can't even conceive of like what software is so intricate and complicated that it would take that many engineers to build. And I kind of refuse to believe that 100,000 is the limit.

Speaker 8 That there's no interesting software after that.

Speaker 7 Yeah, I'm really glad you brought up the point of the danger here is

Speaker 7 that one company would own all of the AI. Like the problem isn't value creation.

Speaker 7 I mean, what we're describing is technology bringing the costs down and therefore creating more incentives to build, more value creation,

Speaker 7 which can only be a good thing unless it is in some way hijacked and you don't have a system of capitalism where companies are really competing with each other and forcing each other to iterate.

Speaker 7 And also that includes many different players who can participate in that value creation. And when I look at the AI space right now.

Speaker 7 Just as an example, when we interviewed and spoke with what is now Windsurf,

Speaker 7 and I asked the founders this question of like, how do you compete with big tech?

Speaker 7 And they explained how they're going to do it and how they're going to take big tech on. And then what do you know, Google buys them.

Speaker 7 And I look at the same thing with, like, Scale AI, which was, you know, one of the biggest AI startups. Alexandr Wang was this incredible thought leader.
And then what do you know?

Speaker 7 He gets, I mean, they get an investment, which turns into he's now an employee at Meta. And now he's building, you know, like Meta social media AI.

Speaker 7 And it all seems as though AI is being kind of overridden or taken over by big tech through these investments, which then turn into these sort of acqui-hires.

Speaker 7 And it does make me concerned that all of the power and all of the value is accruing into one place. And it's the same players that we've had over the past 20 years.
So how do you think about that?

Speaker 7 Do you think about this possibility that maybe big tech comes in and says, we need your software, we need your people, we're going to acquire you.

Speaker 7 And do you worry about that concentration of power in AI?

Speaker 8 I think it's a very, like, top-of-mind thing for people, even from the investor side: is it going to be the incumbents that win, or will it be, you know, insurgents, or however you want to put it, the startups that can come and kind of claw their way into, like, surviving without acquisition?

Speaker 8 I think the answer is always founder and company dependent. Like I think some examples that come to mind are like Airbnb and Uber.

Speaker 8 These are companies where there wasn't a very obvious gap in the market such that anyone could start a company like Airbnb or Uber and just succeed.

Speaker 8 Like I think it took a lot of very intentional and very relentless work in the face of tons of adversity to actually make those companies viable and successful.

Speaker 8 And I think

Speaker 8 in a lot of these cases, it is the choice of the founders or the companies to either continue or proceed to joining

Speaker 8 big tech. And I think at the end of the day, it really does depend on how relentless are you willing to be to actually fight that fight? Because I think both of those acquisitions were optional.

Speaker 8 Like I don't think they were like back against the wall, had no other choice.

Speaker 8 I think it was like, for whatever reason, and I don't know the exact details of either of these situations, but it was like, you know what, based on the journey so far, let's elect to do this.

Speaker 7 Presumably because they were offered so much money. I mean, when I look at Meta hiring all of these AI geniuses, and I assume this is probably a concern for Factory and many other AI startups.

Speaker 7 What if Meta just hires our people? And I wonder if it's because these companies are so dominant, they have so much money that they're like, here, here's a billion dollars.

Speaker 7 And then it's hard to say no to it.

Speaker 8 Totally. Yeah.
But I think if you, like, went back in time and you offered, let's say, Travis Kalanick a lot of money, he'd say no.
Yeah. Because he was, he was relentless.

Speaker 8 Like that was the mission. And I think similarly at Factory, we are super focused on people that are very mission driven.
If you want to make a ridiculous amount of money, you can go to Meta.

Speaker 8 You can go to one of those places. The people who have joined our team have chosen this mission with this team in particular

Speaker 8 because of that reason. And I think that's what it takes ultimately at the end of the day, because we do not want to be acquired.

Speaker 8 We do not want to be part of big tech, because I think they don't have the tools to solve the problem in the way that we want to solve it.

Speaker 7 Yeah, it sounds like what AI needs in order for there to be, like, real competition is you need a founder who wants to go to bat and who wants to fight, essentially, who doesn't want to get, I guess, in bed with big tech. But I mean, one of the big themes that we've been seeing with AI recently is, of course, this circular financing stuff, where these companies are investing and then the money comes back to them when they buy their products, and

Speaker 7 it's hard to see the competition actually happening when you see everyone kind of collaborating with each other

Speaker 7 How do you think about that? And how do people in Silicon Valley, I mean, you're very tapped into Silicon Valley, Sequoia, one of the top firms, one of your investors.

Speaker 7 How do people view that in Silicon Valley right now? And are they concerned about it?

Speaker 8 People definitely make a lot of jokes about

Speaker 8 circular investing and that sort of thing. I mean, on one hand, I get it because there is a lot of

Speaker 8 interdependency of all these companies and there is a lot that they can do together, which I think on one hand is a good thing.

Speaker 8 On the other hand, it's a little bit

Speaker 8 inflationary to some like valuations or like revenue numbers or these types of things. I think on the net, AI will be so productive that it won't matter that much.

Speaker 8 But short term, it is a little bit like eyebrow raising, I guess.

Speaker 8 But at the end of the day, it's like, if you're, let's say, a foundation model company, you need to get the direct deal with NVIDIA because you want the GPUs.

Speaker 8 So you kind of, it's just one of those things that you kind of have to do.

Speaker 8 And I don't, I guess I'm not sure what an alternative would look like in a dynamic where you have four or five foundation model companies who are, let's ignore Google because they can make their own stuff, but who are really competing over the GPUs in order to make the next best models.

Speaker 5 We'll be right back.

Speaker 9 Adobe Acrobat Studio, so brand new. Show me all the things PDFs can do.
Do your work with ease and speed. PDF spaces is all you need.

Speaker 10 Do hours of research in an instant.

Speaker 9 Key insights from an AI assistant.

Speaker 10 Pick a template with a click. Now your preso looks super slick.
Close that deal, yeah, you won. Do that, doing that, did that, done.

Speaker 11 Now you can do that, do that, with Acrobat.

Speaker 10 Now you can do that, do that with the all-new Acrobat. It's time to do your best work with the all-new Adobe Acrobat Studio.

Speaker 12 Fifth Third Bank's commercial payments are fast and efficient. But they're not just fast and efficient.
They're also powered by the latest in payments technology built to evolve with your business.

Speaker 12 Fifth Third Bank has the Big Bank Muscle to handle payments for businesses of any size.

Speaker 12 But they also have the FinTech Hustle that got them named one of America's most innovative companies by Fortune magazine. Big Bank Muscle, FinTech Hustle.

Speaker 12 That's how Fifth Third brings you the best of all worlds, going above and beyond the expected, handling over $17 trillion in payments each year with zero friction.

Speaker 12 They've been doing it that way for 167 years, but they also never stop looking to the future to take their commercial payments two steps ahead of tomorrow, constantly evolving to suit the ever-changing needs of your business.

Speaker 12 That's what being a Fifth Third Better is all about. It's about not being just one thing, but many things for our customers.
Big Bank Muscle, FinTech Hustle.

Speaker 12 That's your commercial payments, a Fifth Third Better.

Speaker 13 Support for this show comes from Shopify. Creating a successful business means you have to be on top of a lot of things.

Speaker 13 You have to have a product that people want to buy, focused brand, and savvy marketing.

Speaker 13 But an often overlooked element in all of this is actually the business behind the business, the one that makes selling things easy. For lots of companies, that business is Shopify.

Speaker 13 According to their data, Shopify can help you boost conversions up to 50% with their Shop Pay feature.

Speaker 13 That basically means less people abandoning their online shopping carts and more people going through with a sale.

Speaker 13 If you want to grow your business, your commerce platform should be built to sell wherever your customers are. Online, in store, in their feed and everywhere in between.

Speaker 13 Businesses that sell, sell more with Shopify. Upgrade your business and get the same checkout Mattel uses.
Sign up for your $1 per month trial period at shopify.com/voxbusiness.

Speaker 5 All lowercase.

Speaker 13 Go to shopify.com/voxbusiness to upgrade your selling today. shopify.com/voxbusiness.

Speaker 7 We're back with First Time Founders. In terms of AI legislation, there seems to be a lot of debate right now on how do you regulate AI.

Speaker 7 And California is trying to be a leader in regulating.

Speaker 7 What are your views on AI regulation?

Speaker 7 Are people going over the top trying to regulate? Is it warranted? How do you think about that?

Speaker 8 Maybe just to draw some parallels.

Speaker 8 In my mind, I view things like climate regulation, nuclear regulation, and AI regulation to be similar in that they are global and local regulation doesn't really matter.

Speaker 8 Like, for example, pick any one of those three.

Speaker 8 If you make rules that, in California, you can't have a gas car, or you can't build, like, nuclear weapons, or, in the extreme, you can't build AI, that doesn't really matter, because that says nothing about the rest of the world.

Speaker 8 And if the rest of the world does it, it affects what happens in California for climate, for nuclear, for AI. And so I think for AI in particular,

Speaker 8 the regulation that is interesting is less at the state level. Like, I honestly think it doesn't matter regulating AI state by state, at least at the macro level.

Speaker 8 Maybe it's like in terms of usage for like interpersonal things, sure. But in terms of like training models,

Speaker 8 the relevant stage there in my mind is the global stage. And how does it affect like US regulation versus European regulation versus China, let's say? From what I've seen thus far,

Speaker 8 the

Speaker 8 time spent on like state regulation is kind of wasted, at least as it relates to foundation models.

Speaker 7 I think there is a concern probably in Silicon Valley that everyone's so afraid of AI.

Speaker 7 I mean, I've seen these surveys that, you know, I think more than half of Americans are more worried about AI than they are excited.

Speaker 7 I guess that's something to philosophically tackle on your end because you're building it.

Speaker 7 But then I would imagine that in Silicon Valley, there's this feeling of everyone's just too scared because they've watched all these movies and they've watched the Terminator. And so these people are

Speaker 7 getting too, too worried about it to the point that we're regulating in a way that actually doesn't make sense.

Speaker 8 It's pretty interesting. I think two things come to mind.

Speaker 8 So, one, there's the classic phenomenon of, you know, you're a startup, you want no regulation, then you become big, then suddenly you want regulation. Yes.

Speaker 8 And we've seen that happen with, I think, basically every foundation model company, which is always a shame to see.

Speaker 8 And then the second, and this is more just like a comment on the Silicon Valley and some of the culture there. I know so many people who work at the foundation model labs who don't have savings.

Speaker 8 Like they just do not believe in like putting any money in their 401k. They like spend it all because of this like vision of like something's coming.

Speaker 8 Wow. Yeah.
Which is very weird.

Speaker 8 But then there are equally as many who work at these labs who are like, you know, these guys kind of drank too much of the Kool-Aid.

Speaker 8 It's really important to have these conversations and think about these things because

Speaker 8 I think it's actually, it reminds me a lot of thinking about like in theoretical physics, like thinking about the Big Bang and like black holes in the universe.

Speaker 8 The first time you think about it, it's kind of like scary existential crisis. What is everything? If we're in such a large universe, nothing has meaning, whatever.

Speaker 8 I think thinking about AI, like getting exponentially better, kind of leads to similar like existential questions.

Speaker 8 Like, what are we, like, what value do humans have if there's going to be something that's smarter than any one of us?

Speaker 8 And then you have the maturity of like, wait, intelligence is not why humans have value. That's not the source of intrinsic value.
We don't think someone's more valuable because they're smarter.

Speaker 8 So having these conversations and thought processes is, I think, very important for both people working in AI and people who aren't.

Speaker 8 But yeah, there's some pretty weird people who kind of like are really, really in the bubble, inundated in it, and who kind of get these interesting world views of like, you know, the singularity is coming.

Speaker 8 So I want to spend everything that I have now. Yet at the same time, if they think it's not going to be good, they remain working on it.

Speaker 7 So these AI engineers who are not saving any money, they're doing it because they think like the end of the world is coming or because they think that there's going to be some transformative event that will make them really rich.

Speaker 7 Like, is it more of a doomer perspective or a...?

Speaker 8 It's a pretty big mix. Like, some people think we will just be in a world where we're, like, post-economic and, like, money will be irrelevant.

Speaker 8 And, like, for anyone, there's some, like, base level, whether it's, like, some UBI-type thing. Or some have, like, the doomer perspective. It's pretty, it's pretty bizarre.

Speaker 7 It sounds irrational to me.

Speaker 8 Yes, I would agree. Okay, you'd agree.
Yes. Yeah.

Speaker 7 And I think that it brings up an interesting thing in AI, which is there's this incredibly transformative once-in-a-generation technology that has come along.

Speaker 7 And it causes humans when that happens to act strangely. Yes.

Speaker 7 That behavior, not saving while you're building AI because you think that it's going to mean some event that

Speaker 7 could either end the world or

Speaker 7 dismantle the system. Maybe they're on to something.
To me, it seems irrational. And I also think it...

Speaker 7 it says something about the potential of a bubble that is emerging, that a lot of people in the last few weeks have been getting more and more concerned about, and that more and more people seem to believe.

Speaker 7 I mean, you know, I think Sam Altman himself said the word bubble. There have been other tech leaders who are saying that.

Speaker 7 As someone who is building in this space, how do you think about that?

Speaker 7 Does it concern you or is it something that you're not too worried about?

Speaker 8 Obviously, just to be a responsible CEO, I need to have priors that there is some chance that something like that happens in like the broader economy where, you know, there's some corrections. Yeah.

Speaker 8 My priors are very low in particular because like the ground truth utilization of GPUs is just like fully, fully saturated.

Speaker 8 Now, it would be one thing if we were building out all these data centers for, like, the dream of, okay, we're going to saturate this compute someday, but, like, we are doing that today.

Speaker 8 And it's like people are still hungry for more of that compute. Now, I think there's a good argument that a lot of compute is subsidized.

Speaker 8 So, like, NVIDIA might subsidize the foundational model companies. The foundational model companies subsidize companies like us and maybe give us discounts on their inference.

Speaker 8 And we might subsidize new growth users. And there's a little bit of that, and I think that's the part where there's a concern of, like, actually drawing a similar comparison to Uber.

Speaker 8 I don't know if you remember when Uber first came out, rides were super cheap because it was very much subsidized.

Speaker 7 VCs were paying for us.

Speaker 8 VCs and so the LPs, so all the like pension funds were basically subsidizing people's Ubers in a very indirect way.

Speaker 8 And like people kind of, you know, sometimes can make, make jokes about that, even as it relates to LLMs.

Speaker 8 The reason I'm less concerned is that the ROI is just so massive.

Speaker 8 And, like, the productivity gains from, in particular, coding: it's like, the fact that we have built Factory with basically less than 20 engineers, that is something that, pre-AI, we just would not have been able to do.

Speaker 8 And so I think the leverage that people are getting is what makes me less concerned. And also the speed of adoption.

Speaker 8 Like I think even some of these enormous enterprises that we're speaking with, they missed like mobile by like five years. Wow.

Speaker 8 But for AI, they are on it because they know if we have 50,000 engineers, we need to get them AI tools for engineering because of how existential it is.

Speaker 7 If there is a correction, and the way I see it is there will be a correction that won't wipe out AI like some people seem to think, but it'll be similar to the internet.

Speaker 7 There's a correction, valuations come down, there is some pain, and then long term, you will see massive adoption and massive value creation. That's just my perspective.

Speaker 8 Say there is a correction.

Speaker 7 Who wins in that scenario? Like what happens to OpenAI? What happens to startups like yourself? Like who are going to be the winners and losers in that scenario where we do see some sort of pullback?

Speaker 8 So one core principle is Jensen always wins. So

Speaker 8 for the last few years, Jensen's going to stay winning. So that's, I think,

Speaker 8 not going to change.

Speaker 7 And why do you say that? Because he's just at the very base of the value chain. Yes.

Speaker 8 Yes. Yes.
And at the end of the day, like all of these circular deals, they all come back to NVIDIA. Anytime anyone announces, hey, we're doing like free inference, that's free.

Speaker 8 But someone's paying Jensen at the end of the day.

Speaker 8 So I think that's kind of one baseline there. I think another, and this actually maybe relates to what we were talking about earlier about these companies and the acquisitions is

Speaker 8 as it relates to, like, startups and how many there are, there was a period, that I think has been dying down at least a little bit, um, in San Francisco, where if you're an engineer who, like, worked in AI for a month, you basically just get a term sheet stapled, like, onto your forehead the second you leave and, you know, show up to a VC. Which I think is not good, because you don't get, like, the Travis Kalanicks or the Brian Cheskys in a world where you're encouraged to do things like that.

Speaker 8 Like, anytime anyone asks me, like, hey, Matan, you know, I'm thinking about starting a company, I will always say no. Always.

Speaker 8 Because if me saying no discourages you from starting a company, then you absolutely should not have done it.

Speaker 8 And I think like, there's almost like too much help and too much like, yeah, you know, go do it, go start it.

Speaker 8 Because then it leads to some of these things we were talking about where the second the going gets tough, it's like, all right, acquisition time.

Speaker 8 And this is maybe my localized view because I live in San Francisco and that's like, you know, what I see more day to day than some of like the more macro trends.

Speaker 8 But I think the first place we would see a correction like that is in, I mean, coding, for example. There are like 100 startups in the coding space.

Speaker 8 You know, perhaps there will be fewer that are funded because it's like, hey, you know, at this point, maybe it's not as relevant.

Speaker 8 Or, you know, the Nth AI personal CRM, like that's another one that's, there's been like a million companies there.

Speaker 8 The correction might look like, at least at that level, you know, funding being a little more difficult, let's say.

Speaker 8 And then the way that that relates to the foundation model companies is I think eventually you'll get to a point where they can subsidize inference less, which just means growth probably slows.

Speaker 8 Like OpenAI and Anthropic, their revenue has been, you know, ridiculously large, but also the margin on that has been pretty negative.

Speaker 8 And so it's basically like, how long can you subsidize and like deal with that negative margin? They're obviously a legendary company. Uber is a great example.

Speaker 8 Amazon's a great example where you can like operate at a loss for a period of time in order to build an absolute monster of a company and then just turn on margin whenever you're ready.

Speaker 8 The question is, how long can you sustain that? And so if there were a correction, I think that would affect that.

Speaker 7 Yeah, it does feel increasingly that AI, the danger of AI isn't adoption or technology, it's a timing and financing problem. And, you know, I look at OpenAI and the amount that they're spending.

Speaker 7 I'm starting to believe that the AI companies who are going to win are the ones who manage their balance sheets the best. And it's really going to be a question of

Speaker 7 financial management because of the thing that you say there, where all of this money is being plowed in, and

Speaker 7 it is a question of how long can you go at an operating loss, which, you know, Uber crushed it. Amazon crushed it.

Speaker 7 There were many other companies that died that did not crush it from that perspective.

Speaker 7 So it will be really interesting to see how that plays out. As someone who is building in Silicon Valley, in San Francisco.

Speaker 7 You've built this incredible company that's generated a ton of heat and press.

Speaker 7 Like you are in AI.

Speaker 8 What does that feel like?

Speaker 7 Like, what does it feel like to be one of the AI people? Does it feel like you're in some special moment in time? Like, what is it like?

Speaker 8 It feels very much like we are still in the trenches because there is a ton that we want to do and that we need to get done. I think for me, the most surreal thing is the team that we've assembled.

Speaker 8 Like, every day, coming in person to our office in San Francisco, it is such a privilege working with, now that we're 40, the smartest people that I've ever met in my life.

Speaker 8 You know, we're in New York right now. We're starting to open up an office here.

Speaker 8 I think that's where it's a little bit like, whoa, like we're now, you know, we have two offices on the opposite sides of the country. It's more just like,

Speaker 8 I think it's just really cool to see over the last two and a half years how dedicated effort can actually like build something that is concrete and meaningful.

Speaker 8 And some of the largest enterprises that we're working with, it's just kind of crazy to sometimes stop for a bit, when it's not, like, the non-stop grind, to think, like, this organization now doesn't have to deal with these problems because of something that we built, because of this random cold email, because of this random hackathon that, you know, I met him at. I think it's, it's just, um,

Speaker 8 it's a very cool visceral reminder that

Speaker 8 you can do things that affect things. Yes. Um,

Speaker 8 And if you are really driven by a good mission, you can make people's lives better in relatively short order. And I think that's a really empowering thought.

Speaker 7 What is something that you think the American population sort of gets wrong about AI, and also about AI founders and the people building this technology?

Speaker 8 Most of the world only knows ChatGPT. Very few people know the others. Like, in San Francisco, everyone's like, oh, which model is better? Like, OpenAI, Anthropic, Google Gemini.

Speaker 8 The rest of the world, it's just like, it's basically just ChatGPT, which I think on one hand is interesting.

Speaker 8 I think on the other hand,

Speaker 8 it is really important for basically every profession to kind of rethink your entire workflow.

Speaker 8 And it is, in fact, I would say it's almost an obligation to like basically take a sledgehammer to everything that you've set up as like your routine and how you do work and rethink it with AI.

Speaker 8 For me, this is actually something that's really important because I'm like the most habit-oriented, like routine person and like constantly, you know, every few months being like, let me try and see how I could do this differently with AI in a way that's not

Speaker 8 like, oh, technology is taking over, but more just like it makes things more efficient and faster and

Speaker 8 more convenient.

Speaker 8 So I think that's one thing is there is so much time that can be saved by spending a little bit of time to, you know, try out these different tools, whether it's something like ChatGPT or, you know, if you're an engineer trying out, you know, something like Factory.

Speaker 8 I think regarding AI founders, it's hard to say because there are so many tropes that unfortunately can be really true sometimes.

Speaker 8 And sometimes it's even frustrating to me because I grew up in Palo Alto and hated startups. Like, really? I hated it.
Like, growing up, in middle school, we would spend time

Speaker 8 walking around downtown Palo Alto. And I remember I have a very concrete memory when Palantir moved into downtown Palo Alto.

Speaker 8 There were all these people in their, like, Patagonias with the Palantir logo. And I remember looking so, like, scornfully at all these people walking by with these Patagonias.

Speaker 8 But yeah, I mean, I think it's maybe actually less about what the rest of the world gets wrong about AI founders and more that, for some of these AI companies, it's really important to leave San Francisco.

Speaker 8 Exit the bubble.

Speaker 8 Like it's a cliche, but like touch grass, go to see the real world, because while San Francisco is very in the future, you know, I've taken a Waymo to work for the last like two years, the rest of the world is still like kind of how it was in San Francisco five years ago.

Speaker 8 And I think it's important to have that grounding because

Speaker 8 if you don't leave, and if you don't have that grounding, you could do things like not put money in your 401k

Speaker 8 and things like that. Not that you need to put it in your 401k, but you kind of get these a little bit warped perspectives sometimes.

Speaker 7 That is really interesting. Does that, I mean, this idea that there is a

Speaker 7 bubble connotes the wrong thing, but there is

Speaker 7 an echo chamber.

Speaker 7 And

Speaker 7 the fact that you're building and you're saying, you know, we're building these offices in New York.

Speaker 7 And the thing that is important for AI, and I think is probably really true, is to kind of go out into the world and understand like, what are some real use cases where this is really going to provide value for people, not just in your enterprise SaaS startup in San Francisco, but anywhere else throughout America.

Speaker 7 Does that worry you as you go further up the chain of power and command in Silicon Valley? Does it worry you, perhaps, that people at the very top aren't doing that enough?

Speaker 7 They're not getting out there and understanding what this technology really needs to be and do for America?

Speaker 8 I would say yes. And I think that's also just a very common problem just generally as organizations scale.

Speaker 8 Or as organizations get more powerful, the people running those organizations inherently get separated from the ground truth of, like, let's say, the individual engineers or individual people who are going and delivering that product to people.

Speaker 8 And I think similarly, they lose touch with their customers as well. I think the best leaders have really good communication lines towards the bottom.

Speaker 8 Yeah, towards the bottom, like the people, the customers they're serving are the people who are like kind of in the trenches, like hands-on doing the work.

Speaker 8 And I think you probably end up seeing this in the results of a lot of these companies because I think it's hard to be a successful company if you don't have some of that ground truth.

Speaker 8 Any good leader, I think, should be concerned about that and should always be paranoid of like, you know, am I surrounded by yes men or am I in an echo chamber and I'm not getting the real like ground truth?

Speaker 8 Yeah.

Speaker 8 Yeah, so that is, that is something I keep in mind.

Speaker 8 We'll be right back.

Speaker 4 What do walking 10,000 steps every day, eating five servings of fruits and veggies, and getting eight hours of sleep have in common?

Speaker 5 They're all healthy choices. But do all healthier choices really pay off?

Speaker 4 With prescription plans from CVS CareMark, they do. Their plan designs give your members more choice, which gives your members more ways to get on, stay on, and manage their meds.

Speaker 4 And that helps your business control your costs because healthier members are better for business. Go to cmk.co/access to learn more about helping your members stay adherent.

Speaker 4 That's cmk.co/access.

Speaker 1 Support for the show comes from Crucible Moments, a podcast from Sequoia Capital.

Speaker 1 We've all had pivotal decision points in our lives that, whether we know it or not at the time, changed everything.

Speaker 1 This is especially true in business.

Speaker 1 Like, did you know that autonomous drone delivery company Zipline originally produced a robotic toy?

Speaker 1 Or that Bolt went from an Estonian transportation company to one of the largest rideshare and food delivery platforms in the world? That's what Crucible Moments is all about.

Speaker 1 Hosted by Sequoia Capital's managing partner Roelof Botha, Crucible Moments is back for a new season with stories of companies as they navigated the most consequential crossroads in their journeys.

Speaker 1 Hear conversations with leaders at Zipline, Stripe, Palo Alto Networks, Klarna, Supercell, and more.

Speaker 1 Subscribe to season three of Crucible Moments and catch up on seasons one and two at cruciblemoments.com, on YouTube, or wherever you get your podcasts. Listen to Crucible Moments today.

Speaker 2 Support for the show comes from HIMS. Hair loss isn't just about hair, it's about how you feel when you look in the mirror.

Speaker 2 HIMS helps you take back that confidence with access to simple, personalized care that fits your life.

Speaker 2 HIMS offers convenient access to a range of prescription hair loss treatments with ingredients that work, including chews, oral medications, serums, and sprays.

Speaker 2 You shouldn't have to go out of your way to feel like yourself. That's why HIMS brings expert care straight to you with 100% online access to personalized treatment plans that put your goals first.

Speaker 2 No hidden fees, no surprise costs, just real, personalized care on your schedule.

Speaker 2 For simple online access to personalized and affordable care for hair loss, ED, weight loss, and more, visit hims.com slash profg. That's hims.com slash profg for your free online visit.

Speaker 2 hims.com slash profg.

Speaker 2 Individual results may vary.

Speaker 2 Based on studies of topical and oral minoxidil and finasteride. Featured products include compounded drug products, which the FDA does not approve or verify for safety, effectiveness, or quality.

Speaker 2 Prescription required. See website for details, restrictions, and important safety information.

Speaker 7 We're back with first-time founders. Who is like AI Jesus right now? Is it Jensen? Is it Sam Altman? Is it Mark Zuckerberg? Like in San Francisco, who's the guy? Who do people revere?

Speaker 8 I mean, it's got to be Jensen.

Speaker 8 Like Sam, you know, a lot of wins, some losses. Zuck, a lot of wins, a lot of losses.
Jensen, that guy just grinded for 30 years.

Speaker 8 I remember when I built a computer at home to play, like, video games on a PC, I bought an Nvidia chip. And in my mind, it was like, Nvidia, you know, they're the video game graphics card company.

Speaker 8 Now they are the most valuable company in human history with no signs of stopping. And he just grinded it out for 30 years.
Like it is the most respectable thing. He's also the nicest dude.

Speaker 8 And he has no, like, he doesn't have enemies.

Speaker 7 You've met with him, right?

Speaker 8 I have. He's extremely generous with his time.

Speaker 8 He also, this guy, like, knows every little detail about Factory. I don't know how he has the time to do these things, but he is a killer. He's really good.

Speaker 7 When you think about sort of the long-term future of AI, for many years it was, you know, AGI is coming, and think about all the things it can do.

Speaker 7 Think about how it could solve diseases. Think about how it could cure cancer.

Speaker 7 And then I see, like, erotica GPT, and I see the Sora AI TikTok feed. And I'm sort of like, what happened to the big vision? We're back to sort of Pornhub meets TikTok, but it's got AI.

Speaker 7 How do we expand the vision of AI? What is the grand vision for AI? And do you think it's going to really come true?

Speaker 8 Well, so I think on one hand, like, you know, the pure slop that is this, like, AI Sora, or the one that Meta announced.

Speaker 7 Vibes AI.

Speaker 8 Yeah, Vibes AI. I think on one hand, it's very...

Speaker 8 In a certain weird sense, it is beautiful in that it is just like pure human nature. Like, what do we do when we have really good technology? Like, let's make porn.
Like, that's the first thought.

Speaker 8 And in a certain sense, it's like, okay, I'm glad that even though we're generating all this technology, we're still humans at our core.

Speaker 8 We overestimated ourselves when we thought we could cure cancer. But then on the other hand, there are still people who are doing really great work.

Speaker 8 Like one of my friends, Patrick Hsu, who runs Arc Institute, they're doing AI for biotech research and biology. And I think they're doing a lot of really cool work.

Speaker 8 And maybe this actually relates to something we were talking about earlier, which is, you know, people kind of at a first glance might have a little bit of an existential crisis of, you know, intelligence is now commoditized.

Speaker 8 So there's now, like some people are saying, you know, we both live in a world where if we have children at some point, our children will never be smarter than AI, right?

Speaker 8 Like we both grew up in a world where we are smarter than computers for at least a period of time.

Speaker 8 And our kids would never know that world, which is a little bit crazy because, you know, a huge part of growing up is going to college, becoming really smart in some certain area.

Speaker 8 And so I think now we're having a little bit of a decoupling of human value being attributed to intelligence.

Speaker 8 But then there's a natural question of like, okay, well, you know, we were sold this vision about, you know, let's say even the American dream of like, if you work really hard, get really good at this one thing, then you'll have a better life.

Speaker 8 But now it's like, you're never going to beat the intelligence of this computer. So what is the thing to strive for?

Speaker 8 And I think this actually relates to, like, the AI porn versus the AI curing cancer, which is, in my mind, the new primitive, or maybe the new North Star for humans, is agency, and which humans have the will to use it.

Speaker 8 Like, yes, you can like hit the hedonism and just watch AI porn and play video games all day, but who has the agency to say, No, I'm gonna work on this hard problem that doesn't give me as much dopamine, but like because of the will and agency that I have, I'm choosing to work on this instead.

Speaker 7 Wow.

Speaker 8 And I think that might be the new valuable thing, that if you have that in large quantities, maybe that's kind of what brings you more meaning.

Speaker 7 Why do you go to agency versus many other things? For example, you mentioned your friend who's working on issues in biotech. Maybe that is a question of having the right values.

Speaker 7 I mean, not to get, like, mushy, but maybe a value would be kindness, or a value would be creativity. There are lots of things out there that you could pick and choose from.

Speaker 7 Why is it agency in your mind?

Speaker 8 I guess the way that I think about it, it's like the agency to go against maybe like the easiest path for dopamine. Yeah.

Speaker 8 Or like the like the natural, like human nature, like just give me like the good tasting food, the porn, the video games, the like, you know, easy, fun stuff.

Speaker 8 And I think maybe part of agency has to do with values. Like if you value creativity and if you value kindness, and you know, I think that is something that might motivate more agency.

Speaker 8 The agency is basically, at least the way I think about it, the will to endure something that is more difficult for maybe a longer-term reward, whether that's the satisfaction of, you know, bringing better healthcare to people, or satisfying that curiosity.

Speaker 7 It's interesting because you say the word agency and you are building agents and there's like a parallel there.

Speaker 7 And it's almost as if the people who are really going to win are the people who can have some level of command and directive agency over these AI agents.

Speaker 7 It's the person who isn't just going to do what they're told by the guy who controls the AI agent and says, okay, create this code. It's the person who can actually tell the agents what to do.

Speaker 8 Yes.

Speaker 7 And that's the direction that you believe humanity and work should be headed.

Speaker 8 100%.

Speaker 8 And I think that's also like, if you think back to like the people that you've met in your life that come across as like particularly intelligent or like, you know, remarkable in whatever capacity, oftentimes it's not raw IQ horsepower.

Speaker 8 Like you'll note that when you meet someone with high IQ, it's pretty easy to tell.

Speaker 8 But growing up in the Bay, there are so many people that are very high IQ, but aren't that high agency or, like, independent-minded.

Speaker 8 And I think those are the people that oftentimes really leave a mark, where you remember, oh, that person, maybe they weren't even that high IQ, but they were very independent, high agency.

Speaker 8 And I think that now is going to be much more important because, great, you might be born with a high IQ, but everyone has access to the AI models that have this intelligence.

Speaker 8 So it's not really a differentiator anymore.

Speaker 8 The differentiator is, do you have the will to use those in a way that no one has thought of before or in a way that's difficult, but to get some longer-term task done?

Speaker 7 It's really interesting because what you're describing is like, how do you, what can I do that AI cannot do? And what you're saying is AI cannot think for itself.

Speaker 7 It cannot be an independent, creative-minded creature. It can be a math genius.
It can solve problems within seconds, but it can't have the willpower to decide this is what I want to do.

Speaker 7 This is what is important to me.

Speaker 8 This is what has value, which I think is definitely right.

Speaker 7 We have to wrap up here. I just want to note, I saw a tweet, I think from yesterday, that you put out there, and it shows this competition of all the different coding agents.

Speaker 7 So you've got Cursor, and you've got Gemini, and you've got OpenAI's coding agent. You are number one in agent performance.

Speaker 8 That's right.

Speaker 7 What does that mean? What does it mean to be number one? And how are you going to take that moving forward?

Speaker 8 This is a benchmark that basically does, like, head-to-heads of coding agents. And they use, like, an ELO rating system, so it's like chess.

Speaker 8 You know, at a high level, in chess, let's say you have 100 losses against someone that's equal skill to you, but then you beat Magnus Carlsen, you can have an incredibly high chess rating.

Speaker 8 So this is, like, an ELO rating system where it gives two agents the same task and then just has humans go and vote on which solution they liked better, like the one from, let's say, Factory versus OpenAI or Anthropic's.

Speaker 8 And we have the highest ELO rating. So in these head-to-heads, we beat them, which is pretty exciting.
I think it's exciting on a couple of fronts.
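For readers who want the mechanics, here is a minimal sketch in Python of how an Elo-style update could work for these head-to-head votes. The K-factor, starting rating, and agent names are illustrative assumptions, not details of the actual benchmark being described.

# Minimal Elo-style update for pairwise agent comparisons (illustrative sketch).
K = 32               # assumed: how strongly one vote moves the ratings
START_RATING = 1000  # assumed starting rating for every agent

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that agent A beats agent B under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a: float, rating_b: float, a_won: bool) -> tuple[float, float]:
    """Return new (rating_a, rating_b) after one human vote on the same task."""
    expected_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + K * (score_a - expected_a)
    new_b = rating_b + K * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# Hypothetical usage: two agents solve the same task, a human picks the winner.
ratings = {"droid": START_RATING, "other_agent": START_RATING}
for droid_won in [True, True, False, True]:
    ratings["droid"], ratings["other_agent"] = update(
        ratings["droid"], ratings["other_agent"], droid_won
    )
print(ratings)  # the agent preferred more often ends up with the higher rating

Each vote nudges the winner's rating up and the loser's down, with bigger swings when a lower-rated agent beats a higher-rated one, which is the chess analogy being drawn here.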

Speaker 8 One, we've raised obviously very little money compared to a lot of the competitors that are on that list. And I think that goes to show that, in a lot of these cases, being too focused on the fancy stuff, like, let's train the model, let's do the RL, let's do the fancy fine-tuning, sometimes doesn't give you the best, like, ground truth of what is the best performing thing for an engineer's given task.

Speaker 8 Benchmarks are very flawed. They're not fully comprehensive of everything an agent can do.

Speaker 8 But I think it's helpful when developers have a lot of choices out there to try and say, okay, well, like which one should I use?

Speaker 8 This one is nice because it's pretty empirical, like, developers seeing two options and picking one, and then consistently our droids win, which is pretty fun.

Speaker 7 Final question. What does the future of Factory look like?

Speaker 7 What do you think about when you look at the next 10 years?

Speaker 8 10 years is very hard because AI is pretty crazy, and I think humans are bad at reasoning around exponentials.

Speaker 8 I would say the next few years are about bringing about that mission of, you know, that world of developers being able to delegate very easily and just have a lot more leverage.

Speaker 8 Developers won't need to spend hours of their time on code reviews or documentation. And I think more broadly, that turns software developers into, like, more cultivators or orchestrators, and allows them to use what they have trained up for so many years, which is their systems thinking.

Speaker 8 What makes engineers so good is that they're really good at reasoning around systems, reasoning around constraints from their customers, from the business, from the underlying technology, and synthesizing those together to come up with some optimal solution.

Speaker 8 And with Factory, they get to use that to its fullest extent much more frequently in their day-to-day.

Speaker 8 And I think that is a net good for the world because that means there will be more software and better software that is created, which means we can solve more problems and solve problems that weren't solved before, which I think on the net is just better for the world.

Speaker 7 Matan Grinberg is the co-founder and CEO of Factory.

Speaker 8 This was awesome. Thank you.
Thank you, Ed.

Speaker 7 This episode was produced by Alison Weiss and engineered by Benjamin Spencer. Our research associates are Dan Shallan and Kristen O'Donoghue, and our senior producer is Claire Miller.

Speaker 7 Thank you for listening to First Time Founders from ProfG Media. We'll see you next month with another founder story.

Speaker 10 Adobe Acrobat Studio, so brand new.

Speaker 9 Show me all the things PDFs can do. Do your work with ease and speed.
PDF spaces is all you need.

Speaker 10 Do hours of research in an instant.

Speaker 9 With key insights from an AI assistant.

Speaker 10 Pick a template with a click. Now your prezzo looks super slick.
Close that deal, yeah, you won. Do that, doing that, did that, done.

Speaker 11 Now you can do that, do that with Acrobat. Now you can do that, do that.

Speaker 10 With the all-new Acrobat. It's time to do your best work with the all-new Adobe Acrobat Studio.

Speaker 15 Millions of players.

Speaker 6 One world.

Speaker 6 No lag.

Speaker 15 How's it done?

Speaker 15 AWS is how.

Speaker 15 Epic Games turned to AWS to scale to more than 100 million Fortnite players worldwide, so they can stay locked in with battle-tested reliability.

Speaker 15 AWS is how leading businesses power next-level innovation.

Speaker 14 Support for this show comes from Odoo. Running a business is hard enough, so why make it harder with a dozen different apps that don't talk to each other? Introducing Odoo.

Speaker 14 It's the only business software you'll ever need. It's an all-in-one, fully integrated platform that makes your work easier.
CRM, accounting, inventory, e-commerce, and more.

Speaker 14 And the best part? Odoo replaces multiple expensive platforms for a fraction of the cost. That's why thousands of businesses have made the switch.
So why not you?

Speaker 4 Try Odoo for free at odoo.com.

Speaker 6 That's odoo.com.