Scaling Legal AI and Building Next-Generation Law Firms with Harvey Co-Founder and President Gabe Pereyra

In just over three years, Harvey has not only scaled to nearly one thousand customers, including Walmart, PwC, and other giants of the Fortune 500, but fundamentally transformed how legal work is delivered. Sarah Guo and Elad Gil are joined by Harvey’s co-founder and president Gabe Pereyra to discuss why the future of legal AI isn’t only about individual productivity, but also about putting together complex client matters to make law firms more profitable. They also talk about how Harvey analyzes complex tasks like fund formation or M&A and deploys agents to handle research and drafting, the strategic reasoning behind enabling law firms rather than competing with them, and why AI won’t replace partners but will change law firm leverage models and training for associates.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @gabepereyra | @Harvey

Chapters:

00:00 – Gabe Pereyra Introduction

00:09 – Introduction to Harvey

02:04 – Expanding Harvey’s Reach

03:22 – Understanding Legal Workflows

06:20 – Agentic AI Applications in Law

09:06 – The Future Evolution of Law Firms

13:36 – RL in Law

19:46 – Deploying Harvey and Customization

23:46 – Adoption and Customer Success

25:28 – Why Harvey Isn't Building a Law Firm

27:25 – Challenges and Opportunities in Legal Tech

29:26 – Building a Company During the Rise of Gen AI

37:24 – Hiring at Harvey

40:19 – Future Predictions

44:17 – Conclusion

Press play and read along

Runtime: 44m

Transcript

Gabe, thanks for doing this. Of course.
Thanks for coming. Maybe we can just start with, like, for anyone who hasn't heard of Harvey, what is the company?

Can you talk about the scale and who you serve today? At Harvey, we're building AI for law firms and large in-house teams. We're almost at 1,000 customers, 500 employees.

We started just over three and a half years ago, and we've been scaling quickly since then. And you guys were some of our seed investors.
So yeah, good to be here.

Maybe from the most basic perspective on the product, why is it not just, you know, Copilot or ChatGPT or Claude? Yeah, I think that's how the product started.

So when we first raised from OpenAI, we got access to GPT-4.

And I think GPT-3 to GPT-4 was such a big model jump that the intuition at the time was just give the model to lawyers and have them play with it. And I think the industry was so text heavy that

you got so much value from just interacting with the models.

And then I think as soon as you gave it to lawyers, you also ran into all of the sharp edges of the models of they hallucinate, they're not connected to a bunch of our context.

And so I would say the first two years of the company were about how do we build essentially the IDE for lawyers around these models, one that connects them to all of the context you need to be productive as an individual lawyer.

But I would say in the past year and going forward, the big problem we're solving is not how do you make individual lawyers more productive?

It's how do you make a team of lawyers working on a client matter more productive?

And more importantly, how do you make an entire law firm working on thousands of these client matters more productive and more profitable?

And so I think when you get to that scale, a lot of the problems we're solving are not just model intelligence problems.

They are these orchestration, governance, and kind of all of the enterprise product problems that you run into at scale.

You've also been broadening from just law firms into enterprises into big companies using you in concert with both their in-house legal teams and external counsel.

Can you talk more about that and how that's been evolving as well? Yeah. So we started selling to the largest law firms.

And something that started happening about a year and a half ago was these law firms started showing Harvey to their clients.

And their clients both wanted to collaborate more effectively with their law firms. And they also wanted to use this directly in their in-house department.
So we recently announced we signed Walmart.

We're working with AT&T, a bunch of the Fortune 500, large private equity firms, the Global 2000, kind of the largest consumers of legal services. And what we're starting to build is...

a platform for the in-house teams to do the work that they do internally.

So things like contracting and this long tail of all of the legal operations you need to do that you typically don't send out to law firms, but also the collaborative tissue of I'm working on a large transaction or litigation.

I need outside expertise. I want to securely share this data with my law firm.

And there's a lot of technical problems there around security, data privacy that we want to solve so these law firms and their clients can collaborate effectively.

I think, you know, we have a largely technical audience, but also most people don't know exactly what a legal workflow looks like. Yeah.

I think before we really started working together, I imagined it as just like, well, what do you mean?

I like email my lawyer and he thinks about it and like reads a document and then sends something back.

And there's like redlining involved somewhere and maybe there's negotiation. Yeah.
Can you paint a picture of just what workflow means to you guys? Yeah. Yeah.

And I think a lot of people, when they think about legal, they think of consumer legal. And so I have a lease and I need to get input on that.

That's completely different than what these massive law firms are doing.

And so I think a really good example of what these firms are doing that, I mean, you guys will be familiar with, and I think a lot of people in the startup space will be familiar with is what law firms do for venture capital firms or private equity firms.

And so VCs, PE firms, they do two main things. You raise money and you invest it.
And so the main thing is- And we do podcasts.

And podcasts, which is actually important, but there's less legal work there.

And the important things you need to do there are fund formation, so how do I structure the entity that is going to hold all that money? It sounds easy, but if you're a large private equity firm, you have a sovereign wealth fund that comes in and says, we need to structure it this way because of tax implications. Then you have a pension fund that has these other requirements. And so it ends up being this incredibly complex process of how do you draft the limited partnership agreement, which can be 100 pages. Every investor that's investing, and you can have a hundred, has a side letter that modifies it, and you need to understand, if I modify it this way, it's going to have these implications.

And a lot of it is the project management that also goes around coordinating all of these pieces if you're raising a billion-dollar fund.

And then once you've created that fund, there's all the investments you do out of that fund. And so, for example, when we did any of our series, you need to get a data room.
We share a bunch of data.

You look at that. You need to understand the contracts we have to make sure that the revenue we say we have is actually structured in the way we've claimed.

And, you know, is there litigation, all these things. And so it's this massively complex process of understanding. And I think one analogy you can think of is it's like understanding a code base, but the code base is all of these contracts and all this legal work. And I think the reason

legal is so difficult is the workflows aren't structured. In the same way, with programming, it was really hard, until these models, to build tools for programmers.

You basically just had an IDE, and then programmers did stuff in all the different languages, but you didn't have, oh, here's a tool for Python, here's a tool for C++.

Legal is kind of the same way. And so I think a lot of why you're seeing traction in programming and legal is I think there's a lot of analogies of these workflows of they're so text heavy.

And the workflows, until you had these models, you couldn't structure them in the way I think you can now.

So one of the directions that people are going on the coding side is to build things that are being called like agentic.

And it's very early on in terms of what agentic means, but basically being able to deconstruct a logic tree in terms of what are the set of actions that you need to take in a certain situation.

And then having the AI agent go back and sort of check each one of those items, do it, go on to the next item, double check it against the prior one. Do you do that from a legal perspective?

Or is that something that's a little bit more in the future relative to where code is today? Yeah, we're starting to do this now. And actually,

like when I was at DeepMind, a lot of the... RL research I did was that.

And so when we first got access to GPT-4, we had the very strong intuition of, okay, you're going to be able to, you know, string together a bunch of these model calls or eventually do things like reasoning models where the full agent is differentiable.

And

even the first day we got access to GPT-4.

Winston went in his room for 14 hours and just redid a bunch of his associate tasks.

And when I looked at the work he was doing, it was essentially this hacky agentic workflow where he said, okay, I would need to go look up this case law, summarize it, take that summary, use it to draft.

And so seeing him do that gave us the intuition very early on of that's the direction this is going. And you can kind of think of associates as agents.

They get this task from a partner that's, hey, I have this high-level case strategy. I want to see if I can find a bunch of case law that supports it.

Can you go research that, look it up, cite it, write me a memo. And so a lot of the systems we're starting to build look a lot like that.
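To make that associate-as-agent pattern concrete, here is a minimal, hypothetical sketch of a research-then-draft loop with a review step at the end. Every name in it (Matter, search_case_law, draft_memo, partner_review, and so on) is an illustrative assumption for the sketch, not Harvey's actual system or API.

```python
# Hypothetical sketch only: an "associate as agent" loop that decomposes a
# partner's task into research, summarize, draft, and review steps.
# All names below are illustrative placeholders, not a real API.

from dataclasses import dataclass, field


@dataclass
class Matter:
    """Minimal stand-in for a client matter: the 'environment' the agent works in."""
    strategy: str
    notes: list[str] = field(default_factory=list)


def search_case_law(query: str) -> list[str]:
    # Placeholder: a real system would query a legal research index.
    return [f"Case law supporting: {query}"]


def summarize(cases: list[str]) -> str:
    # Placeholder: a real system would condense the retrieved authority with an LLM.
    return " | ".join(cases)


def draft_memo(strategy: str, research_summary: str) -> str:
    # Placeholder: a real system would draft with an LLM conditioned on the research.
    return f"MEMO\nStrategy: {strategy}\nSupporting authority: {research_summary}"


def partner_review(memo: str) -> tuple[bool, str]:
    # Placeholder feedback signal: a partner (or a learned reward model) approves
    # the draft or sends it back with a note.
    approved = "Supporting authority:" in memo
    return approved, "" if approved else "Needs authority cited for each claim."


def run_associate_agent(matter: Matter, max_rounds: int = 3) -> str:
    memo = ""
    for _ in range(max_rounds):
        cases = search_case_law(matter.strategy)      # research step
        summary = summarize(cases)                    # summarize step
        memo = draft_memo(matter.strategy, summary)   # drafting step
        approved, feedback = partner_review(memo)     # review / feedback step
        if approved:
            break
        matter.notes.append(feedback)                 # fold feedback into the next round
    return memo


if __name__ == "__main__":
    print(run_associate_agent(Matter(strategy="Motion to dismiss for lack of jurisdiction")))
```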

And I think one interesting direction that the coding labs, the research labs are going is building these RL environments where you deploy these agents and they can interact with a code base and see if they can pass unit tests.

And in legal, that RL environment is a client matter. So you have all of the context of a fund formation, an acquisition, a litigation, and the models are starting to learn.

Let me go in the document management system and see if I can find this, go in the data room or do case law research, get feedback from the partner.

And so I think that research direction is super, super interesting. It's really interesting to make the associate analogy.

Because I remember when I led your guys' Series B, which I think was maybe two years ago now, it's a while ago, I called a lot of your big customers and talked to the head of the law firm, or the head of some of these institutions. And one of the things that I thought was really striking is, number one, they were adopting legal software, which before was really hard to sell into them, and because what you were doing was so striking and important, they were adopting you really fast. The second is, they weren't threatened by it.

And I thought, oh, they'd be threatened because it may augment or eventually replace certain aspects of law, et cetera, or help sort of change that dramatically.

And one of the insights they kept bringing up that I thought was really interesting is they said, as we think ahead, as this sort of AI tooling and agentic workflow spread through Harvey and companies like you,

how do you think about the future of a law firm? Because instead of hiring 100 associates, of which you assume 10 will be partners eventually, maybe you only need 50, maybe you only need 20.

And so are you even hiring enough people to know who'd be a great partner? Yeah. Because you're going to shrink the set of people that are needed to do certain tasks over time.
Right.

Right now, that isn't true. It's augmentation.
It's expanding business. But that could happen in the long run.

How do you think about the future of law or what law firms will look like or the evolution of all that? Yeah, this is a great question. I think it's changed a lot in the past couple of years.

I think something we are starting to talk with law firms about a lot is

how do we think about training the future generation of partners, where, to your point, these law firms have these leverage ratios where you have a lot of associates but far fewer partners. And there is value to that, because not everyone is going to become a partner.

And part of going through that process is how you find, oh, this is the person that I would trust to do this very complex acquisition because they've gone through that experience.

And so I think the part I'm optimistic about is if I think about

over 10 years ago, when I learned to program, it was super painful, right? Like you had to go on Stack Overflow.

It was hard to learn multiple languages because you're like, okay, I'm just going to like learn Python. I'm going to learn TensorFlow or something.
And it was just hard to even learn that.

It was hard to ask questions. When I was at Google, you don't want to ask a bunch of questions because people will be like, oh, you don't know that.
And you'd be stuck all the time. Yeah, exactly.

And now with the models, it's like programming is so fun to learn because you can just be like, here's how to write this in Python, translate it. Why is it written this way?

And you can learn this so much more quickly.

And we see lawyers doing that with Harvey, where they'll say, generate this merger agreement. Why did we structure it that way? And so we're already starting to see some of that.

But I think the really big opportunity for law firms is how do they take all of the internal partner feedback and data that they've created and use that to start training. I think that's one big piece. I think another conversation we're having is, to your point, how do you just generally start restructuring firms? I think this is one where

We have some intuitions, but a lot of it is going to depend on the firm, the region, the size, their specialty, the types of clients they serve.

I think one of the things that's very challenging with law firms is they are really a collection of all these practice areas.

And so the firms that specialize in litigation look different than the firms that specialize in large transactions versus like mid-sized transactions.

And usually the big firms do a collection of these. And so a lot of what we're spending time is practice area by practice area.

Can we go and sit with here's the fund formation group and their private equity clients and start thinking about what that would look like in terms of the workflows, the staffing, the pricing?

And I think it is a really interesting problem, where a lot of the value in the product and the platform is not just the product itself, but how do we help enable these firms to transform. And so when you think about it from that perspective, our goal is how do we make these law firms more profitable. It's not just a product problem, it's thinking about their holistic business, where we fit in, and that bigger picture. Yeah, it's really interesting, because when you look at the set of functions that a partner fills, and I'm thinking in particular of consulting firms and less about law firms, simply because I'm a little bit more familiar with consultancies, some of it's the pattern recognition, the high-level thinking, the strategy.

And then part of it is like the sales

and really being able to make that client connection.

And so it's interesting to, to your point, think about more broadly, how can I augment all parts of their business versus just the legal workflows. Yeah.

And to your point, I don't think that part changes. We're now large consumers of legal services ourselves.

And when we think of the best partners we've worked with, I don't think the models are doing what they do anytime soon. And I think what's interesting is the role of law firm partners actually doesn't change that much, in the same way I don't think the role of very senior engineers changes with this, because you're largely delegating work. What you're getting paid to do is: here's the high-level strategy, here's the right abstractions, go write the code or do the legal research to help me do it, and I will interface with the client. And so my guess is that doesn't change too much, but some of the lower-level functions do change because of this technology.

One of the things that you said that I thought was interesting, in another conversation we were having, was that there's an analogy you could make between a great senior partner, like a Gordon Moodie type, and a distinguished engineer working on systems at Google.

Right.

I think for a more technical audience, or just a general business audience that doesn't really know what Gordon Moodie does, they might assume what Elad said, which is, isn't like 50% of that his network

or his reputation?

But, you know, what you were pointing out is like, there's expertise in the ability to predict like a sequence of arguments that is going to get you to the answer you want or manage risk. Exactly.

How does that, how does that translate to an RL environment or a task for you? Yeah, this is a good question.

So maybe for background context for the audience, Gordon Moodie was a partner at Wachtell, which is one of the top transactional firms in the world, who joined us early on and is now an advisor. And

kind of the analogy I was giving is:

why is a senior, distinguished distributed systems engineer at Google so valuable? And a lot of it is the experience they have architecting these systems, none of which is public.

This won't go into the models for a long time.

And so, if you're building search at Google, these people can just point out, hey, if you build this system this way at this scale, it's going to collapse for some reason that is super not intuitive.

And one of the examples that Gordon talked about early on was he was a part of

when Michael Dell took Dell private and then restructured it and took it public again. And this was like a multi-year,

super complex financial and legal restructuring of an incredibly large business. And when you talk with him, what he's incredibly good at

gives you the same feeling as when you talk with a very senior engineer: he just has the whole picture of this legal entity in his head.

At the time, they had to do the largest debt offering of all time. They had to invent a new financial instrument.

And so it's just understanding if I need to raise this much money to do this part of the transaction, this is how I would structure it.

And so a lot of the value he brings is not just the relationship, it's that technical understanding of how you architect these things, the same way you architect very large software projects.

And I think when that translates to an RL environment, part of what is missing from the public models is

the process of looking at one of these entities and figuring out, given all of the context of I want to do this merger, this is the right way to structure it. Just that process.

And a lot of reasoning trace, right? Exactly. For an expert, just like it would be in code.
Yeah.

And if you looked at that data set for one of those transactions, it would be: the client comes to Gordon and says, I want to do this large merger or acquisition.

And then there would be meetings and emails talking about, okay, this is the background of the two companies, this is roughly how we would structure them, these are all the things we need to look into. And a lot of the data would be Gordon giving these tasks to associates: okay, look into these risk factors of similar transactions we've done. They would do research and say, okay, maybe we could structure it this way. And then he would point out this really subtle thing: hey, actually, in this case, if you structured it this way, this thing's going to happen.

But none of that shows up. All you get from these public mergers is an SEC filing.

And so, you do see the final result, but most of the value or what you need, I think, to eventually improve these models is the decision-making process, the same way you need these reasoning traces to train these models to do kind of any of these reasoning tasks.

One of the, as you mentioned, like the labs are all very focused on

RL scaling in

like coding and math domains.

I think those are highly verifiable, right? Not perfectly so, but how do you think about the appropriateness of law for RL, given it's not as easily verifiable?

Yeah, this is one of the biggest problems. And I remember we had conversations early on when we were trying to figure out what is the right evaluation structure.

So I think the hardest thing about legal is most of these tasks are very long-form text generation. And so

there are definitely subsets of legal work that are super verifiable, like go into this data room and just find all the change of control provisions, where you can build these traditional data sets.

But for something like generate this merger agreement, it's really hard to just give some binary like, this is good or this is bad. And I think this has been like a big research problem.

Like with all the labs we work with, and also internally, there is just this open question of how do you build that reward function.

And if you think of what that reward function is at the law firms, it's the partners, right?

Like at the end of the day, there is no way to verify this besides the senior partner who's done a bunch of these said, yeah, this looks pretty good.

And so internally, these law firms have a bunch of data of here's all the edits that went into this and the feedback.

And so we are starting to think about how do you use that to train these reward functions. But I would say that is one of the really big problems.
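As a rough illustration of what learning a reward function from partner feedback could look like, here is a minimal sketch that treats a partner-edited draft as preferred over the draft it replaced and fits a generic pairwise, Bradley-Terry-style reward model. This is a common RLHF-style recipe stated as an assumption, not a description of Harvey's or any lab's actual training setup.

```python
# Hypothetical sketch: a pairwise reward model trained on (preferred, rejected)
# draft pairs, e.g. a partner-approved edit versus the original draft.
# Random embeddings stand in for a real document encoder.

import torch
import torch.nn as nn
import torch.nn.functional as F


class RewardModel(nn.Module):
    def __init__(self, embed_dim: int = 768):
        super().__init__()
        # In practice the encoder would be a long-context language model;
        # here a single linear head over precomputed embeddings stands in.
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, draft_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(draft_embedding).squeeze(-1)


def preference_loss(model: RewardModel,
                    preferred: torch.Tensor,
                    rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry objective: the partner-preferred draft should score higher
    # than the draft it replaced.
    return -F.logsigmoid(model(preferred) - model(rejected)).mean()


# Toy usage with random embeddings standing in for real draft pairs.
model = RewardModel()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
preferred, rejected = torch.randn(8, 768), torch.randn(8, 768)

optimizer.zero_grad()
loss = preference_loss(model, preferred, rejected)
loss.backward()
optimizer.step()
print(f"pairwise preference loss: {loss.item():.4f}")
```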

But I think one of the interesting things is I think you actually have the same problem in programming, where I think in the short term, programming is verifiable where you can look at unit tests.

But once you get into real software engineering, there is no unit test. It's like, I deployed this.
It's a system design.

Yeah, it's like I deployed this and a million users used it for six months and it didn't crash.

And mergers are the same, where you can make sure the filing is correct.

But at the end of the day, it's like three years later, the companies are still merged and they didn't take on litigation they didn't expect or something like that.

And it's like that is eventually the really valuable human experience, right?

That's what you pay really good software engineers or really good lawyers, where they have that decade-long track record of they can build these systems and they haven't fallen apart.

And a lot of this stuff, the same way you can't unit test it, is hard to verify. So it is, I think, this really interesting open research problem.

One thing that you guys are doing on sort of the other end from pushing the bounds of what Harvey products and the models can do is like just get them deployed.

And you recently started this forward-deployed engineering force.

This is confusing to me because I'm like, well, you're not necessarily an application-building company, which is how people have traditionally thought of FDEs. Like, why are you doing this?

I would say the model that we're operating under is not full Palantir, let's go into the code base and build custom software.

I would say this is closer to like Sierra's agent engineering

program. But what we're starting to run into a lot is, I think early on, we did a really good job of building a horizontal platform.

We did not that much customization for customers in the sense of building specific things for specific customers.

I think the thing that was nice about legal was we could build things like workflow builder and things into the product that would let customers customize the product.

And then for very large customers like PwC, we did some customization.

But now we're getting to the point where, when we're starting to talk with law firms about, hey, we want to take a bunch of this data and help you build a model or build agents,

there is some amount of we need to go into your environment and figure out how to connect all the data. We're starting to connect to a lot of their business systems.

So their billing systems, governance systems.
And then especially when we start working with the Walmarts, the very large banks, the Fortune 500, they're much less standardized than these law firms.

And so there is just this massive amount of work where we go to a large bank and they say, we don't have any document management system for our legal department. Can you just build us one?

And

there is a massive amount of demand of we just want smart technical people to sit here and help us think about our business and our operations and how we should start mapping that.

into Gen AI systems.

And for us, it's a really good way to figure out the roadmap where, for example, GluAL is like one of the fastest growing private equity firms that we recently started working with. And

we meet with them all the time. And they're just like, there's all these things that we feel like we could map into Gen AI.

We don't quite know what it's going to look like, but let's just sit together and figure it out. And so I would say that's a lot of the genesis of the program: how do we just get more people that can work with all these customers and start paving the way on some of these new roadmaps in different verticals.

Yeah, I think what you're describing too is a very standard enterprise playbook.

And I think in Silicon Valley, people almost forgot because of the SAS era that if you're Oracle, if you're Dell, if you're IBM, if you're any of these larger organizations, this is how you sell software.

Exactly. Right.
You have a platform, you have a bunch of customization around it. People have bespoke data sets.

They may not always have the ability internally or enough people to implement certain connectors or systems.

And this is like the standard way to do it. And then as you do it over and over again, you start repeatedly turning that into part of the platform.

And I think a lot of these started with doing something that resembled FDE work, and then you get big enough that you get this implementation ecosystem.

There's all these third parties that will come in and implement. They'll be like a certified like vendor or whatever.
Exactly.

And I think the interesting thing we're actually starting to see is law firms are starting to do this for their in-house clients.

So they're starting to go and take Harvey and go to their clients and say, hey, buy Harvey and we'll help you build all the workflows and implement it because we have the scale and the expertise to build this, where typically these in-house teams,

the smaller ones don't have the budget or the in-house expertise to build this. And so, yeah, I think there are a lot of things there.

That seems like it could be a good revenue driver for the law firms that you work with, in terms of a new business that they can offer. Yeah.

And some of them are starting to think about it that way. Yeah.

I was really struck by it, it wasn't, I don't know if it was day zero, you can correct me, but it was within the first year where the very first version of Harvey was really an individual lawyer productivity tool, right?

I'm an associate or a more senior person at a law firm. I want to get a piece of work done.
Can you just make it less painful?

But the transition to, how do we transform the business, make the business more profitable, organize teams, be in the ecosystem, I think happened pretty quickly.

And like anything that is a business transformation just requires like, you know, a lot of engagement. Yep.
And so, given how much you guys have invested in customer success and how that's driven adoption, I feel like a big piece of it is just how quickly AI has happened. Yeah.
Right.

I would not necessarily have predicted that all the customers you're working with would be like, yes, in year one and two of this company selling, we're adopting. Yeah.

But, you know, part of it is you guys are helping them. Right.
Yeah. No, and I think this was like when I look back, it was still surprising how quickly some of these law firms adopted this.

Like our first customer, we actually met because you introduced us to an ex-partner who was doing business school here. And he introduced us to David Wakeling at A&O.

That was in our first year, and they went from a small pilot to firm-wide and investing in this. And I think,

I mean, I think you're seeing this in a couple of verticals, with Cursor and OpenEvidence, where this technology is so transformative for these industries that are so text-heavy and knowledge-based, and they just haven't had tools like this.

That I think early on, we did find these customers that were like, oh, this is worth really betting on. But yeah, I think the pace has still been pretty surprising.

I asked the internet through X what questions we should ask you.

And a popular one was like, why aren't you guys building a law firm? Are you going to build a law firm and compete with all your customers? Yeah. No, we get this question.

And I mean, I think when we first started Harvey and we were doing research, we actually talked to 30 people from Atrium.

And I think, also interestingly, we talked to Sam and to Jason, who was the GC of OpenAI at the time and had been the GC at Y Combinator when they did the Atrium investment.

What struck us was the people who worked there said it was a really good idea and they were super excited about the prospects and then there were some challenges around the legal and the execution.

But when we dug more into it, the big challenge that they ran into was you're essentially just building two different companies, right? You're building a law firm and you're building a tech company.

And it's already really hard to like build product engineering, do AI, scale sales. And I think the big issue you run into if you try to do both of these is: I think you can only do one thing well.

And doing a law firm well is very different than building a software company well. I think that's one point.
The bigger point is

for us, it feels like the best outcome is if we can figure out how do we help every law firm become an AI-first law firm, not how do we build one ourselves.

And I think the real problem we're trying to solve is can we make every law firm more profitable? And a part of that is how they work with their clients.

And can you make their clients get better, faster, cheaper legal services?

And I think solving that equation at scale is a much bigger opportunity than if you build a single law firm because you get conflicted out. You can't scale this.

And so this is something we don't do, but we've gotten this question. But yeah, I think it's just not the focus for the company.

Analogous to other markets in software, law feels like an area where I've been very surprised personally about

how large the scope of the problem is eventually, if you're really ambitious about what you can do.

I didn't realize, like you were telling me, you know, if you do a really large M&A deal, let's say of two global companies, Microsoft and Activision or something, you're like, there's a hundred outside counsel firms here.

You know why? Because in New Zealand, where both companies have customers, you have like a tax implication and the dude who understands that lives in New Zealand. Yeah.
Oh, it's great. Yeah.

It's crazy. And so I think like, you know, like other markets, like the SMB version of this looks really different from the like high-end enterprise version of this.
Yeah.

And so I do think it's like hard. It just seems hard to imagine like coalescing all of that expertise in a law firm and a software company at the same time.

Versus I'm like, well, Harvey now has like, what, 40 customers in New Zealand?

Exactly. Yeah.
And I mean, if you think about those transactions, it's also not just law firms.

So there's investment banks, and you maybe have PwC or a tax advisor, and there could be an HR consultancy that helps you think about how you're merging headcount.

And so for us, the bigger opportunity seems how do we build the platform that lets professional service providers and their clients collaborate.

And I think, of the problems you need to solve there, the biggest is the secure collaboration across many of these entities, the secure data sharing. How do you build and deploy AI systems across these very complex projects?

And I think to your point, the scope of this, like legal is a trillion, professional services is something like three to five trillion. There's just this massive amount of room to grow.

And we think our expertise is going to be in building the product, the technical systems, the AI systems that enable that.

And we want to give that infrastructure to all of the different law firms rather than compete with them, because I just don't think you can.

I think one thing that's really striking about this sort of wave or era of AI is that there's deeply technical people building giant companies in really different industries.

And you come from a research background. You worked at one of the major labs in terms of foundation models and other areas, RL environments, reinforcement learning.

What has been your biggest surprise in terms of transitioning into being a founder and running a company and building something from the ground up like that?

I think maybe not surprise, but biggest like mental model shift is

I think the 10 years before Harvey, I was doing a mix of mainly AI research and then trying to start companies, but always largely as an IC.

And I think the shift, once this started working and scaling, was just how much I had to change my mental model of the type of company we're building, how you do this at scale, how you operate.

I think that was the biggest surprise, or the thing that I've had to change. But it's been a crazy experience going from, you know, Winston and I in an Airbnb to 500 people in three and a half years.

And then I think also how you build these products at scale and kind of the complexity of this industry, like that has been like a really hard but interesting experience.

It's been amazing to see what you all have accomplished. You know, it's such a short period of time.
And

I was thinking back to when we pitched to both of you guys, like three and a half years ago, and we were like, hey, AI for legal. And you guys were like, sounds good.

But, I mean, we had some of these ideas, but I think they've really, like... I think a really important aspect of that, too, is you all started this company

before GPT-4 came out and before a lot of the shifts in the models happened. And so I remember you showing side by side GPT-3.5 versus 4 and what you were doing worked on 4, but not on 3.5.
Yeah.

And you were part of that very early wave that had conviction this was so important as a trend. Was that because of your experience in the labs? Was it something else? Like what drove you?

Because not many people were actually starting AI companies when you all got started. Yeah.
And it was kind of just to your point, AI plus legal, like nobody was doing that. Yeah.
Yeah.

It's something where now everyone's like, oh, this is such an obvious idea. But at the time,

yeah, now, yeah, text in, text out. But at the time, yeah, no one was thinking about this.
I think it was a combination of a couple things.

A lot of the best people I had worked with at the time had gone to OpenAI. And so I was working on large language models at Meta.
You saw GPT-1, GPT-2, GPT-3.

If you were working in AI for the past 10 years,

kind of one of the big problems was how do you pull all this together? Because you built systems where, okay, this is really good at vision.

This is really good at like specific things, but no one really had the like general solution. And

you saw things like LaMDA.

And I think with that trend, what I'd seen is anytime you make that initial, okay, this is how you do it, you can usually just scale and the stuff keeps working. And so with GPT-3 and 3.5,

you were like, this is getting really interesting, but it's not quite there. And so the bet was, okay, OpenAI may be one of the people to crack it.
Like, I know a lot of the people there.

That was part of it. And then I think the other big part was just that Winston was a lawyer.
And I think we had become super close.

Never thought we'd start a company together, but just the way I heard him talk about the legal industry, like he...

even though he was a first-year associate, just had this intuition of not just the work he was doing, but the structure of the firm.

Like, I would hear him talk about the firm and be like, here's what all the different partners are doing. Here's like why our firm strategy is this way.

He was in the process of convincing some partners to leave to start a law firm with him, which is insane. And so it was just like, okay, this will be really fun.

He showed me a bunch of his legal tech. It was like, this seems like the perfect application.
And then when we saw GPT-4, I was just like, oh, the time is now.

Like, this is the perfect like application.

I think it is really noteworthy. Where I was, actually, you know, even six months into

working with you guys, being like, well, are capabilities really going to advance that quickly? And both you and Winston were like, absolutely. Right.
Like, we should, we should have the ambition to

take on the full complexity of like any type of legal work that's possible because the models will keep getting better.

And that seems like a super obvious, mainstream point of view today. Yeah.

But in, I don't know, the middle of 2022, it was a strong, unique intuition to have. Yeah.
Yeah.

I think that was something we did really well, where we just had this belief. It's the same thing you see with the programming products: if you had built something where all it does is check that your Python code doesn't have bugs, which you could have done with 3.5, you wouldn't have built something like Cursor.

And the intuition was just these models can help you do any programming task in any programming language. And I think we felt that same way in legal.
And I had a bit of intuition.

I did like a bit of investment banking, private equity, and it was the same workflows where you could just do any of them with these models.

And so I think keeping the product open-ended enough that it gave us the room to now build into all these things, like other professional services, I think that was super important.

It's a really interesting analogy, because for code, it took an extra two years, I think, for the main coding companies to really emerge as the ones that are likely to win. Yeah.
Right.

And so you folks started, I think,

three and a half years ago, and you had a product almost immediately and you were up and running really fast. And then I think

Cursor didn't really launch its IDE until 24 months ago, something like that. Yeah,

and then Cognition was slightly in that era. And then obviously Claude Code.
Six months later.

So everything kind of came kind of in a time-delayed way for code, even though GitHub Copilot was one of the first products and everybody knew that that was really important.

And I think that's really interesting, because there were so many coding companies that got started under that premise, but somehow it's the ones that started a little bit later that really took off. And so I always wonder, why is that? You know, what caused that?

My guess is, part of my intuition here was just that you guys were, let's say, very capability-pilled from the beginning (a less trendy phrase now), right? So both Winston and you, a sort of investment banker turned AI researcher, were like, it's going to be able to do so much.

I think the coding people thought that too. Yeah, they were.

I think people were a little less ambitious like three years ago. At least

some people were going to go and build giant models. And, you know, I actually feel like people were very ambitious.
I just think that maybe you folks immediately focused on product.

And that was part of the difference. I think it was finding the right
form factor. And I think in legal, it was maybe a bit more obvious where the initial form factor was essentially like,

like the initial feature we built that none of the products had at the time was upload a document and do something with it, right? And that is a lot of legal tasks. Yeah.

And it was that, and then do really accurate citations. And when you showed people that, they were like, oh, this is crazy because that's so much of my job.
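As a toy illustration of that "upload a document and do something with it, with accurate citations" shape, here is a minimal sketch that chunks a document, retrieves the most relevant passages for a question, and returns them with chunk-level citations. The chunking and keyword scoring are deliberately naive stand-ins, assumptions made for the sketch rather than how any production system actually works.

```python
# Hypothetical sketch: document-grounded Q&A with chunk-level citations.
# The retrieval is naive keyword overlap; a real system would use embeddings
# and an LLM constrained to answer only from the retrieved passages.

from dataclasses import dataclass


@dataclass
class Chunk:
    chunk_id: int
    text: str


def chunk_document(text: str, size: int = 400) -> list[Chunk]:
    # Split the uploaded document into fixed-size chunks.
    return [Chunk(i, text[start:start + size])
            for i, start in enumerate(range(0, len(text), size))]


def retrieve(chunks: list[Chunk], question: str, k: int = 3) -> list[Chunk]:
    # Rank chunks by keyword overlap with the question.
    terms = set(question.lower().split())
    ranked = sorted(chunks,
                    key=lambda c: len(terms & set(c.text.lower().split())),
                    reverse=True)
    return ranked[:k]


def answer_with_citations(question: str, document: str) -> str:
    chunks = chunk_document(document)
    support = retrieve(chunks, question)
    citations = ", ".join(f"[chunk {c.chunk_id}]" for c in support)
    passages = "\n".join(f"[chunk {c.chunk_id}] {c.text.strip()}" for c in support)
    return f"Cited passages: {citations}\n{passages}"


print(answer_with_citations(
    "Is there a change of control provision?",
    "...This Agreement may be terminated by either party upon a change of control "
    "of the other party, subject to thirty days' written notice..."))
```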

I think with coding, the initial models were also not quite as good, that you needed maybe a bit more capabilities of the base models.

And then you needed, I think, to figure out the right way to integrate this into the IDE.

But I mean, I remember where it's like the first version of the product that I built, it was mainly like I used GPT-4 because like most of my background was like distributed systems and AI research.

And I was, I still don't know React. I was just like, yeah, JavaScript and kind of like putting this together.
But I'd be like, hey, GPT-4, like help me make this.

And like you could kind of already see it at the time with programming. And that was part of what gave me the like intuition that I could analogize to like what Winston was doing.

You mentioned that you folks have gone

from basically the two founders to 500 people over the last three and a half years or so. You're obviously growing really quickly.
The business is working. You know, you have tons of customer demand.

What are you hiring for? What are you looking for in terms of the next set of employees or what types of roles are you hiring for right now? Yeah, on the technical side, so we mentioned FDEs. I think

looking in general, like across roles of just strong engineers, and then I would say maybe specific call outs. We just hired a site lead for New York, so starting to scale up that office,

more folks on kind of front end and scaling product in general and then

more AI folks as well. But yeah, any strong engineer, please apply. Okay, last question for you. How many pull-ups can you do? Just kidding, we did find out. Yeah, well, I don't know if that was the max, but these guys can both do 15 pull-ups.

with a wink in the middle

Okay, okay, okay, guys, we get it.

You mean in what, one set? In one set, in one set. Okay, we've got to do the 24-hour challenge. Oh, what's that? Just how many can you do in a day? No.

Yeah. It's a lot.
Yeah. You can upload to your TikTok.
Let's put it on the No Priors TikTok. Does anyone have a TikTok? No.
Okay. I don't know.

I don't have a TikTok. Yeah.
Do you know what TikTok is very good for? Erewhon videos. Oh, so good.
Yeah, that's really good. Super good.
Yeah, there's some really funny. Erewhon videos? Yeah.

Like the LA. Yeah, like Spoos and Erewhon, or like people like, like there's one where it's the Miami girl visits Erewhon.
Oh my God. She's like, Eriswan?

Like, are you one?

So it's very good, I highly recommend it. They'll have that. Yeah, we'll send her a couple. Yeah, no, that and Twitter, I feel like that's where all my time goes. Yeah, these TikTok Erewhon videos.

Okay, you can pick, we'll pick one of these two. I asked some other people involved in the company

what questions I should ask Gabe. Oh boy. We covered some of them, but one of them was, why do you still sleep on an air mattress?

Okay.

So I don't sleep on an air mattress. I have a good mattress.
I don't have a bed frame, which is where that's coming from. And so what happened is, when we moved from LA to SF, my bed frame broke.

And then the first year and a half of the startup, things were so crazy. I was like, this is what a startup founder should do.
And at some point, I was like, I need to get a bed frame.

And I ordered one and

it came. And I got a call from the apartment.
And they were like, hey, you didn't sign out. You didn't fill out the insurance, like the renter's, the mover's insurance.
So we can't let them bring this up.

And I was like, okay, I called them. I was like, hey, do you guys have renters insurance? They're like, we're UPS.
We don't, they're just like, we don't do that.

And then I was just like, I don't have time to deal with it. And I haven't dealt with it.
I was just like, we have other problems to solve. Okay.
Okay.

So you physically can't do anything except the company. Right.
Yeah. Yeah.
And pull-ups. Exactly.
Yeah. Other question was,

you know, there's a bunch of foresight in starting Harvey when you guys did. When you look forward, do you have a prediction that you think others don't necessarily agree with you right now?

That is not mainstream. So one comment I'll definitely make on the foresight is, we've gotten a lot of comments like, oh, overnight success.
And oh, you saw this coming.

And I would say I actually just spent the decade before Harvey trying to start a company like Harvey. So I think it was just, I was super early.

And then eventually it was like, oh, now is the right time. And then you were kind of in the right position.

My guess is, people now are catching up to how capability-pilled, as you called it, Winston and I were. I think people

in Silicon Valley have a good sense of where these models are going, but I think generally people don't appreciate how much better they're going to continue getting. It's hard to internalize. It's really weird. Yeah, it's really weird. And I build things and I'm like, oh my god, codegen works. It just really works now. It's crazy. Yeah. And to me, I think the interesting thing will be the transition from these models being really smart individually to, if you think about a lot of what we've done in the past 20 years with SaaS, it's how do we use software to make these massive organizations work?

And I think that will be the continued trend, where a lot of what we're starting to think about is, law firms have 10x'd in size compared to before computers and the internet.

And I think that's going to happen again, but in like maybe a different way than the past 20 years. But I think that to me, like a lot of people still talk about co-pilots and individual productivity.

And I think a lot of

the things we're starting to think about are organizational productivity and how you build these systems at scale. Where, even for our internal engineering team, I think a really interesting question for the Cursors, the Codexes, is that making someone program 20% faster doesn't make you build a product 20% faster. And so we're starting to think about what is the broader infrastructure you need so these companies can develop software and product faster, and then the same analogy applies to legal. I think that's one of the things we're thinking about that I maybe don't hear people talk about as much.

Kind of collaborative AI in some sense. It's sort of like the Figma transition of your individual contributor designer versus working collaboratively with the design team.
Exactly.

And what you're talking about is doing that for law, doing that for code, doing that for different verticals, and having AI as a layer on top of that. So it's super interesting.
Yeah.

And I think to that point, it's like, how are humans and AIs going to work together super effectively?

Because even at these large companies, you have huge teams of different specialized people that have different functions.

And I think when I hear a lot of people talk about these models, they kind of talk about it as like, oh, AI will just get smart and do all of this. And I don't think that's the way this evolves.

The same way it's not just like, hire 100,000 people and now you've built Walmart. It's like so much of it is like

how you, yeah, three million, yeah, three million. Actually, yeah, how you organize all of these.
And I think that will be like one of the really interesting problems for these.

Yeah, I'm seeing that a lot in the context of both AI-driven roll-ups as well as this company Brainco that I helped get up and running,

where a lot of the AI implementation issues are around people management and workflow optimization.

It's much less about can you build the AI and much more, how do you actually change the organization to be able to adopt it properly?

So, yeah, and we're starting to work with a lot of private equity firms, and I think it's interesting to start to see how they're thinking about that.

Because I think that will be a really interesting space.

Awesome. Thanks, Gabe.
Thanks for coming on. Thanks so much for having me.

Find us on Twitter at no priors pod. Subscribe to our YouTube channel if you want to see our faces.
Follow the show on Apple Podcasts, Spotify, or wherever you listen.

That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.