From Job Displacement to AI Trainers, Brendan Foody on Work in the AI Age
Show Notes:
0:00 Introduction
0:16 Building Mercor
3:00 Identifying outlier talent with AI
9:07 How AI is reshaping the workforce: job displacement & evolution
11:18 What skills should we invest in now?
12:18 Verifiability
13:36 Evaluating models
16:07 What should kids learn today?
17:05 Evaluating taste in talent assessments
18:45 Future of data collection
26:07 Humans’ role in the AI economy
28:53 AI as a contributor vs. a manager
33:03 Mercor’s goals
34:50 Evolution of labor markets
36:00 Hiring advice
Press play and read along
Transcript
Speaker 2 Hi listeners and welcome to No Priors. Today we're chatting with Brendan Foody, co-founder and CEO of Mercor, the company that recruits people to train AI models.
Speaker 2 Mercor was founded in 2023 by three college dropouts and Thiel Fellows. Since then, they've raised $100 million, surpassed $100 million in revenue run rate, and are working with the top AI labs.
Speaker 2 Today, we're talking about where the data for foundation model training will come from next, evaluations for state-of-the-art models and the future of labor markets. Brendan, welcome to No Priors.
Speaker 2 Brendan, thanks so much for doing this.
Speaker 1 Yeah, thanks for having me. Excited to be here.
Speaker 2
So you guys have had a wild last six months or so. There's huge traction in the company.
Can you just talk a little bit about what Mercor does?
Speaker 1 Yeah, so at a high level, we train models that predict how well someone will perform on a job better than a human can.
Speaker 1 So similar to how a human would review a resume, conduct an interview, and decide who to hire, we automate all of those processes with LLMs.
Speaker 1 And it's so effective that it's used by all of the top AI labs to hire thousands of people who train the next generation of models.
Speaker 2 What are the skills and job descriptions that the labs are looking for right now?
Speaker 1 It's really everything that's economically valuable, because reinforcement learning is becoming so effective that once you create evals, the models can learn from them and
Speaker 1 improve their capabilities. And so for everything that we want LLMs to be good at, we need evals for those things.
Speaker 1 And it ranges from consulting to software engineers, all the way to hobbyists and video games and everything that you can imagine under the sun.
Speaker 1 And it's really whatever capabilities you're seeing the foundation model companies invest in, or even application layer companies invest in, the evals are upstream of all of that.
Speaker 3 And are you also helping companies outside of the core foundation models with a similar type of hiring? Or is it mainly just focused on AI models right now?
Speaker 1 Yeah. So actually, when we started the business, it was totally unrelated to human data.
Speaker 1 It was just that we saw that there were phenomenally talented people all around the world that weren't getting opportunities, and we could apply LLMs to make that process of finding them jobs more efficient.
Speaker 1 And then we realized after
Speaker 1 meeting a couple of customers in the market that there was just this huge vacuum because of the transition in the human data market. It used to be this crowdsourcing problem of how do you get a bunch of low- and medium-skilled people that are writing barely grammatically correct sentences for the early versions of ChatGPT.
Speaker 1 And it was transitioning towards this vetting problem of how do you find some of the most capable people in the world that can work directly with researchers to push the frontier of model capabilities.
Speaker 1 But we've still kept that core DNA of hiring people for roles, human data and otherwise. And a lot of our customers hire for both.
Speaker 3 Do you think all of hiring eventually moves to these AI systems assessing people, or at least all sorts of knowledge work?
Speaker 1 I think certainly, because we're already seeing on most of our evals that models are better than human hiring managers at assessing talent, and it's still like the very early innings.
Speaker 1 And so I think we'll get to a point where it'll almost be irrational to not listen to the model, right? Where people trust the model's recommendation.
Speaker 1 And like maybe for legal reasons, we'll still have the human pressing the button and making the final sign-off.
Speaker 1 But where we just trust the model's recommendations on who should be doing a given task or job more than we trust a human's.
Speaker 3
I guess in any field, people say that there's 10x people. There's 10x coders who are way more productive than the average coder.
There's 10x physicians or investors or you name it.
Speaker 3 Do you see that in terms of the output of your models? In other words, are you able to identify people who are outliers?
Speaker 1 Totally. This is one of the most fascinating things is that
Speaker 1 the power law nature of knowledge work frames the importance of performance prediction.
Speaker 1 Imagine if you can understand, like, the kinds of engineers on an engineering team that are going to perform in the 90th percentile, right?
Speaker 1 Or even if you could say, I know that this person that costs half as much is going to perform in the top quartile.
Speaker 1 It frames like how you think about the value that we create for customers and how you think about the long-term economics of the business.
Speaker 1 And it all ties back to how do you measure the customer outcomes and really go on them.
Speaker 3 And is it a power law, or what sort of distribution is it? Because people always talk about human performance as a bell curve.
Speaker 3 Do you think that's actually true or do you think that's the wrong way to interpret human performance relative to knowledge work?
Speaker 1 It's very industry by industry, right? Like for you in investing, right? It's the most power law thing imaginable, where it's just the top handful of companies each decade
Speaker 1 that matter such a disproportionate amount, and it's the investors that went in on those. Versus if you're hiring, like, factory workers, right? It's a much more commoditized skill set.
Speaker 1 There is a lot less of a difference. And I think like software engineering is somewhere in between.
Speaker 1 It's definitely very power law, but I don't think it's as power law as, say, like the handful of best investors in the world.
Speaker 2 Do you have a prediction, either because of the distribution of skill level or the measurability, for where you should expect that models are better at evaluation or identification of talent beyond
Speaker 2 human data first?
Speaker 1 Yeah, so it's really everything that you can measure with text. The models are really good at.
Speaker 1 Like if you can ask questions in an interview and read through the transcript, the models are superhuman at that
Speaker 1 across many more domains than one would think. It's more domain-agnostic than I would have initially anticipated.
Speaker 1 I think the things where models are going to be slower is on the multimodal signals and understanding like how passionate is this person about what they're working on, right?
Speaker 1 Like how persuasive are they or good at sales? And those capabilities will come, but they'll just take a little bit more time.
Speaker 1 So that's my mental model for thinking about it right now.
Speaker 2 Right.
Speaker 2 So, like, if I'm interviewing a candidate from one of our companies and they are saying the right words about, you know, motivation level, but I don't believe it, like, that might be a next-level signal.
Speaker 2 If I have any predictive power here.
Speaker 1 Totally, totally, exactly. The other thing is that the models are way better at high-volume processes.
Speaker 1 And an example is, say you're assessing 20 people for the same job, and you hire those people and see how they perform.
Speaker 1 It's very easy to attribute features of each person's background to how they perform, right? It's sort of the stack ranking.
Speaker 1 We can understand, like, this person had this nuance in their interview, or this person had this nuance in their resume, and that was the thing that explained how well they performed on the job.
Speaker 1 Versus, if those 20 people are performing 20 different jobs, then it's just this like mess of figuring out like what is causing what things to happen.
Speaker 1 It's way more difficult to understand what features are actually driving signal. And so I think it'll be those higher volume processes that also get automated first.
Speaker 2 Is there anything that
Speaker 2 surprises you about like basically the discovered features in terms of, I don't know, any domain that you are working on today that identifies amazing talent?
Speaker 1 That's a very good question.
Speaker 2 Or maybe in engineering, because that's relevant for many of our listeners.
Speaker 1 Yeah, I think that one of the really interesting things for engineering is that there's so much signal about a lot of the best engineers online that I don't think people properly tap into, right?
Speaker 1 It's everything ranging from their GitHubs to the personal projects on their website to the blog posts that they wrote during college.
Speaker 1 It's just that it's bottlenecked by manual processes. The hiring managers don't have time to read through all this stuff, right?
Speaker 1 They don't have time to, or with designers, they don't have time to consider every proposal or
Speaker 1 every image from someone's Dribbble profile before doing their top-of-funnel interviews.
Speaker 1 And so I think one of the things where people are under-indexing on signal the most is the things that can be found online.
Speaker 1 But then a lot of the things that can be indexed on during an interview, like how passionate is this person? Does this person have the skills that it would require for the job?
Speaker 1 I think humans are relatively good at. At least
Speaker 1 they're a little bit more adept right now.
Speaker 3 Are there hidden signals for other types of domains where there's less online work? An example of that would be physicians, lawyers. There's a lot of other professions where
Speaker 1 there's all sorts of these hidden signals. Like
Speaker 1 one interesting one we've seen in the past is that people who are based internationally but study abroad in a Western country tend to work much more collaboratively or communicate better with people.
Speaker 1 And it's like they're the kinds of signals that make sense when you look backwards and evaluate them, but are hard for like a human without having full context of like everything happening in the market to really understand and appreciate.
Speaker 1 And there's often like one of the most important things, as you can imagine, is just how intrinsically motivated and passionate are people about a domain.
Speaker 1 And so you're looking for signals, not just on their resume and in their interviews but also online, of what indicates this thing, right?
Speaker 1 Like, how do we, and it pertains not just to who you hire, but also what those people should be working on, right?
Speaker 1 Imagine the nuance between hiring a biology PhD to work on like biology problems versus hiring the person who wrote their thesis on drug discovery to write like problems and like come up with innovative solutions contextual to their thesis.
Speaker 1 And there's just so much inefficiency with the way that we do matching and the way we use all those signals right now.
Speaker 3 So you're evaluating people. Are you also doing evaluations of the models relative to the people?
Speaker 1 Yeah, yeah, of course.
Speaker 3 And then when, or what is your view in terms of the proportion of people who eventually get displaced by these models?
Speaker 3 In other words, if you can tell the relative performance and you can look at relative output, how do you start thinking about either displacement or augmentation or other aspects like that?
Speaker 1 I think displacement in a lot of roles is going to happen very quickly, and it's going to be very painful
Speaker 1 and a large political problem. Like I think we're going to have a big populist movement around this and all the displacement that's going to happen.
Speaker 1 But one of the most important problems in the economy is figuring out how to respond to that, right?
Speaker 1 Like how do we figure out what everyone who's working in customer support or recruiting should be doing in a few years. How do we reallocate wealth
Speaker 1 once we approach super intelligence?
Speaker 1 Especially if the value and gains of that are more of the power law distribution.
Speaker 1 And so I spend a lot of time thinking about like how that's going to play out.
Speaker 1 And I think it's really at the heart of it.
Speaker 3 What do you think happens eventually? X percent of people get displaced from, like, white-collar work.
Speaker 3 What do you think they do?
Speaker 1
I think there's going to be a lot more of the physical world. I think that there's also going to be a lot of, like, niche...
What does the physical world mean?
Speaker 1 Well, it could be everything ranging from people that are creating robotics data to people that are waiters at restaurants or
Speaker 1 are just like therapists because people want like human interaction.
Speaker 1 Like whatever that looks like, I think that automation in the physical world is going to happen a lot slower than what's happening in the digital world, just because of so many of the self-reinforcing gains and a lot of self-improvement that can happen in the virtual world, but not the physical.
Speaker 2 Do you have a point of view on like what types of skills, knowledge, reasoning are worth investing in now as a human expecting to stay economically valuable?
Speaker 1 So, Sam Altman said this thing when someone asked him this, about how people should optimize for just being very versatile and able to learn quickly and change what they do.
Speaker 1 And I think that resonates a lot because there's so many things that one would think the models aren't good at, that they get very good at very fast, that I almost think you just need to be able to like navigate that quickly.
Speaker 3 What are the characteristics of those things that you think models will learn the fastest? Like if you were to say, here is a heuristic,
Speaker 3 what do you think are the components of that model?
Speaker 1 If it's verifiable. For things like math or code that are verifiable, they will get solved very quickly.
Speaker 3 So you want a feedback loop or utility function that the model is optimizing against.
Speaker 1 For things that aren't verifiable, like maybe it's your taste as a founder, right? That's much harder to automate.
Speaker 1 And it's also a very sparse signal because, yeah, there's just not that much data on it.
Speaker 2 This is a pretty fundamental research question right now, but like, what do you think are the most interesting ideas about verifiability beyond code and math?
Speaker 1 Well, I think that there's ways that you can
Speaker 1 have certain auto-graders, or criteria that humans can apply,
Speaker 1 or that models can apply. And I'm very interested in how that will play out over time.
Speaker 1 And there's obviously a lot of other domains where models will take unstructured data, they'll structure it, they'll figure out how to verify it. And it's very like industry by industry.
Speaker 1 I think it's going to be hard for one lab to do everything there.
Speaker 1 And there's going to be
Speaker 1 more specialization as we progress further and further and marginal gains in each industry become more challenging.
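To make the verifiable/non-verifiable distinction above concrete, here is a minimal sketch (illustrative only, not Mercor's tooling; the graders are toy stand-ins for real test suites or LLM and human judges). For math or code, a program can check the output automatically, which is exactly the feedback loop that taste-like domains lack.

```python
# Toy auto-graders illustrating "verifiable" domains (illustrative only).

def grade_math(answer: str, expected: str = "42") -> bool:
    """Math is verifiable: the answer either matches the known result or it doesn't."""
    return answer.strip() == expected

def grade_code(submission: str) -> bool:
    """Code is verifiable: execute the submission and check it against a unit test."""
    namespace: dict = {}
    try:
        exec(submission, namespace)            # run the submitted solution
        return namespace["add"](2, 3) == 5     # the unit test acts as the auto-grader
    except Exception:
        return False

if __name__ == "__main__":
    print(grade_math("42"))                                   # True
    print(grade_code("def add(a, b):\n    return a + b"))     # True
    # There is no equivalent checkable oracle for "does this founder have taste?",
    # which is why non-verifiable domains are slower to automate.
```

In non-verifiable domains, the closest substitute is the rubric-style criteria Brendan describes, applied by human graders or by model judges.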
Speaker 2 How much do you believe in generalization from the code and math type reasoning and intelligence? Like, if I'm this much better at proof math, does it make me funny eventually?
Speaker 2 Me being the intelligence.
Speaker 1 Yeah, I generally believe in it.
Speaker 1 But to a certain extent, you still need a reasonable amount of data for the new domain to kickstart it.
Speaker 1 But there's going to be a lot of transfer learning.
Speaker 3 I think it's very funny when Sarah does proofs. So I think it all transfers.
Speaker 1 She gets bad proofs.
Speaker 2 I actually think being bad at proofs is funny.
Speaker 2 Okay, let's talk about evals because you're working on the bleeding edge of model capability.
Speaker 2 There has been this whole sense of what people call evaluation crisis around like the models are so good and they're somewhat indistinguishable at the fringe of capability today that we don't know how to test them, you know, ignoring all the issues with
Speaker 2 people
Speaker 2 gaming the benchmarks, right? What do you think are the right ideas about evaluating models, especially as they become superhuman?
Speaker 1 Well, I think one of the most important things is that a lot of the evals historically have been for like zero shot of a model or like a test question, right? That might be academic.
Speaker 1 When the thing that we actually need to eval is like, what's economically valuable work, right? When a software engineer goes to their job, it's so much more than writing a PR.
Speaker 1 It's like coordinating with all of the relevant parties to like understand what does like the product manager want and how does that fit into the priorities of each team and how does that all translate to like the end output of work.
Speaker 1 And so I think we're going to see an immense amount of eval creation for like agents.
Speaker 1 And that is the largest barrier to automating most knowledge work in the economy.
Speaker 2 Where should people start? Like that feels not terribly generalizable.
Speaker 2 So Sierra has something called Tau Bench that I think people are trying and there are other efforts here, but it is perhaps like more specific to a certain function.
Speaker 1 Yeah, I think that people will need to have these by industry and they should probably start with tasks that are more homogenous, right?
Speaker 1 Like for customer support tickets, I think that's a great example, because there's one interface that the customer support agent interacts with.
Speaker 1 Maybe they call a couple of tools like accessing the database or reading through the documentation. But it's a relatively like homogeneous uniform task.
Speaker 1 I think the things that are going to be more challenging, but also
Speaker 1 in many cases more valuable, are creating evals for these very, very diverse tasks, right? Of all the things that go into making a good software engineer. And that's going to be really hard to do.
Speaker 1 Like, I think it's going to be a years-long build out for even some of the verifiable domains, because there's so much that goes into a good software engineer of like, how do they have taste for like, you know, what is the right way to approach a problem or what are the products that people really enjoy using?
Speaker 1 And I'm really excited for that.
Speaker 3 So if you were to counsel people with young kids, say your child is, I don't know, five to 10.
Speaker 1 Yeah.
Speaker 3 Should their kids learn computer science?
Speaker 1 I would probably not push them towards teaching their kids computer science, but I'm not totally against it.
Speaker 1 I would encourage them to just find something that's intellectually stimulating that they're really passionate about, where they can learn general reasoning capabilities.
Speaker 1 And those like reasoning capabilities will probably be very valuable and cross-applicable. I like always loved building companies growing up and like hustling and doing small things like that.
Speaker 1 And I think that is something that could be helpful. But I am skeptical that like the really valuable thing is just people who can code in five years.
Speaker 1 I think it's much more likely like the people that have these contrarian ideas around what's missing in markets
Speaker 1 and have the taste of what features and nuances need to go into solving that problem.
Speaker 2 You said taste a few times. Are there signals of taste that you feel like you can discover in any domain?
Speaker 1 Yeah, absolutely. I mean, I think that oftentimes you just want to see the softer signals of how people think about certain problems.
Speaker 1 And certain people have intuitions,
Speaker 1 whether it be the way they approach a problem or, if they're looking at different products, how they notice nuances. Yeah, it's very contextual to the industry, but it's important to measure and score it.
Speaker 2 What's the positive feedback loop here?
Speaker 1 We've done a variety of things, but oftentimes we will give people a problem that as closely as possible mirrors what they would solve on the job, and then we see how they compare to other people. And so that helps with scoring it.
Speaker 2 Do you ask for some further thought process as part of that? For example, it's almost like looking at code reviews or other sorts of intermediate work along the way, relative to something.
Speaker 1 We definitely do. One thing I've realized about talent assessment is that a lot of people focus too much on the proxy for what they care about rather than the thing they actually care about.
Speaker 1 And so ideally, you want to measure the thing that you actually care about.
Speaker 1 So if it's that person building an MVP of the product, ideally you have an interview that's like a scoped-down version of doing that.
Speaker 1 The place where you need to use proxies is when it's like a longer horizon task where you just want to structure the proxy to get as much signal as possible.
Speaker 1 And so that's sort of how I think about talent assessment. Yeah.
Speaker 2 Can I ask this scale of impact question? So if I think about the very largest employers today, like let's call it like low single digit millions of employees.
Speaker 1 Yeah. Right.
Speaker 2 Or I don't know anything about contractors and Amazon workers and such, but
Speaker 2 how many people do you think like will end up doing data collection?
Speaker 1 I think it's a huge volume. I think the reason is that it all comes down to like creating evals for everything in the economy.
Speaker 1 I think part of that will be current employees of businesses that are creating evals for that business so that those agents can learn what good looks like. Part of that will be
Speaker 1 hiring out contractors through a marketplace to help build out those evals. But it would not surprise me if that becomes the most common knowledge work job in the world.
Speaker 3 How long does that last? So effectively, people are being brought on to displace themselves.
Speaker 1 This is true.
Speaker 3 Is that a six-month cycle? Is it a two-year cycle? Like, what is the length of time at which people have relevancy relative to some of these tasks?
Speaker 1 There's always, like, a frontier. So I think the...
Speaker 3 Unless it becomes superhuman, right?
Speaker 1 Yeah, unless it becomes superhuman. It's almost like time to superhuman. But I had an interesting conversation, which is that you don't even know that you have superintelligence without having evals for everything, because you sort of need to understand what is the human baseline and what is good. It's grounded in this understanding of human behavior.
Speaker 3 Yeah. A friend of mine basically believes that, you know, the Nyquist theorem, which is basically, if you're sampling a signal, you need to be able to sample it at twice the frequency in order to be able to actually extrapolate what it is.
Speaker 3 Otherwise, you're not sampling richly enough to know.
Speaker 3 And so he views that there's some version of that for intelligence.
Speaker 3 Like you can tell somebody smarter than you, but you don't know how much smarter because you aren't capable of sampling rapidly enough to understand it.
Speaker 3 I mean, so I always wonder about that in the context of super intelligence or superhuman capabilities in terms of how smart can you actually be since it's hard to bootstrap into the eval.
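For reference, the sampling condition behind that analogy is the Nyquist–Shannon criterion: a signal whose highest frequency component is $f_{\max}$ can only be reconstructed exactly from samples taken at a rate
$$ f_s > 2 f_{\max}, $$
and sampling below that rate aliases the signal. The analogy maps this onto an evaluator who cannot "sample" a smarter intelligence finely enough to measure how much smarter it actually is.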
Speaker 1 Well, so I think like when you take it to the limit and you have super intelligence, what you're saying makes a lot of sense.
Speaker 1 But another way I think about it is that if we classify knowledge work into two categories, one is solving an end task, where it's sort of a variable cost, because you need to do it repeatedly.
Speaker 1 And the other is creating an eval to teach a model how to solve that task, which is like a fixed cost that you do one time.
Speaker 1 It does seem structurally more efficient for work to trend away from the variable cost of like doing it repeatedly towards this fixed cost of how do we build out the evals and the processes for models to do this themselves.
Speaker 1 That said, it all comes down to like, how fast are we approaching super intelligence, right?
Speaker 1 Like, if we, if the models are just like getting that good that fast, then sure, I don't think we would need humans creating evals very much, but I also then don't think we would need humans in many other parts of the economy.
Speaker 1 And so you sort of need to be thoughtful about the ratio of that.
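A rough way to write down the fixed-versus-variable trade-off being described (illustrative symbols, not figures from the conversation): if a task would otherwise be done by humans $N$ times at a recurring cost $c_{\text{task}}$ each, then paying the one-time cost $C_{\text{eval}}$ of building the eval that teaches a model the task is the structurally more efficient path whenever
$$ C_{\text{eval}} < N \cdot c_{\text{task}}. $$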
Speaker 3 Does that create an asymptote in terms of how good these things get, or do they start creating their own evals over time?
Speaker 1 I think that they'll play a role in creating their own evals.
Speaker 3 Which bootstraps?
Speaker 1 Yeah, where they might come up with certain criteria for what a good response looks like, and humans validate that criteria.
Speaker 1 However, I think you often need to ground this in like the experts in that particular domain. Sure.
Speaker 3 But I'm just thinking of, like, Med-PaLM or something, right? Where Med-PaLM 2, where the output of the model was better than the average physician.
Speaker 1 It was basically like a health model that Google built.
Speaker 3 And they would use physician panels to rate outputs of the model versus individual physicians, and the model did better by far than individual physicians.
Speaker 3 At some point, it should do better than the physician panels, where feedback from the physician panel should make the model worse, right?
Speaker 3 In other words, if you just RL'd it off of individual physicians, the model already was going to get worse.
Speaker 3 And so there's a little bit of this question of how much, when does human scoring create worse outcomes because the humans aren't as good at a task?
Speaker 1 Well, I think the models will be able to... delineate between the valuable human knowledge and the human knowledge that's not valuable.
Speaker 1 And that maybe you have doctors that create like a bunch of evals for this particular task and the model realizes like, wow, like I see the mistake that the doctor made on these particular tasks, but I'm going to ignore them.
Speaker 1 And like, here are the things that seem insightful or the things that I can learn.
Speaker 1 And the models will,
Speaker 1 yeah, use that data and value that data immensely.
Speaker 1 The other thing I'll say is that I think it's easy to look at these evals and the rate of improvement on the evals and just like think we're a lot closer to super intelligence than we are.
Speaker 1 But the truth of the matter is there is a lot between being really good at SWE-bench and, like, replacing software engineering, right?
Speaker 1
There's like all the coordination problems that we talked about. There's like so much else that goes into that.
And I think that we're just going to need a lot of evals for tool use.
Speaker 1 We're going to need a lot of evals for agents. And that build out is not, is going to be a lot longer than a couple year time horizon.
Speaker 2 How do you think about incentives for all of these like expert knowledge workers?
Speaker 2 Because the opportunity cost for a great software engineer with, like, taste and architectural understanding is a great job at Mercor or another interesting tech company, and the
Speaker 2 geo-arbitrage you get on just basic knowledge work does not exist, you know, as the skill level increases over time.
Speaker 2 That's true in coding, it's true for physicians, it's true for finance people, lots of areas where you might want evals and labels.
Speaker 1 Totally. I think that it'll definitely become more power law over time, which means that like the best people are going to, of course, make an incredible amount of money.
Speaker 2 Do you think it's more just turn up the dial on like what any piece of information is worth from the higher skilled workers?
Speaker 1 Yeah, yeah.
Speaker 1 But you also want the evals at the frontier of what the models can't do.
Speaker 1 And so it might be that for a very well-scoped problem, like answering a medical question that someone has, you might need to get the, you know, world-class doctor that is
Speaker 1 one of the handful of people that's able to be better than the model at that very well-scoped problem. But for the broader agentic problem of, like, how do we, you know,
Speaker 1 talk about this case in a way that the patient is receptive to? How do we then, you know, coordinate with this set of tools to
Speaker 1 help complete the diagnosis and send whatever emails at X time?
Speaker 1 Like, I think for those kinds of things, I still expect that the bulk of the bell curve, people that are closer to the mean of the distribution will be able to contribute for a longer period of time.
Speaker 3 What do you think is the biggest shift that nobody's really anticipating that's coming? It could be domain specific, it could be broader.
Speaker 1 Well, so maybe I'll answer this in two parts.
Speaker 1 When I think about "nobody," it feels like the bulk of the country is not really coming to grips with how fast jobs will be displaced. And that just feels like a big problem, as I said before.
Speaker 1 And I think that we need to stay very proactive
Speaker 1 as a government, as an economy, et cetera.
Speaker 3 Are there certain areas where you're already seeing large-scale job displacement that you don't think is being reported on?
Speaker 1 It's definitely being reported on in customer support,
Speaker 1 in recruiting. I think one of the challenges is that a lot of this happens at economic contractions when people get more efficient, get more focused on bottom line.
Speaker 1 And so I think that, yeah, a lot of it hasn't happened yet, but it's going to happen imminently. And then, in terms of things that maybe no one even in San Francisco is thinking about,
Speaker 1 which is another interesting part of that problem: these agentic evals for non-verifiable domains are under-indexed on significantly. Another thing is that people
Speaker 1 in San Francisco have a tendency to not think critically about the role humans will play in the economy because they're so focused on automating humans.
Speaker 1 And so I think that it's important to think more about that problem.
Speaker 1 One thing that I've thought about is that
Speaker 1 ideally, models should help us to figure that out over time, right? Like what are the things that people are passionate about? What motivates them?
Speaker 1
And maybe it doesn't need to be an economically valuable thing. Maybe it's just like a certain kind of project that they like working on.
And I think that people aren't
Speaker 1 indexed enough on how humans will fit into the economy in 10 years.
Speaker 3 You know, one thing that I feel that I've really
Speaker 3 misunderstood or didn't quite understand the scope of was the degree to which we effectively had different forms of UBI or universal basic income in different sectors of the economy.
Speaker 3 Government is a clear example where there's enormous waste, fraud, grift, et cetera, happening.
Speaker 3 Parts of academia, if you just look at the growth of the bureaucracy relative to the actual student body or faculty; big tech, if you look at some of the size of it; you know, basically it shows that a lot of these things were effectively UBI.
Speaker 3 And so to some extent, one could argue that parts of our economy are already experiencing what you're saying in terms of there's
Speaker 3 high-paying jobs that may or may not be super productive on a relative basis.
Speaker 3 And so the question is, is that something that we actually embrace as a society, given some of these changes in displacement? And if so, where does that economic surplus come from?
Speaker 1 Yeah, it's interesting. I think that as we have better analytics around the value of employees, it seems intuitive that these companies will become,
Speaker 1 you know, start doing more layoffs, more cuts, et cetera.
Speaker 3 Do you think those evals become illegal at some point? Because it feels like that happened a little bit with certain aspects of merit or merit-based testing for different disciplines or fields.
Speaker 3 It happened with the government in the 70s where they removed it as a criteria.
Speaker 3 I'm just wondering if that becomes something that more generally people may not want to adopt because it exposes things, or do you think it's something that is inevitable economically?
Speaker 1 There's definitely going to be pushback, but I think it's inevitable economically, because it's hard to regulate and just so
Speaker 1 strongly valuable to companies that they'll move towards it.
Speaker 2 I think it depends on what segments of the economy, because some of these are not economically driven already. They're just not efficient as sectors.
Speaker 2 But if you look at healthcare or education, everybody's seen this chart that shows a bunch of industries that have some measure of output per dollar spent.
Speaker 2 And you have increasing spend on healthcare and education and no improved output.
Speaker 2 And that's happened for a long time, while there's been an increase in productivity in many other sectors. And the answer is there's no economic pressure, actually.
Speaker 3 Sure, it's regulated versus unregulated sectors effectively. And the regulation is what causes the divorce from economics.
Speaker 1 Yeah.
Speaker 1 Also, one thing that I think is very interesting is that a lot of people are in the mindset of AI being really good as an independent contributor when actually it may soon become much better at being a manager, right?
Speaker 1 And, like, taking a large problem, breaking it down, figuring out how to performance-manage people on how they're doing.
Speaker 1 And this ties into your point around like, what should we do with all of those unproductive employees?
Speaker 1 Because if we have like a ruthlessly rational agent that is making the decision there, it is probably going to be very different than a lot of the decisions that have been made historically.
Speaker 2 One of our companies asked recently what I would expect an assistant to do that it doesn't do today.
Speaker 1 Right.
Speaker 2 And I think the biggest thing is like, you know, if I give it enough context and some objectives that I'm trying to achieve, I'm not like a particularly organized person.
Speaker 2 I have a lot of output, I think, all things relative, but, you know, is it like perfectly prioritized and tasked out and sequenced so I'm not bottlenecked on a particular thing? No, right.
Speaker 2 And I would absolutely expect that the assistant can do that for me.
Speaker 1 Totally. Well, and it goes to the point earlier, right? Which is like.
Speaker 2 Tell me, tell me what to do for the next three years.
Speaker 1
We have these models that are, like, incredibly good at math, right? Like we give them a test and they can ace the test, but they still can't do basic personal assistant work.
Right.
Speaker 1 And I think it goes to show that there's still a lot of like research and product to be built out.
Speaker 1 And like, how do we actually bridge the gap with what's economically valuable to complete that end-to-end job that like you're willing to pay a human salary for?
Speaker 3 Do you think the models are good enough for that, and there's just incremental engineering work to make it better?
Speaker 3 So do we actually have model capabilities that you think would allow us to build certain types of these agentic systems, versus we need ones that are proactive too?
Speaker 1 Actually, maybe, let me put it this way. I think with a small amount of evals for agents in various categories, the base model has like all the reasoning capabilities.
Speaker 1 And the reason you still need those evals is the models need to understand when they should be using tools in certain ways. They need to understand how to synthesize information from those tools.
Speaker 1 But it's not a reasoning problem. It's like much more this problem of like learning each company's knowledge base and like what good looks like in that role.
Speaker 1 And so there is going to be some like post-training, and I'm very bullish on RFT and everything that's going to mean.
Speaker 3 Can you say more about RFT and explain it for our audience?
Speaker 1 Yeah.
Speaker 1 So basically, everyone used to talk about fine-tuning in the context of SFT, supervised fine-tuning, where you would have inputs and outputs for a model, and the model would learn from those input-output pairs.
Speaker 1 But the main issue is that supervised fine-tuning customization never really took off because it wasn't very data-efficient.
Speaker 1 Like companies would create a few hundred pairs and eventually try to scale up to tens of thousands or hundreds of thousands of SFT pairs, but oftentimes wouldn't be able to get a lot of the capabilities that they were looking for.
Speaker 1 Whereas in reinforcement fine-tuning, you instead define the outcome that you care about.
Speaker 1 So in Sierra's case, like I was talking with them about how they define what a good customer support response would look like.
Speaker 1 In our case, we define what are the key things that you should identify as a characteristic of this candidate, whether it be that they're passionate during their interview, they demonstrate XYZ domain knowledge, or they worked on this side project that demonstrated that skill.
Speaker 1 And then you reward the model for identifying that. So you set the solution, and then the model can learn in that environment how to get really good at it.
Speaker 1 And the reason I'm so optimistic about it taking off is that it's like profoundly data efficient, right?
Speaker 1 And it finally makes sense to customize models at the application layer.
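A minimal sketch of the SFT-versus-RFT contrast described above, assuming nothing about any particular vendor's fine-tuning API; the rubric and the keyword-matching grader are toy stand-ins for what would really be an LLM or human judge inside an RL loop.

```python
# Illustrative contrast between SFT data and an RFT-style reward (toy example).

# Supervised fine-tuning (SFT): learn from explicit input -> output pairs.
sft_pairs = [
    {
        "input": "Interview transcript: candidate describes rebuilding a billing system...",
        "output": "Hire signal: strong ownership, shipped production infrastructure.",
    },
    # ...companies often needed tens of thousands of these to move capabilities.
]

# Reinforcement fine-tuning (RFT): instead of target outputs, define the outcome
# you care about as graded criteria, and reward the model for satisfying them.
RUBRIC = [
    "passion",          # shows intrinsic motivation during the interview
    "domain knowledge", # demonstrates the specific expertise the role needs
    "side project",     # has independent work evidencing the skill
]

def reward(model_assessment: str) -> float:
    """Fraction of rubric criteria the model's assessment addresses (toy grader)."""
    text = model_assessment.lower()
    hits = sum(1 for criterion in RUBRIC if criterion in text)
    return hits / len(RUBRIC)

if __name__ == "__main__":
    # Two candidate assessments a model might produce; an RL loop would
    # reinforce the higher-reward behavior.
    weak = "The candidate seemed fine."
    strong = ("Clear passion for the field, deep domain knowledge in payments, "
              "and a side project that exercises the exact skill the role needs.")
    print(reward(weak), reward(strong))  # 0.0 vs. 1.0
```

The data-efficiency point is that you maintain a handful of criteria and let the model explore against them, rather than authoring tens of thousands of gold input-output pairs.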
Speaker 2 And profoundly data efficient is actually like hundreds to thousands of examples, like some tenable number for an enterprise or a medium-sized business to think about versus like, I don't know, a billion tokens.
Speaker 2 Yeah. Yeah.
Speaker 1
Yeah, exactly. And so it'll be, it'll be very cool.
I think we're going to have these agents that fill all roles that employees currently fill working alongside employees.
Speaker 1
Human employees will help create the evals. I also think that like contractors in our marketplace will play a large role in that.
It will just be this like huge build out of evals to
Speaker 1 create custom agents across every enterprise.
Speaker 2 What is most important for Mercor to get done in the next year or so?
Speaker 1 So, there's two things that we focus on as a business.
Speaker 1 And I think those will be most important for this year as well as for the next five years. The first is: how do we get all the smartest people in the world on our platform?
Speaker 1 And that ties into the supply side of our marketplace, the marketplace network effects, similar to an Uber or Airbnb.
Speaker 1 Because if we have the best candidates, then we're able to give them job opportunities and understand what they're looking for. The second thing is predicting job performance.
Speaker 2 Are you trying to offer anything that isn't comp?
Speaker 1 Yeah, we are.
Speaker 1 So, one of the things that we realized is that the average labor marketplace has a 50-to-one ratio of supply side relative to demand side, which means the average person that applies talks to their friend who also applied, and neither of them got jobs.
Speaker 1 And it's almost just this structural part of building labor marketplaces.
Speaker 1 The way to actually scale up the labor marketplace to have hundreds of millions of the smartest people in the world on the platform is to build all of these free tools, such as AI mock interviews, AI career advice,
Speaker 1 shareable profiles for people, all of the things that just create the most magical experience possible for consumers and give that away for free because it's powered by this monetization engine on the other side of the business.
Speaker 1 And so that's a very significant focus for us.
Speaker 2 I interrupted you. You were going to talk about what else was important.
Speaker 1
Yeah. It's performance predictions.
So we get all the data back from our customers of who's doing well for what reasons.
Speaker 1 And
Speaker 1 how can we learn from all of those insights to make better predictions around who we should be hiring in the future?
Speaker 1 And that's the data flywheel that you would find in many of the most prominent companies in the world.
Speaker 1 And I think that the marketplace network effect is the more obvious one when you look at the business, but I actually believe that the data flywheel will become more important over time based on a lot of the initial results that we're seeing.
Speaker 3 How do you view the labor markets evolving over the very long term?
Speaker 1 Well, I think that the largest inefficiency in the labor market is fragmentation, and that a candidate, wherever they are in the world, will apply to a dozen jobs, and a company in San Francisco will consider a fraction of a percent of people in the world, because it's all constrained by these manual processes for matching, right?
Speaker 1 Where they need to manually review every resume, conduct every interview, and decide who to hire.
Speaker 1 When you're able to solve this matching problem at the cost of software, it makes way for a global unified labor market that every candidate applies to and every company hires from.
Speaker 1 And I believe that that's not only the largest economic opportunity in the world, but also the most impactful one,
Speaker 1 insofar as you can find everyone the job that they're going to be passionate about and successful in.
Speaker 3 Would that include AI agents? In other words, the marketplace would be a hybrid of people and agents all competing for labor globally?
Speaker 1 I think so, because customers ultimately come with like a problem to be solved, right? And ideally, it's some coordination of how those two fit together.
Speaker 2 Given you spend all your time thinking about how to attract high skilled candidates and determine their effectiveness, like what advice would you have for
Speaker 2 people who are hiring in startups and scaling companies?
Speaker 1 Early on, it's hard to overstate the importance of talent density. And there's always a trade-off between hiring speed and hiring quality.
Speaker 1 And you should just, for those early employees, like always index on quality. Like you need to be patient and you need to make sure that people are extremely high caliber.
Speaker 1 When you're scaling up an org,
Speaker 1 you obviously don't want to drop those standards, but people need to be a lot more data-driven around what are the characteristics of people that actually drive the outcomes they care about.
Speaker 1 And it feels like where a lot of the problems happen is when that slips, when it's sort of like this vibes-based assessment that doesn't scale very well, where each hiring manager is doing it in a fragmented way.
Speaker 1 And it's hard to enforce those standards across the board. And so just being very disciplined around like, what are your hiring goals?
Speaker 1 What are the characteristics of people that you know are actually going to achieve the business outcomes you care about? And how do you measure those things is really important.
Speaker 3 I found that almost every great company either hires well, like what you're talking about, or fires well, which is sort of your phase two.
Speaker 3 But I think often they do one of those things really well early.
Speaker 3
For some reason, most people don't seem to get both right early on. I don't know why it is.
I think it's almost like a founder bias or something like that.
Speaker 3 And then I feel like over time, hopefully, they pivot into both. Google was a good example of an organization that would always hire well but couldn't fire well.
Speaker 3 It took them a really, really long time to clean people out, years, like literally years. Interesting.
Speaker 3 Facebook, on the other hand, was kind of known for a more mixed early talent pool, but they were very good at removing early people who weren't performing.
Speaker 3 So I always thought that was kind of an interesting dichotomy between the two. And
Speaker 3 those were the rumors in the valley when each company was, you know, tens or low hundreds of people. I don't, you know, now obviously they're all very professionalized in terms of how they do both.
Speaker 1 They have their UBI.
Speaker 3 Yeah, exactly. So I thought that was kind of interesting.
Speaker 2 Yeah, I think it's just because I mostly think about engineering hiring and go-to-market hiring and investor hiring.
Speaker 2
They're all professions that have like some time scale of outcomes that isn't like an hour. Right.
And so I think you're always looking for a proxy of outcomes for these like longer outcome jobs. And
Speaker 2 I think there's like a really interesting question very related to evals and assessment of like, well, what are the proxies we're going to discover for each of these roles?
Speaker 2 Because I think it's a huge shortcut in hiring, hiring well, not necessarily firing well.
Speaker 2 If you can do references, if you can do work trials with engineers, like you actually know a lot in the first five days, 30 days of whether or not something's going to work out. Totally.
Speaker 2 And like, you know, I think we're always, I'm always looking for proxies for that.
Speaker 1 Yeah.
Speaker 1 And I think one of the like crazy things about the market is that any candidate that you do work trial with has probably done work trials with like a lot of other top companies in San Francisco, but you don't have any of the data on that.
Speaker 1 Right. And obviously there's some interesting data privacy and centralization questions, like companies want that to be their proprietary knowledge.
Speaker 1 But I think that market is going to trend towards becoming a lot more efficient over time, or even the references of people, right? Of like those that you don't hire.
Speaker 1 Theoretically, it's beneficial for the top companies to understand the reasons that
Speaker 1 other companies in different markets aren't hiring specific candidates, et cetera.
Speaker 2 What do you think companies that attempted some sort of
Speaker 2 like common generic evaluation, like the Hireds of the world in a previous generation, like got wrong, right?
Speaker 2 Because the theory of like, well, we should have a common application of some kind or shared assessment has existed, but not worked at scale or worked at quality.
Speaker 1 I think that LinkedIn centralizes and aggregates the very first layer of the application process: of like, what are the things that this person has done and like, who are they connected to?
Speaker 1 The challenge historically has been that the rest of the process to facilitate a transaction has not been possible to aggregate and automate.
Speaker 1 It wasn't possible to like actually record all of these interviews and like scalably conduct interviews of everyone.
Speaker 1 It wasn't possible to like, you know, get all of this like data and analyze it properly on like, what are the things that go into causing someone to perform well.
Speaker 1 And so I think there's just this huge "why now" that's enabled by LLMs becoming so capable so quickly.
Speaker 2 That makes sense.
Speaker 2 I think one of the theories that my partner Mike has is around the scalability of LLMs being able to interrogate humans, and the usefulness of that data in a bunch of different domains.
Speaker 2 And it would be great to see the aggregate of that for hiring.
Speaker 1 So my co-founders and I are all Thiel Fellows. And so we're very passionate about how we could apply a lens to help identify, like, the next Thiel Fellows.
Speaker 1 And so like I often wonder, imagine if you could have Peter Thiel as a heuristic interview everyone in the world when they're 18. Right.
Speaker 1 And like, and maybe he could go through and like meticulously spend time determining like, you know, who is actually going to be good at what job.
Speaker 1 Like I think we're approaching that world very quickly. It'll be fun to see how that impacts the labor market, the investing market, and everything else.
Speaker 3 That's really cool.
Speaker 2 Thanks for doing this, Brendan.
Speaker 1 Thanks for having me.
Speaker 2
Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces.
Follow the show on Apple Podcasts, Spotify, or wherever you listen.
Speaker 2 That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.