O3 and the Next Leap in Reasoning with OpenAI’s Eric Mitchell and Brandon McKinzie
Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @mckbrando | @ericmitchellai
Show Notes:
0:00 What is o3?
3:21 Reinforcement learning in o3
4:44 Unification of models
8:56 Why tool use helps test time scaling
11:10 Deep research
16:00 Future ways to interact with models
22:03 General purpose vs specialized models
25:30 Simulating AI interacting with the world
29:36 How will models advance?
Press play and read along
Transcript
Speaker 2 Hi, listeners, and welcome back to No Priors. Today I'm speaking with Brandon McKinzie and Eric Mitchell, two of the minds behind OpenAI's O3 model.
Speaker 2 O3 is the latest in the line of reasoning models from OpenAI, super powerful with the ability to figure out what tools to use and then use them across multi-step tasks.
Speaker 2 We'll talk about how it was made, what's next, and how to reason about reasoning. Brandon and Eric, welcome to No Priors.
Speaker 1 Thanks for having us. Yeah, thanks for having us.
Speaker 3 Do you mind walking us through O3, what's different about it,
Speaker 3 what it was in terms of a breakthrough, in terms of, like, you know, a focus on reasoning and adding memory and other things, versus a core foundation model, an LLM, and what that is.
Speaker 1 So O3 is like our most recent model in this O-series line of models that are focused on thinking carefully before they respond. And
Speaker 1 these models are in sort of some vaguely general sense smarter than like models that don't think before they respond, you know, similarly to humans.
Speaker 1 It's easier to be more accurate if you think before you respond. I think the thing that is really exciting about O3
Speaker 1 is that not only is it just smarter if you make, like, an apples-to-apples comparison to our previous O-series models, you know, it's just better at giving you correct answers to math problems or factual questions about the world or whatever.
Speaker 1 This is true and it's great and we, you know, will continue to train models that are smarter.
Speaker 1 But it's also very cool because it uses a lot of tools that
Speaker 1 enhance its ability to do things that are useful for you.
Speaker 1 So, yeah, like you can train a model that's really smart, but like if it can't browse the web and get up-to-date information, there's just a limitation on how much useful stuff that model can do for you.
Speaker 1 If the model can't actually write and execute code, there's just a limitation on, you know, the sorts of things that an LLM can do efficiently.
Speaker 1 Whereas like a relatively simple Python program can, you know, solve a particular problem very easily.
Speaker 1 So not only is the model on its own smarter than our previous O-series models, which is great, but it's also able to use all these tools that further enhance its abilities.
Speaker 1 And whether that's doing like research on something where you want up-to-date information, or you want the model to do some data analysis for you, or you want the model to be able to do the data analysis and then kind of review the results and adjust course as it sees fit, instead of you having to be so sort of prescriptive about like each step along the way, the model is sort of able to take these like high-level requests, like do some due diligence on this company and
Speaker 1 maybe run some reasonable like forecasting models on so-and-so thing. And then write a summary for me, the model will kind of like infer a reasonable set of actions to do on its own.
Speaker 1 So it gives you kind of like a higher level interface to doing some of these more complicated tasks.
Speaker 3 That makes sense. So it sounds like basically there's like a few different changes between your core sort of GPT models where now you have something that takes a pause to think about something.
Speaker 3 So at inference time, you know, there's more compute happening. And then also it can do sequential steps because it can kind of infer what those steps are and then go act on them.
Speaker 3 How did you build or train this differently from just a core foundation model? Or, you know, when you all did GPT-3.5 and 4 and all the various models that have come over time,
Speaker 3 what is different in terms of how you actually construct one of these?
Speaker 1 I guess the short answer is reinforcement learning is the biggest one. So yeah, rather than just having to predict the next token and some large pre-training corpus from
Speaker 1 everywhere, essentially. Now we have a more focused goal of the model solving very difficult tasks and taking as long as it needs to figure out the answers to those problems.
Speaker 1 Something that's like kind of magical from a user experience for me was we've in the past for our reasoning models talked a lot about test time scaling. And I think for a lot of problems
Speaker 1 without tools, test time scaling might occasionally work, but at some point the model is just kind of ranting in its internal chain of thought.
Speaker 1 And especially for like some visual perception ones, it knows that
Speaker 1 it's not able to see the thing that it needs and it just kind of like loses its mind and goes insane. And
Speaker 1 I think tool use is a really important component now to continuing this like test time scaling. And you can feel this when you're talking to O3.
Speaker 1 At least my impression when I first started using it was: the longer it thinks, I really get the impression that I'm going to get a better result. And you can kind of watch it do really intuitive things.
Speaker 1 And it's a very different experience, but being able to kind of trust that as you're waiting, like, it's worth the wait, and you're going to get a better result because of it.
Speaker 1 And the model's not just off doing some, you know, totally irrelevant thing.
Speaker 3 That's cool. I think in your original post about this, too, you all had a graph, which basically showed that you looked at how long it thought versus the accuracy of the result.
Speaker 3 And it was a really nice relationship. So clearly, you know, thinking more deeply about something really matters.
Speaker 3 And it seems like, in the long run, do you think there's just going to be a world where we have sort of a split or bifurcation between models, where some are sort of fast, cheap, efficient, and get certain basic tasks done?
Speaker 3 And then there's another model where you upload a legal M&A folder and it takes a day to think.
Speaker 3 And it's slow and expensive, but then it produces output that would take you a team of people
Speaker 3 a month to produce. Or how do you think about the world in terms of how all this is evolving or where it's heading?
Speaker 1 I think for us, unification of our models is something that Sam has talked about publicly that we have this big crazy model switcher in ChatGPT, and there are a lot of choices. And
Speaker 1 we have a model that might be good at any particular thing that a user might want to do, but that's not that helpful if it's not easy for the user to figure out, well, which model should I use for that task?
Speaker 1 And so, yeah, making the models better able, you know, making this experience more intuitive is definitely something that is like valuable and something we're interested in doing.
Speaker 1 And that applies to this
Speaker 1 question of like, you know, are we going to have like two models that people pick between or a zillion models that people pick between? Or do we put that decision inside the model?
Speaker 1 I think everyone is going to try stuff and figure out what works well for like the problems they're interested in and like the users that they have.
Speaker 1 But yeah, I mean, that question of, like, how do you make that sort of decision be as effective, accurate, and intuitive as possible is definitely top of mind.
Speaker 2 Is there a reason from a research perspective to combine reasoning with pre-training, or to try to
Speaker 2 have more control of this? Because if you just think about it from the product perspective of the end consumer dealing with ChatGPT,
Speaker 2 we won't get into the naming nonsense here, but they don't care.
Speaker 1 They just want the right answer and the amount of intelligence required to get there in as little time as possible. Right.
Speaker 1 The ideal situation is it's like intuitive that like how long should you have to wait? You should have to wait as long as it takes for the model to like give you a correct answer.
Speaker 1 And I hope we can get to a place where our models have a more precise understanding of their own level of uncertainty.
Speaker 1 Because if they already know the answer, they should just kind of tell you it. And if it takes them a day to actually figure it out, then they should take a day.
Speaker 1 But you should always have a sense of like it takes exactly as long as it needs to for that current model's intelligence. And I feel like we're on the right path for that.
Speaker 2 Yeah, I wonder if there isn't a bifurcation, though, between an end-user product and a developer product, right? Because there are lots of companies that use
Speaker 2 the APIs to all of these different models for very specific tasks. And then on some of them, they might even use open source models with really cheap inference with stuff that they control more.
Speaker 1 I hope you could just kind of tell the model, like, hey, this is an API use case. And yeah, you really can't be over there thinking for like 10 minutes, we got to get an answer to the user.
Speaker 1 It'd be great if the models kind of get to be more steerable like that as well. Yeah, I think it's just a general steerability question.
Speaker 1 Like at the end of the day, if the model's smart, you should be able to specify
Speaker 1 the context of your problem. And the model should do the right thing.
Speaker 1 There's going to be some limitations, because maybe just figuring out, given your situation, what the right thing to do is might require thinking in and of itself. So it's not that you can obviously do this perfectly.
Speaker 1 But yeah, pushing all the right parts of this into the model to make things easier for the user seems like a very good goal.
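As a minimal sketch of the steerability being described for API use cases, the snippet below contrasts capping how long the model thinks for a latency-sensitive call versus letting it think longer. It assumes the OpenAI Python SDK and a reasoning-effort setting of the kind exposed for some o-series models; the model ID and parameter names here are assumptions, not a confirmed recipe.

```python
# Hypothetical sketch: steering test-time compute for an API use case.
# Assumes the OpenAI Python SDK and an o-series model that accepts a reasoning
# effort setting; model IDs and parameter names may differ in practice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def quick_answer(question: str) -> str:
    """Latency-sensitive path: ask for minimal reasoning effort."""
    resp = client.chat.completions.create(
        model="o3-mini",              # assumed model ID
        reasoning_effort="low",       # assumed knob for "don't think for 10 minutes"
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content


def careful_answer(question: str) -> str:
    """Slow path: let the model spend more test-time compute."""
    resp = client.chat.completions.create(
        model="o3-mini",
        reasoning_effort="high",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content
```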
Speaker 2 Can I go back to something else you said?
Speaker 2 So the first guest we ever had on the podcast was actually Noam Brown.
Speaker 1 I've heard of him.
Speaker 2 You know, two-plus years ago. Yes. Hi, Noam.
Speaker 2 It'd be great to get some intuition from you guys for why tool use helps test time scaling work much better.
Speaker 1 I can give maybe very concrete cases for like the visual reasoning side of things.
Speaker 1 There's a lot of cases where, and back to also the model being able to estimate its own uncertainty, you'll give it some kind of question about an image and the model will very transparently tell you in a standard thought, like, I don't know, I can't really see the thing you're talking about very well.
Speaker 1 Or, like, it almost knows like that its vision is not very good. And
Speaker 1 but what's kind of magical is like when you give it access to a tool, it's like, okay, well, I got to figure something out.
Speaker 1 Uh, let's see if I can manipulate the image or crop around here or something like this. And what that means is that it's a much more productive use of tokens as it's doing that.
Speaker 1 And so your test time scaling slope, you know, goes from something like this to something much steeper. And we've seen exactly that.
Speaker 1 Like, the test time scaling slopes without tool use and with tool use, for visual reasoning specifically, are very noticeably different.
Speaker 1 Yeah, I'd also say, like, for writing code for something,
Speaker 1 there are a lot of things that an LLM could try to figure out on its own, but would require a lot of
Speaker 1 attempts and self-verification that you could write a very simple program to do in like a verifiable and
Speaker 1 much faster way. So, take a request like: do some research on this company and use this type of valuation model to tell me
Speaker 1 what the valuation should be.
Speaker 1 You could have the model try to crank through that and fit those coefficients or whatever in its context, or you could literally just have it write the code to just do it the right way and just know what the actual answer is.
Speaker 1 And so
Speaker 1 yeah, I think part of this is you can just allocate compute a lot more efficiently because you can defer stuff that the model doesn't have comparative advantage to doing to a tool that is like really well suited to doing that thing.
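To make the "defer the computation to a tool" point concrete, here is a minimal sketch of a code-execution tool a model could call instead of cranking through a calculation token by token. The tool name, the subprocess sandbox, and the numpy snippet are illustrative assumptions, not the production tooling behind O3.

```python
# Hypothetical "python" tool: the model writes a short program (e.g., fitting
# valuation-model coefficients) and the harness executes it, returning the output.
# Illustrative sketch only; the demo snippet at the bottom requires numpy.
import subprocess
import sys
import tempfile


def run_python_tool(code: str, timeout_s: int = 30) -> str:
    """Execute model-written code in a subprocess and return stdout or an error string."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return result.stdout if result.returncode == 0 else f"error: {result.stderr}"
    except subprocess.TimeoutExpired:
        return "error: tool call timed out"


# The kind of snippet a model might emit instead of doing the arithmetic in-context:
print(run_python_tool(
    "import numpy as np\n"
    "x = np.array([1.0, 2.0, 3.0]); y = np.array([2.1, 3.9, 6.2])\n"
    "print(np.polyfit(x, y, 1))  # fit a line: slope, intercept\n"
))
```

A few hundred tokens spent writing and checking a snippet like this replace a long, error-prone chain of in-context arithmetic, which is the comparative-advantage point made above.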
Speaker 3 One of the ways I've been using some form of O3 a lot is deep research, right? I think that's basically a research analyst
Speaker 3 AI that you all have built that basically will go out, will look up things on the web, will synthesize information, and will chart things for you. It's pretty amazing in terms of its capability set.
Speaker 3 Did you have to do anything special in terms of,
Speaker 3 you know, any form of specific reinforcement learning for it to be better at that, or other things that you built against it? How did you think about the data that was used for training it? I'm just sort of curious how that product, if at all, is a branch off of
Speaker 3 this, and how you thought about building that specifically as part of this broader effort.
Speaker 1 I think when we think about tool use, browsing is one of the most natural places that, you know, you think of as a starting point.
Speaker 1 And it's not always easy.
Speaker 1 I mean, like the initial kind of browsing that we included in GPT-4 a few years back, like it was hard to make it, you know, work in a way that felt like reliable and like useful.
Speaker 1 But, you know, in the sort of, you know, modern these days,
Speaker 1 last year,
Speaker 1 you know, two years ago is ancient history,
Speaker 1 I think it feels like a natural place to start because it's like so widely applicable to so many types of queries.
Speaker 1 Anything that is, you know, requires up-to-date information, like it should help to browse for. And so
Speaker 1 in terms of a testbed for, hey, like, does, you know, the way we're doing RL, like, does it really work? You know, can we really get the model to learn like
Speaker 1 longer time horizon kind of meaningful extended behaviors?
Speaker 1 Like, it feels like kind of a natural place to start in some ways, in that it also is fairly likely to be useful in a relatively short amount of time. So it's like, yeah, let's try that.
Speaker 1 I mean, you know, in RL, at the end of the day, you're defining an objective. And if you have an idea for who is going to find this most useful, you might want to tailor the objective to who you expect to be using the thing, what you expect they're going to want, you know, what is their tolerance for...
Speaker 1 Do they want to sit through a 30-minute rollout of deep research? When they ask for a report, do they want a page or five pages or a gazillion pages? So, yeah, you definitely want to tailor things to who you think is going to be using it.
Speaker 3 I feel like there's a lot of almost, like, white-collar work, or knowledge work, that you all are really capturing through this sort of tooling going forward.
Speaker 3 And you mentioned software engineering is one potential area.
Speaker 3 Deep research and sort of analytical jobs is another where there's all sorts of really interesting work to be done that's super helpful in terms of augmenting what people are doing.
Speaker 3 Are there two or three other areas that you think are the most near-term interesting applications for this, whether OpenAI is doing it or others should do it aside?
Speaker 3 I'm just sort of curious how you think about the big application areas for this sort of technology.
Speaker 1 I guess my
Speaker 1 very biased one that I'm excited about is coding and also
Speaker 1 research in general, being able to improve upon the velocity that we can do research at OpenAI and others can do research when they're using our tools. I think our models are getting
Speaker 1 a lot better very quickly at being actually useful. And it seems like they're kind of reaching some kind of inflection point where
Speaker 1 they are useful enough to want to reach out to and use, like, multiple times a day, for me at least, which wasn't the case before. They were always a little bit
Speaker 1 behind what I wanted them to be, especially when it comes to navigating and using our internal code base, which is not simple.
Speaker 1 And it's amazing to see our more recent models actually really spending a lot of time trying to understand the questions that we ask them, and coming back with things that have saved me many hours of my own time.
Speaker 3 People say that's the fastest potential bootstrap, right? In terms of each model subsequently helping to make the next model better, faster, cheaper, et cetera.
Speaker 3 And so people often argue that that's almost like an inflection point on the exponent towards superintelligence, which is basically this ability to use
Speaker 3 AI to build the next version of AI.
Speaker 1 Yeah. And there's so many different components of research, too.
Speaker 1 It's not just sitting off in the ivory tower thinking about things, but there's like hardware, there's various components of training and evaluation and stuff like this.
Speaker 1 And each of these can be turned into some kind of task that can be optimized and iterated over. So there's plenty of
Speaker 1 room to squeeze out improvements.
Speaker 2 We talked about browsing the web, writing code, arguably the greatest tool of all, right? Especially if you're trying to figure out how to spend your compute, write more efficient code.
Speaker 2 Generating images, writing text. There are certainly like trajectories of action I think are not in there yet, right? Like reliably using a sequence of business software.
Speaker 1 I'm really excited about the computer use stuff.
Speaker 1 It kind of drives me crazy in some sense that our models are not already just like on my computer all day watching what I'm doing. And well, I know that can be creepy for some people.
Speaker 1 And I think you should be able to opt out of that or have that opted out by default. I hate typing also.
Speaker 1 I wish that I could just kind of like be working on something on my computer. I hit some issue and I'm just like, you know, what am I supposed to do with this? And I can just kind of ask.
Speaker 1 I think there's tons of space for being able to improve on like how we interact with the models. And this goes back to them being able to use tools in a more intuitive way.
Speaker 1 I guess using tools closer to how we use them.
Speaker 1 It's also surprising to me how intuitively our models do use the tools we give them access to.
It's like weirdly human-like, but I guess that's not too surprising given the data they've seen before.
Speaker 1 But yeah.
Speaker 2 I think a lot of things are weirdly human-like. Like, my intuition for, like, well, why is tool use so impactful to test time scaling? Like, why is the combination so much better?
Speaker 1 Take any role.
Speaker 2 You can make a decision when you are trying to make progress against a task as to like, do I get external validation or do I sit and think really hard? Right.
Speaker 2 And usually, like, one or the other is more efficient. And it's not always just sit in a vacuum and think really hard with what you know.
Speaker 1
Yeah, absolutely. You can seek out sort of new inputs.
Like it doesn't have to be this closed system anymore.
Speaker 1 And I do feel like the closed system-ness of the models is still sort of a limitation in some ways. Like you're not, you're not necessarily like turning this.
Speaker 1 I mean, like, I think it'd be great if the model could control my computer for sure. But in some sense, it's
Speaker 1 there's a reason we don't go hog wild and say like, oh yes, here's like the keys to the kingdom. Like have at it.
Speaker 1 There are still asymmetric costs to like the time you can save and the types of errors you can make.
Speaker 1 And so we're trying to like iteratively kind of, you know, deploy these things and like try them out and figure out like
Speaker 1 where are they reliable, you know, and where are they not?
Speaker 1 Because
Speaker 1 yeah, like if you did just let the model control your computer, it could do some cool stuff. Like I have no doubt.
Speaker 1 But you know, do I trust it to like respond to all of the random emails that Brandon sends me? Actually, maybe for that task, it doesn't require that much intelligence, but
Speaker 1 More generally, like, do I trust it to do everything I'm doing? You know, some things, and I'm sure that set of things will be bigger tomorrow than it was yesterday.
Speaker 1 But yeah, I think part of this is we limit the affordances and keep it a little bit in the sandbox, just out of caution,
Speaker 1 so that, you know, you don't send some crazy email to your boss, or delete all your texts, or delete your hard drive or something.
Speaker 2 Is there some sort of like organizing mental model for like the tasks that one can do with
Speaker 2 increasing intelligence, test time scaling, and improved tool use? Because I look at this and I'm like, okay, well, you have complexity of task and you have time scale.
Speaker 2 Then you have like the ability to come up with these RL rewards and environments, right? Then you have like usefulness.
Speaker 2 Maybe you have some, of course, you have some intuition about like diversity and generalization across the different things you can be doing, but
Speaker 2 it seems like a very large space. And scaling RL, like, new-gen RL, it's just not obvious.
Like how, to me, it's not obvious how you do it or how you choose the path.
Speaker 2 Is there some sort of organizing framework that
Speaker 2 you guys have that you can share?
Speaker 1 I mean, I don't know if there's like one organizing framework.
Speaker 1 I think there are a few like factors at least that I think about in like the very, very grand scheme of things is like how much, like in order to solve this task, like how much uncertainty with the environment do I have to like wrestle with?
Speaker 1 Like,
Speaker 1 um, for some things where it's like, this is purely factual, like, who was the first president of the United States?
Speaker 1 Like, there's zero like environment I need to interact with to like reach the answer to this question correctly. I just need to remember the answer and say the answer.
Speaker 1 You know, if I want you to like write some code, you know, that like solves some problem.
Speaker 1 Well, now I have to deal with a little bit of not purely internal model stuff, but also, like, okay, I need to execute the code, and that code execution environment is maybe more complicated than my model can memorize internally.
Speaker 1 So I have to do like a little bit of like writing code and then executing it and making sure it does what I thought it did and then testing it and then giving it to the user.
Speaker 1 And things scale with the amount of that sort of stuff outside the model that you have to deal with, where you can't just recall the answer and give it to the user.
Speaker 1 You have to like test something and you know run an experiment in the world and then wait for the result of that experiment.
Speaker 1 The more you have to do that, the more uncertain the results of those experiments.
Speaker 1 In some sense, that's one of the core attributes of what makes the tasks hard.
Speaker 1 And I think another is like how
Speaker 1 simulatable they are.
Speaker 1 Like stuff that is really bottlenecked by time, like the physical world,
Speaker 1 is also just harder than stuff that we can simulate really well.
Speaker 1 It's not a coincidence that so many people are interested in coding and coding agents and things.
Speaker 1 And that robotics is hard, and it's slower. And I used to work on robotics, and it's frustrating in a lot of ways.
I think both this, how much of the external environment do you have to deal with?
Speaker 1 And then how much do you have to wrestle with the unavoidable slowness of the real world are two dimensions that I sort of think about.
Speaker 3 It's super interesting because if you look at historically some of these models, one of the things that I think has continued to be really impressive is the degree to which they're generalizable.
Speaker 3 And so I think when GitHub Copilot launched, it was on Codex, which was like a specialized code model.
Speaker 3 And then eventually that just got subsumed into these more general purpose models in terms of what a lot of people are actually using for coding-related applications.
Speaker 3 How do you think about that in the context of things like robotics?
Speaker 3 There's like probably a dozen different robotics foundation model companies now.
Speaker 3 Do you think that eventually just merges into the work you're doing in terms of there's just these big general purpose models that can do all sorts of things?
Speaker 3 Or do you think there's a lot of room for these standalone other types of models over time?
Speaker 1 I will say the one thing that's always struck me as kind of funny about us doing RL is that we don't yet do it on the most like canonical RL task of robotics.
Speaker 1 And I personally don't see any reason why we couldn't have these be
Speaker 1 the same model.
Speaker 1 I think
Speaker 1 there are certain challenges with like, I don't know, do you want your
Speaker 1 RL model to be able to like generate an hour-long movie for you natively as opposed to a tool call.
Speaker 1 That's where it's probably trickier, where you have more conflict between having everything in the same set of weights. But
Speaker 1 certainly, the things you see O3 already doing in terms of
Speaker 1 exploring a picture and things like that are kind of like early signs of something like an agent exploring an external environment. So I don't think it sounds too far-fetched to me.
Speaker 1 Yeah, I mean, I think
Speaker 1 the thing that came up earlier of also the intelligence per cost thing,
Speaker 1 the real world is an interesting litmus test because at the end of the day, like there is a, you know, frame rate in the real world you need to live on.
Speaker 1
And it doesn't matter if you get the right answer after you think for two minutes. Like, you know, the ball is coming at you now and you have to catch it.
Gravity's not going to wait for you.
Speaker 1 So that's an extra constraint that we get to at least softly ignore when we're talking about these purely disembodied things.
Speaker 3 It's kind of interesting, though, because really small brains are very good at that. You know, so you look at a frog, you start looking at different organisms and you look at sort of relative compute.
Speaker 1 Yeah.
Speaker 3 And, you know, very simple systems are very good at that.
Speaker 1 Ants, you know,
Speaker 1 like,
Speaker 3 so I think that's kind of a fascinating question in terms of what's the baseline amount of capability that's actually needed for some of these real world tasks that are reasonably responsive in nature.
Speaker 1 It's really tricky with vision, too. Our models have some, I think, maybe famous edge cases where they don't do the right thing.
I think Eric probably knows where I'm going with this.
Speaker 1 I don't know if you've ever asked our models to tell you what time it is on a clock.
Speaker 1 They really like the time 10:10. So, yeah.
Speaker 3 It's my favorite time too.
Speaker 1 So that's usually what I tell people. It's like over 90% or something like that of all clocks on the internet are at 10:10.
Speaker 1 And it's because it looks, I guess, like a happy face, and it looks nice.
Speaker 1 But anyways,
Speaker 1 what I'm getting at is
Speaker 1 our visual system was developed by interacting with the external world and having to be good at navigating things, avoiding predators.
Speaker 1 And
Speaker 1 our models have learned vision in a very different type of way. And
Speaker 1 I think we'll see a lot of really interesting things if we can get them to be kind of closing the loop by reducing their uncertainty by taking actions in the real world, just as opposed to thinking about stuff.
Speaker 2 Hey, Eric, you brought up the idea of how
Speaker 2 what in the environment can be simulated, right, as
Speaker 2 an input as to like how difficult it will be to improve on this.
Speaker 2 As you get to long-running tasks, like let's just take software engineering. Like, there is a lot of interaction that is not just me committing code continually.
Speaker 2 It's like I'm going to talk to other people about the project, in which case you then need to deal with the problem of like, can you reasonably simulate how other people are going to interact with you on the project in an environment?
Speaker 2 That seems really tricky, right?
Speaker 2 I'm not saying that, you know, O3 or whatever set of foundation models now doesn't have the intelligence to respond reasonably, but, like, how do you think about that simulation being true to life, true to the real world,
Speaker 2 as you involve human beings in an environment, in theory?
Speaker 1 My spicy, I guess, take on that is like, I don't know if it's spicy, but
Speaker 1 O3, in some sense, is already kind of simulating what it'd be like for a single person to do something with, like, a browser or something like that. And, I don't know, train two of them together, so that you have two people interacting with each other. And yeah, there's no reason you can't scale this up so that models are trained to be really good at cooperating with each other. I mean, there's a lot of already existing literature on multi-agent RL. And yeah, if you want the model to be good at something like collaborating with a bunch of people, maybe a not-too-bad starting point is making it good at collaborating with other models. Man, someone should do that. Yeah, yeah, we should really start thinking about that, Eric.
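A toy, inference-time-only sketch of the "train two of them together" idea: two model instances passing messages about a shared task, one proposing and one reviewing. The model ID, prompts, and turn structure are assumptions for illustration; actual multi-agent RL training would add a reward over the joint interaction, which this does not show.

```python
# Toy sketch of two model instances collaborating at inference time.
# Assumes the OpenAI Python SDK; model ID and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()
MODEL = "o3-mini"  # assumed model ID


def agent_turn(system: str, messages: list[dict]) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system}] + messages,
    )
    return resp.choices[0].message.content


def collaborate(task: str, rounds: int = 3) -> list[dict]:
    """Alternate turns between a 'proposer' agent and a 'reviewer' agent."""
    transcript = [{"role": "user", "content": task}]
    for _ in range(rounds):
        proposal = agent_turn("You propose a concrete plan or patch.", transcript)
        transcript.append({"role": "assistant", "content": proposal})
        critique = agent_turn(
            "You critique the latest proposal and suggest fixes.",
            [{"role": "user", "content": proposal}],
        )
        transcript.append({"role": "user", "content": f"Reviewer feedback: {critique}"})
    return transcript
```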
Speaker 2 I think it's a little bit spicy because yes, the work is going on. It is interesting to hear you think that is a useful direction.
Speaker 2 I think lots of people would still like to believe, not me, like my comment was extra good on this pull request or whatever it is, right?
Speaker 1 Okay, I could sympathize with that. Sometimes I see our models training and I'm like, oh, what are you doing? You know, like
Speaker 1 you're taking forever to figure this out. And I actually think it would be really fun if you could actually train models in an interactive way.
Speaker 1 You know, forget about just like at test time, but I think it'd be really neat to train them to do something like that, be able to like intervene when it makes sense.
Speaker 1 And yeah, just more me being able to tell the model to cut it out and like in the middle of its kind of chain of thought and
Speaker 1 it being able to learn from that on the fly, I think would be great. Yeah, I do think this is like the intersection of these two things where it's both
Speaker 1 like
Speaker 1 a point of contact with the external environment that is like, can be very high uncertainty.
Speaker 1 Like, humans can be very unpredictable in some cases, and it's sort of limited by the tick of time in the real world. If you want to deal with actual humans, humans have a fixed, you know, clock cycle in their head.
Speaker 1 So yeah, I mean, if you want to do this in the literal sense, it's hard. And so scaling it up and making it work well, it's not obvious how to do this. Yeah, we are a super expensive tool call. You know, if you're a model, you can either ask me, you know, meatbag over here, to help with something, and I'll try to think really slowly.
Speaker 1 In the meantime, it could have, like, used the browser and read 100 papers on the topic or something like that. So it's, yeah, how do you model the trade-off there? But the human part's important.
Speaker 1 I mean, I think in any research project, like, my interactions with Brandon are the hardest part of the project. You know, like, writing the code is, that's the easy part.
Speaker 2 Well, and there's some analog from self-driving. Elad's going to say, you know, hanging out with me every week is the hardest part of doing this podcast, but it's my favorite part.
Speaker 1
Look at how healthy their relationship is, Eric. We need to learn from this.
No, we're honest. It's okay.
We got to work through it.
Speaker 2 In self-driving, one of the like classically hard things to do was like predict the human and the child and the dog, like agents in the environment versus
Speaker 2 like what the environment was.
Speaker 2 And so
Speaker 2 I think there's like some analogy to be drawn there.
Speaker 2 Going back to just like how you progress the O series of models from here,
Speaker 2 is it a reasonable assessment that some people have
Speaker 2 that the capabilities of the models are likely to advance in a spikier way? Because you're relying to some degree more on the creativity of research teams in making these environments and deciding
Speaker 2 how to create these evals versus like we're scaling up on existing data set in pre-training. Is that a fair contrast?
Speaker 1 Spikier? Or, like, what's the plot here? What's the x-axis and the y?
Speaker 1 Domain is the x-axis and y is capability?
Speaker 2 Yes, because you're like choosing what domains you are really creating this RL loop in.
Speaker 1 I mean, I think this is a very reasonable hypothesis to
Speaker 1 hold. I think there is some like counter evidence that I think should you know be factored into people's intuitions.
Speaker 1 Like, you know, Sam tweeted an example of some creative writing from one of our models.
Speaker 1 I'm not an expert, and I'm not going to say this is, you know, publishable or groundbreaking, but I think it probably updated some people's intuitions on what, you know, you can train a model to do really well.
Speaker 1 And so I think there are some structural reasons why you'll have some spikiness, just because, like, as an organization, you have to decide, like, hey, we're going to prioritize, you know, X, Y, Z stuff.
Speaker 1 And like, as the models get better, the surface area of stuff you could do with them grows faster than, you know, you can potentially say, hey, this is the niche, you know, we're going to carve out.
Speaker 1 We're going to try to do this really well. So, like, there, I think there's some reason for spikiness, but I think some people will
Speaker 1 probably go too far with this in saying, like, oh, yes, these models will only be really good at math and code, and everything else, you can't get better at.
Speaker 1 And I think that is probably not the right intuition to have. Yeah.
Speaker 1 And I think probably all like major AI labs right now have some partitioning between let's just define a bunch of data distributions we want our models to be good at and then just like throw data at them.
Speaker 1 And then another set of people at these same companies are probably thinking about how you can kind of lift all boats at once with some, like, algorithmic change. And
Speaker 1 I think, yeah, we definitely have
Speaker 1 both
Speaker 1 those types of efforts at OpenAI. And
Speaker 1 I think, especially on the data side,
Speaker 1 there are going to naturally be things that we have a lot more data of than others.
Speaker 1 But ideally, yeah, we have plenty of efforts that will not be so reliant on the exact subset of data we did RL on, and it'll generalize better.
Speaker 2 I get pitched every week, and I bet Elad does too,
Speaker 2 by a company that wants to generate data for the labs in some way.
Speaker 2 Or it's, you know, access to human experts or whatever it is, but, like, you know, there are infinite variations of this.
Speaker 2 If you could wave a magic wand and have like a perfect set of data, like what would it be
Speaker 2 that you know would advance model quality today?
Speaker 1 This is a dodge, but, like, uncontaminated evals are
Speaker 1 always super valuable. And that's data.
Speaker 1 And I mean, yeah, like you want, you know,
Speaker 1 good data to train on. And that's, of course, valuable for making the model better.
Speaker 1 But I think it is often neglected how important it also is to have high-quality data, which is, like, a different definition of high quality when it comes to an eval.
Speaker 1 But yeah, the eval side is like often just as important because you don't
Speaker 1 you need to measure stuff.
Speaker 1 And like, as you know from, you know, trying to hire people or whatever, like evaluating the capabilities of like a general, like capable agent is really hard to do in like a rigorous way.
Speaker 1 So yeah, I think evals are a little underappreciated.
Speaker 1 That is true. Evals are, I mean, especially with some of our recent models where we've kind of run out of reliable evals to track because they kind of just solved a few of those.
Speaker 1 But on the on the training side, I think it's always valuable to have
Speaker 1 training data that is kind of at the next frontier of model capabilities.
Speaker 1 I mean, I think a lot of the things that O3 and O4-mini can already do, those types of tasks, like basic tool use, we probably aren't
Speaker 1 super in need of new data like that.
Speaker 1 But I think it'd be hard to say no to a data set that's like a bunch of like multi-turn user interactions and some code base that's like a million lines of code that
Speaker 1 is like a two-week research task of like adding some new feature to it that requires like multiple pull requests.
Speaker 1 I mean, something like that would be super high-quality and have a ton of supervision signals for us to learn from. Yeah, I think that would be awesome to have.
Speaker 1 You know, I definitely wouldn't turn that down.
Speaker 2 You play with the models all the time, I assume, a lot more than average humans do. What do you do with reasoning models that you think other people don't do enough of yet?
Speaker 1 Send the same prompt many, many, many times to the model and get an intuition for the distribution of responses you can get.
Speaker 1 I have seen, it drives me absolutely mad when people do these comparisons on Twitter or wherever, and they're like, oh, I put the same prompt into blah, blah, and blah, blah, and this one was so much better.
Speaker 1 It's like, dude, you like,
Speaker 1 like, I mean, it was something we talked about a bit.
Speaker 1 When we were launching, it's like, yeah, O3 can do really cool things, like when it chains together a lot of tool calls.
Speaker 1 And then like sometimes for the same prompt, it won't have that, you know, moment of magic, or it will, you know, just take a little, it'll do a little less work for you. And so
Speaker 1 yeah, though, like the peak performance is really impressive, but there is a distribution of behavior.
Speaker 1 And I think people often don't appreciate that there is this distribution of outcomes when you put the same prompt in and getting intuition about that is useful.
Speaker 2 So as an end user, I do this and I also have a feature request for your friends in the product org. I'll ask, you know, Oliver or something, but it's just, I
Speaker 2 want a button where I like, assuming my rate limits or whatever support it, I want to run the prompt automatically like 100 times every time, even if it's really expensive.
Speaker 2 And then I want the model to rank them and just give me the top one and two.
Speaker 1 Interesting.
Speaker 2 And just let it be expensive.
Speaker 3 Or a synthesis across it, right?
Speaker 3 You could also synthesize the output and just see if there's some, although maybe you're then reverting to the mean in some sense relative to that distribution or something.
Speaker 3 But it seems kind of interesting, yeah.
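A rough client-side sketch of the button Sarah is asking for: run the same prompt N times, then have the model rank the candidates and return the best one. It assumes the OpenAI Python SDK; the model ID and the judging prompt are illustrative, and, as noted in the conversation, this gets expensive quickly.

```python
# Client-side best-of-N: sample one prompt many times, then have the model pick the best answer.
# Assumes the OpenAI Python SDK; model ID and prompts are illustrative.
from openai import OpenAI

client = OpenAI()


def best_of_n(prompt: str, n: int = 10, model: str = "o3-mini") -> str:
    # Sample n independent completions of the same prompt.
    candidates = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        candidates.append(resp.choices[0].message.content)

    # Ask the model to judge which candidate is best.
    numbered = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
    judge = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": (
                f"Task: {prompt}\n\nCandidate answers:\n{numbered}\n\n"
                "Reply with only the index of the single best candidate."
            ),
        }],
    )
    try:
        best = int(judge.choices[0].message.content.strip())
    except ValueError:
        best = 0  # fall back to the first sample if the judge reply isn't a clean index
    return candidates[best % n]
```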
Speaker 2 Maybe there's a good infrastructure reason you guys aren't giving us that button.
Speaker 1 Well, it's expensive, but there are.
Speaker 1
I think it's a great suggestion, yeah. Yeah, I think it's a great suggestion.
How much would you pay for that?
Speaker 2 A lot, but I'm a price-insensitive user of AI.
Speaker 1
Yeah, I see. Perfect.
Maybe there are many.
Speaker 3 You should have a Sarah tier as one of your tiers.
Speaker 1
Exactly. Exactly.
Happy.
Speaker 1 I really like sending prompts to our models that are kind of at the edge of what I expect them to be able to do, just kind of for funsies.
Speaker 1 Like a lot of times before I'm about to do some like programming tasks, I will just kind of ask the model to go see if it can figure it out. A lot of times with like no hope of it being able to do it.
Speaker 1 And indeed, sometimes it comes back and I just am pretty, like, I'm like a disappointed father. But other times it does it and it's amazing and it saves me like tons of time.
Speaker 1 So I kind of use our models almost like a background queue of work, where I'll just shoot off tasks to them. And sometimes those will stick and sometimes they won't.
Speaker 1 But in either case, like it's always a good outcome if something good happens.
Speaker 3
That's cool. Yeah.
I do that just to feel better about myself when it doesn't work.
Speaker 1 Yeah, I get depressed. Yeah, I'm still providing value.
Speaker 3 When it works, I feel even worse about myself. So it's very hit or miss.
Speaker 1 Yeah.
Speaker 3 There are some differences in terms of how some of these models are trained or RL'd or effectively produced.
Speaker 3 What are some of the differences in terms of process in terms of how you approach the O-series of models versus other things that have been done at OpenAI in the past?
Speaker 1 The tools stuff was quite the experience to get working in a large-scale setting.
Speaker 1 So you can imagine if you're doing like async RL with a bunch of tools that those are, you're just adding more and more failure points to your infrastructure.
Speaker 1 And what you do when things inevitably fail is a pretty interesting engineering problem, but also, like, an RL, ML problem too. Because
Speaker 1 if you're, I don't know, if your Python tool like, you know, it goes down in the middle of the run and you're like, what do you do? Do you stop the run?
Speaker 1
Probably not. That's probably not like the most sane thing to do with that much compute.
So the question is, how do you handle that gracefully and not hurt the capabilities of the model
Speaker 1 as an unintended consequence? So there's been a lot of learnings like that of how you deal with huge infrastructure that's asynchronous for RL. RL is hard.
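A minimal sketch of the failure-handling trade-off being described: when a tool backend goes down mid-run, retry a bounded number of times and otherwise drop the affected rollouts from the update, rather than stopping the run or training on fabricated observations. This is an illustration of the idea under those assumptions, not OpenAI's actual training infrastructure.

```python
# Sketch: handling tool-backend failures gracefully during async RL rollouts.
# Invalid rollouts are excluded from the policy update instead of killing the run.
# Purely illustrative; `policy` and `tool` are hypothetical callables.
from dataclasses import dataclass, field


class ToolUnavailable(Exception):
    pass


@dataclass
class Rollout:
    prompt: str
    steps: list = field(default_factory=list)
    valid: bool = True  # invalid rollouts are filtered out before the RL update


def call_tool_with_retry(tool, args, max_retries: int = 3):
    for _ in range(max_retries):
        try:
            return tool(args)
        except ToolUnavailable:
            continue
    raise ToolUnavailable("tool still down after retries")


def run_rollout(policy, prompt: str, tool) -> Rollout:
    rollout = Rollout(prompt)
    action = policy(prompt)
    while action.get("tool_call"):
        try:
            observation = call_tool_with_retry(tool, action["tool_call"])
        except ToolUnavailable:
            # Don't fabricate an observation: drop this rollout from training instead.
            rollout.valid = False
            break
        rollout.steps.append((action, observation))
        action = policy(observation)
    return rollout


def filter_for_update(rollouts: list[Rollout]) -> list[Rollout]:
    """Only valid rollouts contribute to the policy gradient."""
    return [r for r in rollouts if r.valid]
```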
Speaker 2 This has been great, guys. Thank you.
Speaker 3 Yeah, thanks so much for coming.
Speaker 1
Yeah, thanks. It was fun.
Thanks for having us.
Speaker 2
Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces.
Follow the show on Apple Podcasts, Spotify, or wherever you listen.
Speaker 2 That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.