Fully autonomous robots are much closer than you think – Sergey Levine
Sergey Levine, one of the world’s top robotics researchers and co-founder of Physical Intelligence, thinks we’re on the cusp of a “self-improvement flywheel” for general-purpose robots. His median estimate for when robots will be able to run households entirely autonomously? 2030.
If Sergey’s right, the world 5 years from now will be an insanely different place than it is today. This conversation focuses on understanding how we get there: we dive into foundation models for robotics, and how we scale both the data and the hardware necessary to enable a full-blown robotics explosion.
Watch on YouTube; listen on Apple Podcasts or Spotify.
Sponsors
* Labelbox provides high-quality robotics training data across a wide range of platforms and tasks. From simple object handling to complex workflows, Labelbox can get you the data you need to scale your robotics research. Learn more at labelbox.com/dwarkesh
* Hudson River Trading uses cutting-edge ML and terabytes of historical market data to predict future prices. I got to try my hand at this fascinating prediction problem with help from one of HRT’s senior researchers. If you’re curious about how it all works, go to hudson-trading.com/dwarkesh
* Gemini 2.5 Flash Image (aka nano banana) isn’t just for generating fun images — it’s also a powerful tool for restoring old photos and digitizing documents. Test it yourself in the Gemini App or in Google’s AI Studio: ai.studio/banana
Timestamps
(00:00:00) – Timeline to widely deployed autonomous robots
(00:22:12) – Why robotics will scale faster than self-driving cars
(00:32:15) – How vision-language-action models work
(00:50:26) – Improvements needed for brainlike efficiency
(01:02:48) – Learning from simulation
(01:14:08) – How much will robots speed up AI buildouts?
(01:22:54) – If hardware’s the bottleneck, does China win by default?
Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
Listen and follow along
Transcript
Today I'm chatting with Sergey Levine, who is a co-founder of Physical Intelligence, which is a robotics foundation model company. He's also a professor at UC Berkeley, and just generally one of the world's leading researchers in robotics, RL, and AI.
Sergey, thank you for coming on the podcast.
Thank you, and thank you for the kind introduction.
Let's talk about robotics.
So before I pepper you with questions, I'm wondering if you can give the audience a summary of where Physical Intelligence is at right now.
You guys started a year ago.
And what does the progress look like?
What are you guys working on?
Yeah, so Physical Intelligence aims to build robotic foundation models.
And that basically means general purpose models that could, in principle, control any robot to perform any task.
We care about this because we see this as a very fundamental aspect of the AI problem.
The robot is essentially encompassing all AI technology.
So if you can get a robot that's truly general, then you can do, you know, hopefully a large chunk of what people can do.
And where we're at right now is, I think we've kind of gotten to the point where we've built out a lot of the basics.
And I think those basics actually are pretty cool.
They work pretty well.
We can get a robot that will fold laundry and that will go into a new home and like try to clean up the kitchen.
But in my mind, what we're doing at Physical Intelligence right now is really the very, very early beginning.
It's just like putting in place the basic building blocks on top of which we can then tackle all these really tough problems.
And what's a year-by-year vision?
So one year in, you know, I got a chance to watch some of the robots.
And they can do pretty dexterous tasks, like folding a box using grippers.
And it's like, I don't know, it's pretty hard to fold a box, even with my hands.
If you've got to go year by year until we get to the full robotics explosion, what is happening every single year?
What is the thing that needs to be unlocked, et cetera?
So there are a few things that we need to get right.
I mean, dexterity obviously is one of them.
And in the beginning, we really wanted to make sure that we
understand
whether the methods that we're developing have the ability to tackle the kind of intricate tasks that people can do.
As you mentioned, folding a box,
folding different articles of laundry, cleaning up a table, making a coffee, that sort of thing.
And that's good.
That works.
I think that the results we've been able to show are pretty cool.
But again, the end goal of this is not to fold a nice t-shirt.
The end goal is to just confirm our initial hypothesis that the basics are kind of solid.
But from there, there are a number of really major challenges.
And I think that
sometimes when
results get abstracted to the level of a three-minute video, someone can look at this video.
It's like, oh, that's cool.
Like, that's what they're doing.
But it's not.
Like, it's a very simple and basic version of what I think is to come.
Like, what you really want from a robot is not to tell it, like, hey, please fold my t-shirt.
What you want from a robot is to tell it, like, hey, robot, like, you're now doing all sorts of home tasks for me.
I like to have dinner made at 6 p.m.
I wake up and go to work at 7 a.m.
I'd like, you know, I'd like to do my laundry on Saturday, so make sure that that's ready.
This and this and this.
And by the way, check in with me every Monday to see
what I want you to pick up when you do the shopping.
That's the prompt.
And then the robot should go and do this for
six months, a year.
That's the duration of the task.
So it's
ultimately, if this stuff is successful, it should be a lot bigger.
And it should have that ability to learn continuously.
It should have the understanding of the physical world, the common sense, the ability to go in and pull in more information if it needs it.
Like if I ask it, like, hey, tonight, like, you know,
can you make me this type of salad?
It's like, okay, you should, like, figure out what that entails, like, look it up, go and buy the ingredients.
So, there's a lot that goes into this.
It requires common sense.
It requires understanding that there are certain edge cases that you need to handle intelligently, cases where you need to think harder.
It requires the ability to improve continuously.
It requires understanding safety, being reliable at the right time, being able to fix your mistakes when you do make those mistakes.
So, there's a lot more that goes into this.
But the principles there are you need to leverage prior knowledge and you need to have the right representations.
So this grand vision, what year, if you had to give an estimate, median estimate.
Yeah.
Or 25 percentile, 50, 75.
I think it's something where it's not going to be a case where we develop everything in the laboratory and then it's done.
And then, you know, come 2030-something, you get a robot in a box.
I think it'll be the same as what we've seen with AI assistants: once we reach some basic level of competence where the robot is delivering something useful, it'll go out there in the world.
The cool thing is that once it's out there in the world, they can collect experience and leverage that experience to get better.
So, to me,
what I tend to think about a lot in terms of timelines is not the date when it will be done, but the date when the flywheel starts, basically.
Okay, so when does the flywheel start?
I think that could be very soon.
And I think there's some decisions to be made.
The trade-off there is the more narrow you scope the thing, the earlier you can get it out into the real world.
But soon as in, like, this is something we're already exploring.
We're already trying to figure out like what are like the real things this thing can do that could allow us to start spinning the flywheel.
But I think in terms of like stuff that you would actually care about, that you would want to see.
So I don't know, but I think that single-digit years is very realistic.
I'm really hoping it'll be more like one or two before something is like actually out there, but it's hard to say.
And something being out there means what?
Like what is out there?
It means that there is a robot that does a thing that you actually care about, that you want done, and it does so competently enough to
actually do it for real, for real people that want it done.
We already have LLMs which are broadly deployed, and that hasn't resulted in some sort of flywheel.
At least not some obvious flywheel for the model companies where now Claude is learning how to do every single job in the economy, or GPT is learning how to do every single job in the economy.
So
why doesn't that flywheel work for LLMs?
Well,
I think it's actually very close to working.
And
I am like 100% certain that many organizations are working on exactly this.
In fact, arguably there is already a flywheel in the sense that
not an automated flywheel, but a human-in-the-loop flywheel, where everybody who's deploying an LLM is, of course, going to look at what it's doing, and they're going to use that to then modify its behavior.
It's complex because
it comes back to this question of representations and figuring out the right way to derive supervision signals and ground those supervision signals in the behavior of the system so that it actually improves on what you want.
And I don't think that's like a profoundly impossible problem.
It's just something where the details get like pretty gnarly and challenges with algorithms and stability become pretty complex.
So it's just something that's taken a while for the community collectively to get their hands around.
Do you think it'll be easier for robotics? Or is it just that the state of these techniques, to label data that you collect out in the world and use it as a reward, will improve across the board, so the whole wave will rise and robotics will rise with it? Or is there some reason to think robotics will benefit more from this?
Yeah, I don't think there's a profound reason why robotics is that different, but there are a few small differences that I think make things a little bit more manageable. Especially if you have a robot that's doing something in cooperation with people, whether it's a person that's supervising it or directing it, there are very natural sources of supervision, and there's a big incentive for the person to provide the assistance that will make things succeed. There are a lot of dynamics where you can make mistakes and recover from those mistakes and then reflect back on what happened and avoid that mistake in the future.
And I think that when you're doing physical things in the real world, that kind of stuff just happens more often than it does if you're like an AI assistant answering a question.
Like if you answer a question and you just answered it wrong, it's like, well, it's not like you can just go back and like...
tweak a few things. The person you told the answer to might not even know that it's wrong, right? Whereas if you're folding the t-shirt and you messed up a little bit, it's pretty obvious. You can reflect on that, figure out what happened, and do it better next time.
Yeah.
So okay, in one year, we have robots which are like doing some useful things.
Maybe if you have some like
relatively simple, like loopy process,
they can do it for you.
Just like you got to keep folding like thousands of boxes or something.
But then there's some flywheel, dot, dot, dot.
There's some machine which will like just run my house for me as well as a human housekeeper would.
What is the gap between this thing, which will be deployed in a year and starts the flywheel, and this thing, which is like a fully autonomous housekeeper?
Well, I think it's actually not that different than what we've seen with LLMs in some ways, that it's a matter of scope.
If you think about coding assistants, right?
Like initially,
the best tools for coding, they could do like a little bit of completion.
You give them a function signature, and they'll try their best to type out the whole function, and they'll maybe get half of it right.
And as that stuff progresses, then you're willing to give these things a lot more agency, so that with the very best coding assistants now, if you're doing something relatively formulaic, maybe it can put together
most of a PR for you for something fairly accessible.
So I think it'll be the same thing.
That we'll see an increase in the scope that we're giving, that we're willing to give to the robots as they get better and better.
Where initially the scope might be like, there is a particular thing you do, like
you're making the coffee or something.
Whereas as they get more capable, as their ability to have common sense and a broader repertoire of tasks increases, then we'll give them greater scope.
Now you're running the whole coffee shop.
I get that there's a spectrum and I get that there won't be a specific moment that feels like we've achieved it.
But you've got to give a year in which
your median estimate of when that happens.
I mean, my sense there too is that this is probably a single-digit thing rather than a double-digit thing.
But the reason it's so hard to really pin down is because, as with all research, it does depend on figuring out a few question marks.
And I think my answer in terms of the nature of those question marks is I don't think these are things that require profoundly, deeply different ideas, but it does require the right synthesis of the kinds of things that we already know.
And sometimes synthesis, to be clear, is just as difficult as coming up with profoundly new stuff, right?
So I think it's intellectually a very deep and profound problem, and figuring that out is going to be very exciting.
But
I think we kind of like know like roughly the puzzle pieces and it's something that we need to work on.
And I think if we work on it and we're a bit lucky and everything kind of goes as planned, I think single-digit is realistic.
I mean I'm just going to do binary search until I get a year.
Okay, so it's less than 10 years, so more than five years?
Your median estimate.
I know it's like a different.
I think five is a good median.
Okay.
Five years.
So if it can fully autonomously run a house, then I think it can fully autonomously do most blue-collar work.
So your estimate is in five years, it should be able to do most like blue-collar work in the economy.
So I think there's a nuance here and the nuance is it becomes more obvious if we consider the analogy to the coding assistants, right?
It's not like
the nature of coding assistants today is that there's a switch that flips and suddenly instead of writing software,
all software engineers get fired and everyone's using LLMs for everything.
And that actually makes a lot of sense that the biggest gain in productivity comes from experts, which is software engineers,
whose productivity is now augmented by these really powerful tools.
I mean, separate from the question of whether people will get fired or not,
a different question is just like, what will the economic impact be in five years?
The reason I'm curious about this is with LLMs,
the relationship between the revenues for these models and their seeming capability has been sort of mysterious, in the sense that you have something which feels like AGI.
You can have a conversation with it; it really, you know, passes the Turing test.
It really feels like it can do all this knowledge work.
It's obviously doing a bunch of coding, et cetera.
But then the revenues for these AI companies are on like cumulatively on the order of like $20, $30 billion
per year.
And that's so much less than all knowledge work, which is $30, $40 trillion.
So in five years, are we in a similar situation that LLMs are in now?
Or is it more like we have robots deployed everywhere and they're actually doing a whole bunch of real work, et cetera?
It's a very subtle question.
I think what it probably will come down to is this question of scope, right?
Like the reason that LLMs aren't doing all software engineering is because they're good within a certain scope, but there's limits to that.
And those limits are increasing, to be clear, every year.
And I think that there's no reason that we wouldn't see the same kind of thing with robots.
The scope will have to start out small because there will be certain things that
these systems can do very well and certain other things where more human oversight is really important.
And the scope will grow, and what that will translate into is increased productivity.
And some of that productivity will come from
the robots themselves being valuable, and some of it will come from the people who are using the robots now being more productive.
Just like wearing gloves increases productivity, or like, I don't know.
But then you want to understand: is it something which increases productivity a hundredfold, versus, you know, wearing glasses or something which gives a small increase? So robots already increase productivity for workers, right? Where LLMs are right now, in terms of the share of knowledge work they can do, is, I guess, probably about one one-thousandth of the knowledge work that happens in the economy, at least in terms of revenue.
Are you saying that a similar fraction will be possible for robots, but for physical work, in five years?
That's a very hard question to answer.
I think
I'm probably not prepared to tell you what percentage of all labor work can be done by robots, because I don't think right now off the cuff I have a sufficient understanding of what's involved in
that big of a cross-section of all physical labor.
I think what I can tell you is this, that I think it's much easier to get effective systems rolled out gradually in a human-in-the-loop setup.
And again, I think this is exactly what we've seen with coding systems.
And I think we'll see the same thing with automation, where
basically
robot plus human is much better than just human or just robot.
And that just makes total sense.
It also makes it much easier to get all the technology bootstrap because when it's robot plus human, now there's a lot more potential for the robot to actually learn on the job, acquire new skills.
It's just like, you know, it.
Because the human can label what's happening.
And also because the human can help.
The human can give hints.
You know, let me tell you this story.
Like,
when we were working on the π0.5 project, this was the paper that we released last April.
We initially controlled our robots with teleoperation in a variety of different settings.
And then at some point, we actually realized that we can actually make significant headway once the model was good enough by supervising it not just with low-level actions, but actually literally instructing it through language.
Now, you need a certain level of competence before you can do that, but once you have that level of competence, just standing there and telling the robot, okay, now pick up the cup, put the cup in the sink,
put the dish in the sink, just with words, already actually gives the robot information that it can use to get better.
Now, imagine what this implies for the human plus robot dynamic.
Like now,
basically learning, for these systems, is not just learning from raw actions. It's also learning from words, eventually learning from observing what people do, from the kind of natural feedback that you receive when you're doing a job together with somebody else.
And
this is also the kind of stuff where that prior knowledge that comes from these big models is tremendously valuable because that lets you understand that interaction dynamic.
So I think that there's a lot of potential for these kind of human plus robot
deployments to make the model better.
Interesting.
So I got to go to Labelbox and see the robotic setup and try operating some of the robots myself.
So the thing is, like these triggers, be very mindful of pressing them and don't do some like very fast movements.
Yeah.
Keep it like out of here.
Kind of.
So do I need to keep holding it?
Go ahead.
Sorry, okay.
That's it. And don't move it very fast, because it can actually get hurt.
Yeah, yeah, okay.
Okay, so operating ended up being a bit harder than I anticipated.
But I did get to see the Labelbox team rip through a bunch of tasks.
I also got to see the output data that labs actually use to train their robots, and asked Manu, Labelbox's CEO, about how all this is packaged together.
So what you're looking at is actually the final output that is then delivered to the labs, which then they use to train the models.
And so you can see on the left the visualization of the movements of the robot, including its 3D model and so forth.
And on the right, you see all the camera streams synchronized with the configuration.
LabelBox can get you millions of episodes of robotics data for every single robotics platform and subtask that you want to train on.
And if you reach out through labelbox.com/dwarkesh, Manu will be very happy with me.
In terms of robotics progress, why won't it be like self-driving cars where we, you know, it's been more than 10 years since Google launched its,
wasn't it 2009 that they launched a self-driving car initiative?
And then I remember, when I was a teenager, watching demos where it would drive to a Taco Bell
and drive back.
And only now do we have them actually deployed.
And even then, you know, they may make mistakes, et cetera.
And so maybe it'll be many more years before most of the cars are self-driving.
So why won't robotics, you know, you're saying five years to this like quite robust thing, but actually it'll just feel like 20 years of just like,
once we get the cool demo in five years, then it'll be another 10 years before like we have the Waymo and the Tesla FSD working.
Yeah, that's a really good question.
So one of the big things that is different now than it was in 2009 actually has to do with
the technology for machine learning systems that understand the world around them.
Principally, for autonomous driving, this is perception.
For robots, it can mean a few other things as well.
And perception certainly was not in a good place in 2009.
The trouble with perception is that it's one of those things where you can nail a really good demo with a somewhat engineered system, but hit a brick wall when you try to generalize it.
Now, at this point in 2025, we have much better technology for generalizable and robust perception systems, and more generally, generalizable and robust systems
for understanding the world around us.
Like when you say that the system is scalable, in machine learning, scalable really means generalizable.
So that gives us a much better starting point today.
So that's not an argument about robotics being easier than autonomous driving.
It's just an argument for 2025 being a better year than 2009.
But there's also other things about robotics that are a bit different than driving.
Like in some ways, robotic manipulation is a much, much harder problem.
But in other ways,
it's a problem space where it's easier to get rolling, to start that flywheel with a more limited scope.
So to give you an example, if you're learning how to drive, you would probably be pretty crazy to learn how to drive on your own without somebody helping you.
Like you would not trust
your teenage child to learn to drive just on their own, just drop them in the car and say, like, go for it.
And that's like
a 16-year-old who's had a significant amount of time to learn about the world.
You would never even dream of putting a five-year-old in a car and telling them to get started.
But if you want somebody to like clean the dishes, like dishes can break too, but you would probably be okay with a child trying to do the dishes without somebody constantly like, you know, sitting next to them
with a brake, so to speak.
So
for a lot of tasks that we want to do with robotic manipulation, there's potential to make mistakes and correct those mistakes.
And when you make a mistake and correct it, well, first you've achieved the task because you've corrected, but you've also gained knowledge that allows you to avoid that mistake in the future.
With driving, because of the dynamics of how it's set up, it's very hard to make a mistake, correct it, and then learn from it because the mistakes themselves have significant ramifications.
Now, not all manipulation tasks are like that.
There are truly some like very safety-critical stuff.
And this is where the next thing comes in, which is common sense.
Common sense, meaning the ability to make inferences about what might happen that are reasonable guesses, but that do not require you to experience that mistake and learn from it in advance.
That's tremendously important.
And that's something that we basically had no idea how to do about five years ago.
But now
we can actually use LLMs and VLMs, ask them questions, and they will make reasonable guesses.
Like they will not give you expert behavior, but you can say like, hey, there's a sign that says slippery floor.
Like what's going to happen when I walk over that?
It's kind of pretty obvious, right?
And no autonomous car in 2009 would have been able to answer that question.
So common sense plus the ability to make mistakes and correct those mistakes, like that's sounding like
an awful lot like what a person does when they're trying to learn something.
All of that doesn't make robotic manipulation easy necessarily, but it allows us to get started with a smaller scope and then grow from there.
So for years,
using, I mean, not since 2009, but we've had lots of video data, language data, and transformers for five, seven, eight years.
And lots of companies have tried to build transformer-based robots with lots of training data, including Google, Meta, et cetera.
And what is the reason that they've been hitting roadblocks?
What has changed now?
Yeah, that's a really good question.
So I'll start out with
maybe a slight modification to your comment is I think they've made a lot of progress.
And in some ways, a lot of the work that we're doing now at Physical Intelligence is built on the backs of lots of other great work that was done, for example, at Google.
Like many of us were actually at Google before.
We were involved in some of that work.
Some of it is work that we're drawing on that others did.
So there's definitely been a lot of progress there.
But
to make robotic foundation models really work, it's not just a laboratory science kind of experiment.
It also requires kind of industrial scale
building effort.
It's more like the Apollo program than it is like a science experiment.
And
the excellent research that was done in the past in industrial research labs, and, you know, I was involved in much of that,
was very much framed as a fundamental research effort.
And that's good.
Like, the fundamental research is really important, but it's not enough by itself.
You need the fundamental research, and you also need the impetus to make it real.
And make it real means like actually put the robots out there, get data that is representative of the kind of tasks that they want to do in the real world, get that data at scale, build out the systems, get all that stuff right.
And that requires a degree of focus,
a singular focus on really nailing the robotic foundation model for its own sake, not just as a way to do more science, not just as a way to publish a paper, and not just as a way to kind of like
have
a research lab.
What is preventing you now from continuing scaling that data even more?
If data is a big bottleneck, why can't you just increase the size of your office 100x, have 100x more operators who are operating these robots and collecting more data.
Yeah, why not ramp it up immediately 100x more?
Yeah, that's a really good question.
So the challenge here is in understanding which axes of scale contribute to which axes of capability.
So if we want to expand capability horizontally, meaning like the robot knows how to do 10 things now and I'd like it to do 100 things later,
you know,
that can be addressed by just directly horizontally scaling what we already have.
But
we want to get robots to a level of capability where they can do
practically useful things in the real world, and that requires expanding along other axes too.
It requires, for example, getting to very high robustness.
It requires getting them to perform tasks very efficiently, quickly.
It requires them to recognize edge cases and respond intelligently.
And those things, I think, can also be addressed with scaling, but we have to identify the right axes for that, which means figuring out what kind of data to collect, what settings to collect it in, what kind of methods consume that data, how those methods work.
So, answering those questions more thoroughly will give us greater clarity on the axes,
on those dependent variables, on the things that we need to scale.
And we don't fully know right now what that will look like.
I think we'll figure it out pretty soon.
It's something we'll work on actively.
But we want to really get that right so that when we do scale it up, it'll directly translate into capabilities that are very relevant to practical use.
Just to give an order of magnitude,
how does the amount of data you have collected compare to internet scale pre-training data?
And I know it's hard to do like a token-by-token count because, yeah,
how does video information compare to internet information, et cetera?
But like
using your reasonable estimates, what fraction of.
That's right.
It's very hard to do because robotic experience consists of time steps that are very correlated with each other.
The raw byte representation is enormous, but probably the information density is comparatively low.
Maybe a better comparison is to the data sets that are used for multimodal training.
And there,
I believe last time we did that count, it was like between one and two orders of magnitude smaller.
The vision you have of robotics will not be possible until you collect like what, 100x, 1000x more data?
Well, that's the thing, that we don't know that.
It's certainly very reasonable to infer that like you know, robotics is a tough problem
and probably it requires
as much experience as the language stuff.
But because we don't know the answer to that, to me, a much more useful way to think about it is not
how much data do we need to get before we're fully done, but how much data do we need to get before we can get started?
Meaning, before we can get
a data flywheel that represents a self-sustaining and ever-growing data collection.
Sustaining?
This is just like learning on the job, or do you have something else in mind?
Learning on the job or acquiring data in a way that the process of acquisition of that data itself is useful and valuable.
I see.
Like just some kind of RL.
Like doing something like actually real.
Yeah.
I mean, ideally, I would like it to be RL because you can get away with the robot acting autonomously,
which is easier.
But it's not out of the question that you can have mixed autonomy.
As I mentioned before, robots can learn from all sorts of other signals.
I described how we can have a robot that learns from a person talking to it.
So there's a lot of middle ground in between fully teleoperated robots and fully autonomous robots.
Yeah.
Okay, and how does the pi model work?
Yeah, so the current model that we have basically is a vision language model that has been adapted for motor control.
So
to give you a little bit of like a fanciful brain analogy, a VLM, a vision language model, is basically an LLM that has had a little like pseudo-visual cortex grafted to it, a vision encoder.
So our models, they have a vision encoder, but they also have an action expert, an action decoder, essentially.
So it has like a little visual cortex and notionally a little motor cortex.
And the way that the model actually makes decisions is it reads in the sensory information from the robot.
It does some internal processing.
And that could involve actually outputting intermediate steps.
Like you might tell it to clean up the kitchen.
And it might think to itself, like, hey, to clean up the kitchen, I need to pick up the dish, and I need to pick up the sponge, and I need to put this and this.
And then eventually it works its way through that chain of thought generation down to the action expert, which actually produces continuous actions.
And that has to be a different module because the actions are continuous, they're high frequency, so they have a different data format than text tokens.
But structurally, it's still an end-to-end transformer.
And roughly speaking, technically, it corresponds to a kind of mixture of experts architecture.
And what is actually happening is that it's like predicting I should do X thing.
then it's like there's an image token, then some action tokens, like what it actually ends up doing, and then more image, more text description,
more action tokens.
Basically, I'm looking at what stream is going on.
That's right.
With the exception that the actions are actually not represented as discrete tokens, it actually uses a flow matching kind of diffusion because they're continuous and you need to be very precise with your actions for dexterous control.
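To make that architecture a bit more concrete, here is a minimal, illustrative sketch of a vision-language-action model with a separate action expert, written in PyTorch. Everything here is a placeholder assumption: the module sizes, the GRU stand-in for the Gemma-style language backbone, the 50-step action chunk, and the Euler sampler are for illustration only, not Physical Intelligence's actual π0 / π0.5 implementation. The sketch only tries to capture the structure described above: a vision encoder feeds a language backbone, and an action expert turns the backbone's features into continuous action chunks by integrating a learned flow.

```python
# Minimal, illustrative sketch of a vision-language-action (VLA) model with a
# separate "action expert" that produces continuous action chunks via a
# flow-matching style sampler. Sizes, modules, and the GRU backbone stand-in
# are placeholder assumptions, not the actual pi-0 / pi-0.5 implementation.
import torch
import torch.nn as nn

class ActionExpert(nn.Module):
    """Maps backbone features + a noisy action chunk + a time value to a
    velocity estimate, so actions can be sampled by integrating a flow."""
    def __init__(self, feat_dim: int = 512, action_dim: int = 14, horizon: int = 50):
        super().__init__()
        self.horizon, self.action_dim = horizon, action_dim
        self.net = nn.Sequential(
            nn.Linear(feat_dim + horizon * action_dim + 1, 1024),
            nn.GELU(),
            nn.Linear(1024, horizon * action_dim),
        )

    def forward(self, feats, noisy_actions, t):
        x = torch.cat([feats, noisy_actions.flatten(1), t], dim=-1)
        return self.net(x).view(-1, self.horizon, self.action_dim)

class ToyVLA(nn.Module):
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        # Stand-ins for the pretrained VLM backbone (e.g. a Gemma-based VLM):
        self.vision_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim))
        self.backbone = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.action_expert = ActionExpert(feat_dim)

    @torch.no_grad()
    def act(self, image, prompt_embeddings, steps: int = 10):
        # 1) "Visual cortex": encode the camera image into a feature token.
        img_feat = self.vision_encoder(image).unsqueeze(1)
        # 2) Backbone processes image + language context. In the real model this
        #    is where chain-of-thought / subtask text would be generated.
        _, h = self.backbone(torch.cat([img_feat, prompt_embeddings], dim=1))
        feats = h[-1]
        # 3) "Motor cortex": start from noise and integrate the learned velocity
        #    field to get a chunk of continuous, high-frequency actions.
        actions = torch.randn(image.shape[0], self.action_expert.horizon,
                              self.action_expert.action_dim)
        for i in range(steps):
            t = torch.full((image.shape[0], 1), i / steps)
            actions = actions + self.action_expert(feats, actions, t) / steps  # Euler step
        return actions  # e.g. joint targets for the next chunk of timesteps

model = ToyVLA()
actions = model.act(torch.randn(1, 3, 64, 64), torch.randn(1, 8, 512))
print(actions.shape)  # torch.Size([1, 50, 14])
```

In the real models, as Sergey notes, the action expert is part of one end-to-end transformer, roughly a mixture-of-experts-style split of the weights, rather than a separate network bolted on the side.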
I find it super interesting that you're, I think, using the open source Gemma model, which is Google's LLM that they release open source, and then adding this action expert on top.
And I find it super interesting that the progress in different areas of AI is
just based on
not only the same techniques, but literally the same model: you can just use an open source LLM and then add this action expert on top.
It is just notable that, like,
you naively might think that, oh, there's like separate area of research, which is robotics, and there's a separate area of research called LLMs and natural language processing.
And no, it's like, it's literally the same.
It's like the considerations are the same,
the architectures are the same, even the weights are the same.
I know you do more training on top of these open source models, but that I find super interesting.
Yeah, so one theme here that
I think is important to keep in mind is that the reason that those building blocks are so valuable is because
the AI community has gotten a lot better at leveraging prior knowledge.
And a lot of what we're getting from the pre-trained LLMs and VLMs is prior knowledge about the world.
And it's kind of like, it's a little bit abstracted knowledge.
It's like, you know, you can identify objects, you can figure out roughly where things are in image, that sort of thing.
But I think if I had to summarize in one sentence the big benefit that recent innovations in AI give to robotics, it's really the ability to leverage prior knowledge.
And I think the fact that the model is the same model, that's kind of always been the case in deep learning, but it's that ability to pull in that prior knowledge, that abstract knowledge
that can come from many different sources.
That's really powerful.
Today I'm here with Mark, who is a senior researcher at Hudson River Trading.
He has prepared for us a big data set of market prices and historical market data.
And we're going to try to figure out what's going on and whether we can predict future prices from historical market data. Mark, let's dig in.
Happy to do it. So it sounds like the first fun thing to do is probably to start looking at what an order book actually looks like.
Yeah, I think so. So I've given you real order book data, that is, snapshots of the top five levels of the order book, both on the bid and ask side, for a couple of different tech stocks: NVIDIA, Tesla, AMD, etc.
What is the shape of the prediction? What are we predicting?
Why don't you take the data frame, look at its y values, and just kind of histogram it?
They are centered at zero.
They're roughly centered at zero.
But target of what exactly?
So these things are changes in the mid price from like now to some short period of time in the future.
This is actually quite interesting.
It's like a mystery to solve.
And each one of these can be like a sizable chunk of time for a researcher.
If this sounds interesting to you, you should consider working at Hudson River Trading.
Mark, where can people learn more?
They can learn more at hudson-trading.com/dwarkesh.
Amazing.
I was talking to
this researcher,
Sander, at GDM, and he works on video and audio models.
And he made the interesting point that the reason, in his view, we aren't seeing that much transfer learning between different modalities, that is to say, like training a language model on video and images, doesn't seem to necessarily make it that much better at textual questions and tasks, is that images are represented at a different semantic level than text.
And so his argument is that text has this high-level semantic representation within the model, whereas images and videos are just like compressed pixels.
There's not really a semantic
when they're embedded,
they don't represent some like high-level semantic information.
They're just like compressed pixels.
And therefore,
there's no transfer learning at the level at which they're going through the model.
And obviously, this is super relevant to the work you're doing, because your hope is that by training the model on the visual data that the robot sees, visual data generally, maybe even from YouTube eventually, plus language information, plus action information from the robot itself, all of this together will make it generally robust.
And then you had a really interesting blog post about like why video models aren't as robust as language models.
Sorry, this is not a super well-formed question.
I just wanted to do a reaction.
What's up with that?
Yeah, yeah.
Yeah, so I have maybe two things I can say there.
I have some like bad news and some good news.
So the bad news is what you're saying is really getting at
the core of a long-running challenge with
video and image generation models.
In some ways, the idea of getting intelligent systems by predicting video is even older than the idea of getting intelligent systems by predicting text.
But the text stuff turned into practically useful things
earlier than the video stuff did.
I mean, the video stuff is great.
Like you can generate cool videos, and I think that the work there that's been done recently is amazing,
but it's not like just generating videos and images has already resulted in systems that have this kind of like deep understanding of the world where you can ask them to do stuff beyond just generating more images and videos.
Whereas with language, clearly it has.
I think that this point about representations is really key to it.
One way we can think about it is this, that
if you
imagine pointing a camera outside this building: there's the sky, the clouds are moving around, the water, cars driving around, people.
If you want to predict everything that will happen in the future, you can do so in many different ways.
You can say, okay, there's people around, so let me get really good at understanding the psychology of how people behave in crowds and predict the pedestrians.
But you could also say, well, there's clouds moving around.
Let me understand everything about water molecules and ice particles in the air.
And you could go super deep on that.
If you want to fully understand
down to the subatomic level, everything that's going on, like as a person, you could spend like decades just thinking about that and you'll never even get to the pedestrians or the water, right?
So if you want to really predict everything that's going on in that scene, there's just so much stuff that even if you're doing a really great job and capturing like 100% of something, by the time you get to everything else, like, you know.
Ages will have passed.
Whereas with text, it's already been abstracted to those bits that we as humans care about.
So the representations are already there, and they're not just good representations.
They actually focus in on what really matters.
Okay, so that's the bad news.
Here's the good news.
The good news is that we don't have to just get everything out of like point a camera outside this building.
Because when you have a robot, that robot is actually trying to do a job.
So it has
a purpose.
Yeah.
And its perception is in service to fulfilling that purpose.
And that is like a really great focusing factor.
We know that for people, this really matters.
Like literally what you see is affected by what you're trying to do.
There's been no shortage of psychology experiments showing that people have almost a shocking degree of tunnel vision where they will literally not see things right in front of their eyes if it's not relevant to what they're trying to achieve.
And that is tremendously powerful.
Like there must be a reason why people do that because certainly if you're out in the jungle, seeing more is better than seeing less.
So if you have that powerful focusing mechanism, it must be darn important for getting you to achieve your goal.
And I think robots will have that focusing mechanism because they're trying to achieve a goal.
By the way, the fact that video models aren't as robust, is that
bearish for robotics?
Because it will,
so much of the data you will have to use will not be... I guess you're saying a lot of it will be labeled, but ideally you just want to be able to throw everything on YouTube, every video we've ever recorded, at it and have it learn how the physical world works and how to move about, et cetera.
Just see humans performing tasks and learn from that.
But if, yeah, I guess you're saying like it's hard to learn just from that and it actually like needs to practice a task itself.
Well, let me put it this way.
Like let's say that I
gave you lots of videotapes or lots of recordings of different sporting events and gave you a year to just watch sports.
And then after that year, I told you, okay, now your job, you're going to be playing tennis.
Yeah.
Okay, that's like, that's pretty dumb, right?
Whereas if I told you first, like, you're going to be playing tennis
and then I let you study up, right?
Like now you really know what you're looking for.
So I think that actually
Like there's a very real challenge here.
I don't want to understate the challenge, but I do think that there's also a lot of potential for foundation models that are embodied, that learn from interaction, from controlling robotic systems, to actually be better at absorbing the other data sources because they know what they're trying to do.
I don't think that that by itself is like a silver bullet.
I don't think it solves everything, but I think that it does help a lot.
And I think that
we've already seen the beginnings of that, where we can see that including web data in training for robots really does help with generalization.
And I actually have the suspicion that in the long run, it'll make it easier to use those sources of data that have been tricky to use up until now.
Famously, LLMs have all these emergent capabilities that were never engineered in, because somewhere in Internet text is the data to train it, to give it the knowledge to do a certain kind of thing.
With robots, it seems like you are collecting all the data manually.
So there won't be this mysterious new capability that is somewhere in the data set that you haven't purposefully collected, which seems like it should make it even harder to then have
robust out-of-distribution kind of capabilities.
And so I wonder if the trek over the next five, 10 years will just be like
each subtask, you have to give it thousands of episodes.
And then it's very hard to actually automate much work just by doing subtasks.
So if you look, think about what a barista does, what a waiter does, what a chef does,
very little of it involves just like sitting at one station and like doing stuff right.
Just like you got to move around, you got to restock, you got to fix the machine or et cetera,
go between like the counter and the cashier and the machine, et cetera.
So yeah,
will there just be this long tail of things that you had to keep, the skills you had to keep, like adding episodes for manually and labeling and seeing how well they did, et cetera?
Or is there some reason to think that
it will progress
more generally than that?
Yeah.
So there's a subtlety here.
Emergent capabilities don't just come from the fact that Internet data has a lot of stuff in it.
They also come from the fact that generalization, once it reaches a certain level, becomes compositional.
There is
a cute example that one of my students really liked to use in some of his presentations, which is
You know what International Phonetic Alphabet is?
No.
IPA.
So if you look in a dictionary, they'll have the pronunciation of a word written in like kind of funny letters.
That's basically International Phonetic Alphabet.
So it's an alphabet that is pretty much exclusively used for writing down pronunciations of individual words in dictionaries.
And you can ask an LLM to write you a recipe for like making some meal in International Phonetic Alphabet, and it will do it.
And that's like, holy crap, like that is definitely not something
that it has ever seen because IPA is only ever used for writing down pronunciations of individual words.
So that's compositional generalization.
It's putting together things you've seen like that in new ways.
And it's like, you know, arguably there's nothing like profoundly new here because like, yes, you've seen different words written in that way, but you've figured out that now you can compose the words in this other language the same way that you've composed words in English.
So
that's actually where the emergent capabilities come from.
And
because of this, in principle, if we have a sufficient diversity of behaviors, the model should figure out that those behaviors can be composed in new ways as the situation calls for it.
And we've actually seen things, even with our current models, which, I should say, in the grand scheme of things, looking back five years from now, we'll probably think are tiny in scale.
But we've already seen what I would call emergent capabilities.
When we were playing around with some of our laundry folding policies, actually we discovered this by accident.
The robot accidentally picked up two t-shirts out of the bin instead of one, starts folding the first one, the other one gets in the way, picks up the other one, throws it back in the bin.
And we're like,
we didn't know it would do that.
Like, holy crap.
And then we try to play around with it.
And it's like, yep, it does that every time.
Like, you can drop in, you know, it's doing its work, drop something else on the table, just pick it up, put it back.
Right?
Okay, that's cool.
Shopping bag, it starts putting things in the shopping bag, the shopping bag tips over, picks it back up, and stands it upright.
We didn't tell anybody to collect data for that.
I'm sure somebody accidentally at some point or maybe intentionally picked up the shopping bag, but it's just...
You have this kind of compositionality that emerges when you do learning at scale.
And that's really where all these remarkable capabilities come from.
And now you put that together with language, you put that together with all sorts of chain of thought reasoning, and there's a lot of potential for the model to compose things in new ways.
Right.
I had an example like this when I got a tour of the robots, by the way, at your office.
So it was folding shorts and I don't know if there was an episode like this in the
training set, but just for fun, I took one of the shorts and turned it inside out.
And then it was able to understand that it first needed to get...
So first of all, the grippers are just like this, like two limbs, or just like an opposable finger and thumb kind of thing.
And it's actually shocking how much you can do with just that.
Yeah, it understood that it first needed to turn it right side out before folding it correctly.
I mean, what's especially surprising about that is it seems like this model only has like one second of context.
So as compared to these language models, which can often see the entire code base and they're like observing hundreds of thousands of tokens and thinking about them before outputting and they're observing their own train of thought for thousands of tokens before making a plan about how to code something up.
Your model is seeing one image of what happened in the last second, and it vaguely knows it's supposed to fold these shorts. And I guess it works. It's crazy that it will just see the last thing that happened and then keep executing on the plan: turn it right side out, then fold it correctly. It's shocking that a second of context is enough to execute on a minute-long task.
Yeah, I'm curious why you made that choice in the first place and why it's possible to actually do tasks.
If a human could only think with like a second of memory and had to do physical work, I feel like that would just be impossible.
Yeah.
I mean, it's not that there's something good about having less memory, to be clear.
Like I think that adding memory, adding longer context, all that stuff, adding higher resolution images, I think those things will make the model better.
But the reason why it's not the most important thing for the kind of skills that you saw when you visited us,
at some level, I think it comes back to Moravec's paradox.
So Moravec's paradox is basically, it's like, you know, if you want to know one thing about robotics, that's the thing.
Moravec's paradox says that basically in AI, the easy things are hard and the hard things are easy, meaning the things that we take for granted, like picking up objects, seeing, you know, perceiving the world, all that stuff, those are all the hard problems in AI.
And the things that we find challenging, like playing chess and doing calculus, actually are often the easier problems.
And I think this memory stuff is actually Moravec's paradox in disguise, where we think that the cognitively demanding tasks that we do, that we find hard, that kind of cause us to think, oh man, I'm sweating, I'm working so hard, those are the ones that require us to keep lots of stuff in memory, lots of stuff in our minds.
Like if you're solving some big math problem, if you're having a complicated technical conversation on a podcast, like those are the things we have to keep all those pieces, all those puzzle pieces in your head.
If you're...
doing a well-rehearsed task, if you are an Olympic swimmer and you're swimming with perfect form and you're like right there in the zone, like people even say, it's in the moment.
It's in the moment, right?
Like it's like you've practiced it so much, you've baked it into your neural network and your brain that you don't have to think carefully about keeping all that context.
So it really is just
Moravec's paradox manifesting itself.
But that doesn't mean that we don't need the memory.
It just means that if we want to match the level of dexterity and physical proficiency that people have, there's other things we should get right first, and then gradually go up that stack into the more cognitively demanding areas, into reasoning, into context, into planning, all that kind of stuff.
And that stuff will be important too.
And how, physically, will this work? So you have this trilemma. You have three different things that all take more compute during inference and that you want to increase at the same time.
You have the inference speed.
And so humans are processing 24 frames a second or whatever it is.
We're just like, we can react to things extremely fast.
Then you have the context length.
And for, I think, the kind of robot, which is just like cleaning up your house, I think it has to kind of, it has to be aware of like things that happened minutes ago or hours ago and how that influences its plan about the next task it's doing.
And then you have the model size.
And I guess at least with LLMs, we've seen that there's gains from increasing the amount of
parameters.
And I think currently you have 100 millisecond
inference speeds, you have a second long context, and then the model is what, a couple billion parameters, how many?
Okay.
And so each of these, at least two of them, are many orders of magnitude smaller than what seems to be the human equivalent, right?
Like the model, if a human brain has like trillions of parameters, and this has like 2 billion parameters, and then if humans are processing at least as fast as the model, like actually a decent bit faster, and we have hours of context.
It depends on how you define human context, but hours of context, minutes of context.
Sometimes decades of context.
Yeah, exactly.
So you have to have many order of magnitude improvements across all of this, all of these three things, which seem to oppose each other or like increasing one reduces the amount of
reduces the amount of compute you can dedicate towards the other one in inference.
So
how are we going to, yeah, how are we going to solve this?
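As a rough illustration of why these three axes fight each other, here is a back-of-envelope sketch using the common approximation of about 2 FLOPs per parameter per token for a dense transformer forward pass, ignoring KV caching and attention-specific costs. All of the specific numbers (2B vs. 2T parameters, a second vs. hours of context, 10 Hz vs. 100 Hz control) are assumptions for illustration, not measurements of any particular model.

```python
# Back-of-envelope: forward-pass compute per second of control for a dense
# transformer policy, using the rough ~2 FLOPs per parameter per token rule and
# ignoring KV caching and attention costs. All numbers are illustrative guesses.

def flops_per_second(params: float, context_tokens: float, hz: float) -> float:
    """Approximate FLOPs/sec = 2 * params * tokens reprocessed * control rate."""
    return 2.0 * params * context_tokens * hz

# Roughly today-ish: ~2B params, ~1s of context (say 300 tokens), 10 Hz control.
current = flops_per_second(params=2e9, context_tokens=300, hz=10)
# Wishful, brain-flavored targets: ~2T params, hours of context, 100 Hz control.
scaled = flops_per_second(params=2e12, context_tokens=300_000, hz=100)

print(f"current-ish policy: {current:.1e} FLOPs/s")  # ~1.2e+13
print(f"scaled-up target:   {scaled:.1e} FLOPs/s")   # ~1.2e+20
print(f"ratio: {scaled / current:.0e}x more compute per second")  # ~1e+07x
```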
Yeah.
Well, that's a very big question.
Yeah,
let's try to unpack this a little bit.
I think there's a lot going on in there.
One thing that I would say is a really interesting technical problem, and I think that it's something where we'll see perhaps a lot of really interesting innovation over the next few years is the question of representation for context.
So
if you imagine
some of the examples you gave, like if you have a home robot that's doing something and needs to keep track, as a person, there's certainly some things where you keep track of them very
symbolically, like almost in language.
Like, you know, I have my checklist.
Like I'm going shopping and I, you know, at least for me, I can like literally visualize in my mind like my checklist, like, you know, pick up the yogurt, pick up the milk, pick up whatever.
And that, and I'm not like picturing the milk shelf with the milk sitting there.
I'm just thinking like milk, right?
But then there's other things that are much more spatial, almost visual.
You know, when I was trying to get to your studio, I was thinking like, okay,
here's what this street looks like.
Here's what that street looks like.
Here's what I expect the doorway to look like.
So representing your context in the right form that captures what you really need to achieve your goal and otherwise kind of discards all the unnecessary stuff.
I think that's like that's a really important thing.
And I think we're seeing the beginnings of that with multimodal models.
But I think that multimodality has so much more to it than just like image plus text.
And I think that that's a place where there's a lot of room for really exciting innovation.
Ooh, do you mean in terms of
how we represent?
Okay.
Yeah, how we represent both context, both what happened in the past, and also plans or reasoning, as you can call it in LM world, which is what we would like to happen in the future, or intermediate processing stages in solving a task.
I think doing that in a variety of modalities, including potentially learned modalities that are suitable for the job, is something that has, I think, enormous potential to overcome some of these challenges.
Interesting.
Another question I have
as we're discussing these
tough trade-offs in terms of
inference is comparing it to the human brain and figuring out how the human brain is able to have hours, even decades, of context while being able to act on the order of 10 milliseconds, while having 100 trillion parameters or however you want to count it.
And I wonder if the best way to understand what's happening here is that human brain hardware is just way more advanced than the hardware we have in GPUs, or that the algorithms for encoding video information are like way more efficient.
And maybe it's like some crazy mixture of experts where
the active parameters are also on the order of billions, low billions, or some mixture of the two.
Basically, if you had to think about why do we have these models that are across many dimensions, orders of magnitude, less efficient, is it hardware or algorithms
compared to the brain?
Yeah, that's a really good question.
So I definitely don't know the answer to this.
I am not by any means well versed in neuroscience, but if I had to guess and also provide an answer that leans more on things I know, it's something like this, that the brain is extremely parallel.
It kind of has to be just
because of the biophysics.
But
it's even more parallel than your GPU.
If you think about how a modern multimodal language model processes the input, if you give it some images and some text, like first it reads in the images, then it reads in the text, and then proceeds one token at a time to generate the output.
It makes a lot more sense to me for an embodied system to have parallel processes.
Now, mathematically, you can actually make close equivalences between parallel and sequential stuff.
Like transformers aren't actually fundamentally sequential.
Like you kind of make them sequential by putting in position embeddings.
Transformers are fundamentally actually very parallelizable things.
That's what makes them so great.
So I don't think that actually, mathematically, this highly parallel thing where you're doing perception and proprioception and planning all at the same time
actually necessarily needs to look that different from a transformer, although its practical implementation will be different.
And you could imagine that the system will, in parallel, think about: okay, here's my long-term memory, like here's what I've seen a decade ago, here's my short-term kind of spatial stuff, here's my semantic stuff, here's what I'm seeing now, here's what I'm planning.
And all of that can be implemented in a way that there's some very familiar kind of attentional mechanism, but in practice, all running in parallel, maybe at different rates, maybe with the more complex things running slower, the faster reactive stuff running faster.
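A minimal sketch of the parallelism point: attention is order-agnostic, so different streams (long-term memory, current perception, proprioception, a plan) can be treated as token sets that attend to each other in one parallel pass, with learned stream tags standing in for position embeddings. The stream names and sizes here are illustrative, not anyone's actual system.

```python
# Minimal sketch of the point that attention itself is order-agnostic: several "streams"
# (long-term memory, current perception, proprioception, plan) are just token sets that
# attend to each other in one parallel pass. Stream tags replace positional embeddings.
# Names and sizes are illustrative only.
import torch
import torch.nn as nn

d = 256
streams = {
    "long_term_memory": torch.randn(1, 64, d),
    "current_frame":    torch.randn(1, 196, d),
    "proprioception":   torch.randn(1, 16, d),
    "plan":             torch.randn(1, 8, d),
}
# A learned tag per stream says *what kind* of token this is, not *where* it sits.
tags = nn.ParameterDict({k: nn.Parameter(torch.randn(1, 1, d) * 0.02) for k in streams})
tokens = torch.cat([x + tags[k] for k, x in streams.items()], dim=1)  # (1, 284, d)

attn = nn.MultiheadAttention(d, num_heads=8, batch_first=True)
out, _ = attn(tokens, tokens, tokens)  # every stream reads every other stream in parallel
print(out.shape)  # torch.Size([1, 284, 256])
```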
I'm sure you've been seeing a bunch of fun images that people have been generating with Google's new image generation model, Nanobanana.
My X feed is full of wild images.
But you might not realize that this model can also help you do less flashy tasks like restoring historical pictures or even just cleaning up images.
For example, I was reading this old paperback as I was prepping to interview Sarah Paine, and it had this really great graph of World War II Allied shipping that I wanted to overlay in the lecture.
Now, in the past, this would have taken one of my editors 20 or 30 minutes to digitize and clean up manually.
But now, we just took a photo of the page, dropped it into NanoBanana, and got back a clean version.
This was a one-shot, but if NanoBanana doesn't nail it on the first attempt, you can try to just go back and forth with it until you get a result that you're super happy with.
We keep finding new use cases for this model.
And honestly, this is one of those tools that just doesn't feel real.
Check out Gemini 2.5 Flash Image Model, aka Nanobanana, on both Google AI Studio and the Gemini app.
All right, back to Sergei.
If in five years we have a system which is like as robust as a human in terms of interacting with the world, then
what has happened that makes it physically possible to run those kinds of models, to have video information streaming in real time, or hours of prior video information somehow being encoded and considered while decoding on a millisecond scale and with many more parameters?
Is it just that, like, NVIDIA has shipped much better GPUs, or that you guys have come up with much better encoders and stuff?
Or what's happened in the five years?
I think there are a lot of things to this question.
I think certainly there's a really fascinating systems problem.
I'm by no means a systems expert, but I would imagine that the right architecture in practice, especially if you want an affordable low-cost system, would be to externalize at least part of the thinking.
You could imagine maybe in the future you'll have a robot that has like, you know, if your internet connection is not very good, the robot is in kind of like a dumber reactive mode.
But if you have a good internet connection, then it can like be a little smarter.
It's pretty cool.
But I think there are also research and algorithms things that can help here,
like figuring out the right representations, concisely representing both
your past observations, but also changes in observation, right?
Like, you know, your sensory stream is extremely temporally correlated, which means that the marginal information gained from each additional observation is not the same as the entirety of that observation.
Because the image that I'm seeing now is very correlated to the image I saw before.
So in principle, if I want to represent it concisely, I can get away with a much more compressed representation than if I represent the images independently.
So there's a lot that can be done on the algorithm side to get this right, and that's really interesting algorithms work.
I think there's also like a really fascinating systems problem.
To be truthful, I haven't gotten to the systems problem because you want to implement the system once you sort of know the shape of the machine learning solution.
But I think there's a lot of cool stuff to do there.
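A quick illustration of the temporal-correlation point: if consecutive frames differ only slightly, encoding the frame-to-frame changes compresses far better than encoding each frame independently. Here zlib stands in for whatever learned encoder you would actually use; the frame sizes and change pattern are made up.

```python
# Minimal sketch of why temporal correlation matters: if consecutive frames are highly
# correlated, encoding frame-to-frame *changes* takes far fewer bits than encoding each
# frame independently. zlib stands in for any real learned encoder; numbers are illustrative.
import zlib
import numpy as np

rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(64, 64), dtype=np.uint8)]
for _ in range(99):
    # Each new frame differs from the previous one in only a small patch.
    nxt = frames[-1].copy()
    x, y = rng.integers(0, 56, size=2)
    nxt[x:x+8, y:y+8] = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
    frames.append(nxt)

independent = sum(len(zlib.compress(f.tobytes())) for f in frames)
deltas = len(zlib.compress(frames[0].tobytes())) + sum(
    len(zlib.compress((frames[i] - frames[i - 1]).tobytes())) for i in range(1, len(frames))
)
print(f"independent frames: {independent} bytes, deltas: {deltas} bytes")
```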
Yeah, maybe you guys need to hire the people who run the YouTube data centers because they know how to encode video information.
Okay, this actually raises an interesting question, which is that with LLMs, of course, theoretically you could run your own model on this laptop or whatever, but realistically, what happens is that the largest, most effective models are being run in batches of thousands, millions of users at the same time, not locally.
Will the same thing happen in robotics because of the inherent efficiencies of batching, plus the fact that we have to do this incredibly
compute-intensive inference task?
And so you don't want to be carrying around like,
you know, like $50,000 GPUs per robot or something.
You just want that to happen somewhere else.
So yeah,
this robotics world, should we just be anticipating something where you need connectivity everywhere?
You need robots that have, like, super fast connections, and you're streaming video information back and forth, right?
Or at least video information one way.
So
does that have interesting implications about
how this deployment of robots will actually be instantiated?
I don't know.
But if I were to guess, I would guess that we'll actually see both.
That we'll see low-cost systems with off-board inference and more reliable systems, for example, in settings where, like if you have an outdoor robot or something where you can't rely on connectivity, that are costlier and have onboard inference.
A few things I'll say from a technical standpoint that might contribute to understanding this.
While a real-time system obviously needs to be controlled in real time, often at high frequency, the amount of thinking you actually need to do for every time step might be surprisingly low.
And again, we see this in humans and animals.
When we
plan out movements, there is definitely a real planning process that happens in the brain.
Like if you record
from a monkey brain, you will actually find neural correlates of planning.
And there is something that happens in advance of a movement.
And when that movement actually takes place, the shape of the movement correlates with what happened before the movement.
Like that's planning, right?
So that means that you put something in place and you know, set the initial conditions of some kind of process and then unroll that process and that's the movement.
And that means that during that movement, you're doing less processing and you kind of batch it up in advance.
But
you're not like entirely an open loop.
It's not like you're playing back a tape recorder.
You are actually reacting as you go.
You're just reacting at a different level of abstraction,
a more basic level of abstraction.
And again, this comes back to representations.
Figure out which representations are sufficient for kind of planning in advance and then unrolling, which representations require a tight feedback loop.
And for that tight feedback loop, like, what are you doing feedback on?
Like, you know, if I'm driving a vehicle, maybe I'm doing feedback on the position of a lane marker so that I stay straight.
And then at a lower frequency, I sort of gauge where I am in traffic.
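A minimal sketch of "plan in advance, then close a cheap feedback loop": an expensive planner runs at a low rate, and a fast loop only does proportional feedback on a simple feature like lane offset. The rates, gains, and toy plant model are all invented for illustration.

```python
# Minimal sketch of "plan in advance, then run a tight, cheap feedback loop":
# a slow step sets the plan (stay in lane, target speed), and a fast loop only does
# proportional feedback on the lane-marker offset. All numbers are made up.
def plan(route_state):
    # Expensive, infrequent: decide lane and speed from the traffic situation.
    return {"target_lane_offset": 0.0, "target_speed": 25.0}

def fast_feedback(lane_offset, speed, current_plan, k_steer=0.8, k_speed=0.1):
    # Cheap, every tick: steer against lane-offset error, nudge speed toward target.
    steer = -k_steer * (lane_offset - current_plan["target_lane_offset"])
    accel = k_speed * (current_plan["target_speed"] - speed)
    return steer, accel

current_plan = plan({"traffic": "light"})
lane_offset, speed = 0.4, 23.0
for tick in range(100):                      # e.g. 100 Hz inner loop
    steer, accel = fast_feedback(lane_offset, speed, current_plan)
    lane_offset += 0.02 * steer              # toy plant model, purely illustrative
    speed += 0.1 * accel
    if tick % 50 == 0:                       # e.g. 2 Hz outer loop: re-check traffic
        current_plan = plan({"traffic": "light"})
print(round(lane_offset, 3), round(speed, 2))
```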
So you have a couple of lectures from a few years back where you say, like, even for robotics, RL is in many cases better than imitation learning.
But so far, the models are exclusively doing imitation learning.
So I'm curious how your thinking on this has changed, or maybe it's not changed, but then you need to do this for the RL.
Like, why can't you do RL yet?
Yeah.
So the key here is prior knowledge.
So in order to effectively learn from your own experience, it turns out that it's really, really important to already know something about what you're doing.
Otherwise, it takes far too long.
It's just like it takes
a person when they're a child a very long time to learn very basic things, to learn to write for the first time, for example.
Once you already have some knowledge, then you can learn new things very quickly.
So the purpose of training the models with supervised learning now is to build out that foundation that provides the prior knowledge, so they can figure things out much more quickly later.
And again, this is not a new idea.
This is exactly what we've seen with LLMs, right?
LLMs started off being trained purely with next token prediction, and that provided an excellent starting point, first for all sorts of synthetic data generation, and then for RL.
So I think it makes total sense that we would expect basically any foundation model effort to follow the same trajectory, where we first build out the foundation, essentially in a somewhat brute force way.
And the stronger that foundation gets, the easier it is to then make it even better with much more accessible training.
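A toy sketch of the trajectory described here, not Physical Intelligence's actual recipe: behavior cloning on demonstrations builds the prior, then RL (plain REINFORCE in this sketch) fine-tunes the same policy against a reward. The task, network, and numbers are all made up.

```python
# Minimal sketch of "pretrain with supervised learning, then fine-tune with RL".
# Toy 1-D task, made-up numbers throughout.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))  # 2 discrete actions
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stage 1: supervised learning (behavior cloning) on "demonstrations".
demo_obs = torch.randn(1024, 4)
demo_act = (demo_obs[:, 0] > 0).long()        # toy expert: action depends on first feature
for _ in range(200):
    loss = nn.functional.cross_entropy(policy(demo_obs), demo_act)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: RL fine-tuning (REINFORCE) -- the prior from stage 1 makes exploration cheap.
for _ in range(200):
    obs = torch.randn(256, 4)
    dist = torch.distributions.Categorical(logits=policy(obs))
    act = dist.sample()
    reward = (act == (obs[:, 0] > 0).long()).float()      # toy reward, same structure
    loss = -(dist.log_prob(act) * (reward - reward.mean())).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("post-finetune reward:", reward.mean().item())
```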
In 10 years, will the best model for knowledge work also be a robotics model, or have an action expert attached to it?
And the reason I ask is like,
so far we've seen advantages from using more general models for things.
And will robotics fall into this bucket of we will just have the model, which does everything, including physical work and knowledge work?
Or do you think they'll continue to stay separate?
I really hope that they will actually be the same.
And, you know, obviously I'm extremely biased.
I love robotics.
I think it's very fundamental to AI.
But I think, optimistically, that it's actually the other way around, that the robotics element of the equation will make all the other stuff better.
And there are two
reasons for this that I could tell you about.
One has to do with representations and focus.
So what I said before, with video prediction models, if you just want to predict everything that happens, it's very hard to figure out what's relevant.
If you have the focus that comes from actually trying to do a task, now that acts to structure how you see the world in a way that allows you to more fruitfully utilize the other signals.
That could be extremely powerful.
The second one is that understanding the physical world at a very deep, fundamental level, at a level that goes beyond just what we can articulate with language, can actually help you solve other problems.
And we experience this all the time.
Like when we talk about abstract concepts, we say like, this company has a lot of momentum.
Yeah.
Right?
We'll use like social metaphors to describe inanimate objects.
Like my computer hates me.
We experience the world in a particular way and our subjective experience shapes how we think about it in very profound ways.
And then we use that as a hammer to basically hit all sorts of other nails that are far too abstract to handle any other way.
I guess, but there might be other considerations that are relevant to physical robots in terms of like inference speed and model size, et cetera, which might be different than the considerations for knowledge work.
But then maybe you can, maybe that doesn't change.
Maybe it's still the same model, but then you can serve it in different ways.
And the advantages of co-training are high enough that,
yeah, whenever I'm like, I'm wondering in five years, if I'm using a model to code for me, does it also know how to do robotic stuff?
And yeah, maybe the advantages of co-training on robotics are high enough that it's worth it.
Well, and I should say that the coding is probably like the pinnacle of
abstract knowledge work, in the sense that just by the mathematical nature of computer programming, it's an extremely abstract activity, which is why people struggle with it so much.
Yeah. I'm a bit confused about why simulation doesn't work better for robots. Like, if I look at humans, smart humans do a good job of,
if they're intentionally trying to learn, noticing what about the simulation is similar to real life and paying attention to that and learning from that.
So if you have, like, pilots who are learning in simulation or F1 drivers who are learning in simulation, should it be expected that as robots get smarter, they will also be able to learn more things through simulation?
Or is it the case that we need real-world data forever?
This is a very subtle question.
Your example with the airplane pilot using simulation is really interesting.
But something to remember is that
when a pilot is using a simulator to learn to fly an airplane, they're extremely goal-directed.
So their goal in life is not to learn to use a simulator.
Their goal in life is to learn to fly the airplane.
They know there will be a test afterwards, and they know that eventually they'll be in charge of like a few hundred passengers, and they really need to not crash that thing.
And when we train
models on data from multiple different domains, the models don't know that they're supposed to solve a particular task.
They just see like, hey, here's one thing I need to master.
Here's another thing I need to master.
So maybe a better analogy there is if you're playing a video game where you can fly an airplane, and then eventually someone puts you in the cockpit of a real one.
It's not that the video game is useless, but it's not the same thing.
And if you're trying to play that video game and your goal is to really master the video game, you're not going to go about it in quite the same way.
Can you do some kind of meta-RL on this? It's almost identical, actually, to this really interesting paper you wrote in 2017, where maybe the loss function is not how well it does at a particular video game or particular simulation, but how well being trained in different video games makes it better at some other downstream task.
I did a terrible job explaining, but I understand what you mean.
Yeah, yeah.
Okay, maybe, can you do a better job explaining what I was trying to say?
So I think what you're trying to say is basically that, well, maybe if we have a really smart model that's doing meta-learning, perhaps it can figure out that its performance on a downstream problem, a real-world problem, is increased by doing something in a simulator.
And then specifically, make that the loss function, right?
That's right.
But here's the thing with this.
There's a set of these ideas that are all going to be something like...
train to make it better on the real thing by leveraging something else.
And the key linchpin for all of that is the ability to train it to be better on the real thing.
The thing is, I actually suspect in reality, we might not even need to do something quite so explicit because
meta-learning is emergent, as you pointed out before, right?
Like LLMs essentially do a kind of meta-learning via in-context learning.
I mean, we can debate as to how much that's learning or not, but the point is that large, powerful models trained on the right objective on real data get much better at leveraging all the other stuff.
And I think that's actually the key.
And coming back to your airplane pilot: the airplane pilot is trained on a real-world objective. Their objective is to be a good airplane pilot, to be successful, to have a good career, and all of that kind of propagates back into the actions they take in leveraging all these other data sources. So what I think is actually the key here to leveraging auxiliary data sources, including simulation, is to build the right foundation model that is really good, that has those emergent abilities.
And to your point, to get really good like that, it has to have the right objective.
Now, we know how to get the right objective out of real-world data.
Maybe we can get it out of other things, but that's harder right now.
And I think that, again, we can look to the examples of what happened in other fields.
Like these days, if someone trains an LLM for solving complex problems, they're using lots of synthetic data.
But the reason they're able to leverage that synthetic data effectively is because they have this starting point that is trained on lots of real data that kind of gets it.
And once it gets it, then it's more able to leverage all this other stuff.
So I think perhaps ironically, the key to leveraging other data sources, including simulation, is to get really good at using real data, understand what's up with the world, and then now you can fruitfully use all sorts of things.
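For the meta-learning idea discussed a moment ago, here is a generic MAML-style toy, not the method from the 2017 paper: the inner step trains on simulator data, and the outer loss is how well the adapted parameters do on real-world data, so simulator practice gets judged by downstream real performance. All tasks and numbers are invented.

```python
# Minimal sketch of the bi-level idea: the inner step trains on simulator data, and the
# outer (meta) loss is how well the *adapted* parameters do on real-world data.
# This is a generic MAML-style toy, not the actual method from the 2017 paper.
import torch

torch.manual_seed(0)
w = torch.zeros(3, requires_grad=True)                 # meta-parameters
meta_opt = torch.optim.SGD([w], lr=0.1)

def loss(params, x, y):
    return ((x @ params - y) ** 2).mean()

w_true = torch.tensor([1.0, -2.0, 0.5])
for _ in range(300):
    x_sim, x_real = torch.randn(32, 3), torch.randn(32, 3)
    y_sim = x_sim @ w_true + 0.5 * torch.randn(32)      # simulator: noisy version of the task
    y_real = x_real @ w_true                            # real world: the thing we care about

    inner = loss(w, x_sim, y_sim)
    (g,) = torch.autograd.grad(inner, w, create_graph=True)
    w_adapted = w - 0.05 * g                            # one inner step of "sim practice"

    meta_loss = loss(w_adapted, x_real, y_real)         # judged on real-world performance
    meta_opt.zero_grad(); meta_loss.backward(); meta_opt.step()

print("meta-trained params:", w.detach())
```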
So once we have this,
in 2035, 2030,
basically the sci-fi world,
are you optimistic about the ability of true AGIs to build simulations in which they are rehearsing skills that no human or AI has ever had a chance to practice before?
They'd need to practice being astronauts, say, because we're building the Dyson sphere, and they can just do that in simulation.
Or will the issue with simulation continue to be one, regardless of how smart the models get?
So here's what I would say: that
deep down at a very fundamental level,
the synthetic experience that you create yourself doesn't allow you to learn more about the world.
It allows you to rehearse things, it allows you to consider counterfactuals, but somehow
information about the world needs to get injected into the system.
So,
and I think the way you pose this question actually elucidates this very nicely, because
in robotics, classically, people have often thought about simulation as a way to inject human knowledge, because a person knows how to write down differential equations, they can code it up, and that gives the robot more knowledge than it had before.
But I think that increasingly what we're learning from experiences in other fields, from how the video generation stuff goes, from synthetic data for LLMs, is that actually probably the most powerful way to create synthetic experience is from a really good model, because the model probably knows more than a person does about those fine-grained details.
But then, of course, where does that model get the knowledge?
From experiencing the world.
So, in a sense, what you said, I think, is actually quite right in that a very powerful AI system can simulate a lot of stuff.
But also, at that point, it kind of almost doesn't matter because, viewed as a black box, what's going on with that system is that information comes in and capability comes out.
And whether the way to process that information is by imagining some stuff and simulating or by some model-free method
is kind of irrelevant in understanding its gains.
Do you have a sense of what the equivalent is in humans?
Like, whatever we're doing when we're daydreaming or sleeping, or
I don't know if you have some sense of what this auxiliary thing we're doing is, but if you had to make an ML analogy for it, what is it?
Well,
yeah, I mean, certainly
when you sleep, your brain does stuff that looks an awful lot like what it does when it's awake, that looks an awful lot like playing back experience or perhaps generating a new statistically similar experience.
And
so I think it's very reasonable to guess that perhaps simulation through a learned model is part of how your brain figures out counterfactuals, basically.
But something that's kind of even more fundamental than that is that
optimal decision-making at its core, regardless of how you do it, requires considering counterfactuals.
You basically have to ask yourself, if I did this instead of that, would it be better?
And you have to answer that question somehow.
And whether you answer that question by using a learned simulator, or whether you answer that question by using a value function or something like that, by using a reward model, in the end, it's kind of all the same.
Like as long as you have some mechanism for considering counterfactuals and figuring out which counterfactual is better, you've got it.
So
I like thinking about it this way because it kind of simplifies things.
It tells us that the key is not necessarily to do a really good simulation.
The key is to figure out how to answer counterfactuals.
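A small sketch of that framing: whether you answer the counterfactual with a learned-model rollout or with a value function, the decision-making loop only needs a comparison between candidate actions. Everything below is a made-up stand-in.

```python
# Minimal sketch of the point that the mechanism matters less than the question:
# "if I did A instead of B, would it be better?" Both a learned-model rollout and a
# Q-function answer the same counterfactual; everything here is a toy stand-in.
def rollout_return(model, state, action, policy, horizon=10):
    # Counterfactual via simulation: imagine the future under a learned model.
    total = 0.0
    for _ in range(horizon):
        state, reward = model(state, action)
        total += reward
        action = policy(state)
    return total

def q_value(q_fn, state, action):
    # Counterfactual via a value function: no imagination, just a learned estimate.
    return q_fn(state, action)

def pick_action(state, candidates, evaluate):
    # Either evaluator plugs in here; decision-making only needs the comparison.
    return max(candidates, key=lambda a: evaluate(state, a))

# Toy stand-ins so the sketch runs: reward for moving toward the origin.
model = lambda s, a: (s + a, -abs(s + a))
policy = lambda s: -0.1 if s > 0 else 0.1
q_fn = lambda s, a: -abs(s + a)

best_by_model = pick_action(2.0, [-0.5, 0.0, 0.5], lambda s, a: rollout_return(model, s, a, policy))
best_by_value = pick_action(2.0, [-0.5, 0.0, 0.5], lambda s, a: q_value(q_fn, s, a))
print(best_by_model, best_by_value)   # both pick the action that moves toward 0
```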
Yeah, interesting.
So, stepping back to the big picture again,
the reason I'm interested in getting a concrete understanding of when this robot economy will be deployed is because it's actually pretty relevant to understanding how fast AGI will proceed, in the sense that, well, it's obviously the data flywheel, but also if you just extrapolate out the capex for AI, suppose by 2030, you know, people have different estimates, but many people have estimates in the hundreds of gigawatts, 100, 200, 300 gigawatts.
And then you can just like crunch numbers on, like, if you have 200 gigawatts deployed or 100 gigawatts deployed by 2030,
the marginal capex per year is like trillions of dollars.
It's like two, three, four trillion dollars a year.
And that corresponds to actual data centers you ought to build, actual
chip foundries you have to build, actual solar panel factories you ought to build.
And I am very curious about whether by this time, by 2030,
if the big bottleneck we have is just like people
to like lay out the solar panels next to the data center or assemble the data center, whether the robot economy will be mature enough to
help significantly in that process.
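For what it's worth, the arithmetic in the question pencils out roughly like this; the cost-per-gigawatt figure is purely an assumption for illustration, not a sourced estimate.

```python
# Back-of-envelope for the numbers in the question. The ~$35B-per-gigawatt all-in figure
# (chips, datacenter, power) is an assumption for illustration, not a sourced estimate.
capex_per_gw_usd = 35e9            # assumed all-in cost per GW of AI datacenter capacity
for total_gw in (100, 200, 300):   # the range of 2030 estimates mentioned above
    cumulative = total_gw * capex_per_gw_usd
    # If most of that capacity is added near the end of the decade, marginal spend per
    # year lands in the trillions, matching the figure in the question.
    per_year_if_built_over_2_years = cumulative / 2
    print(f"{total_gw} GW -> ~${cumulative/1e12:.1f}T cumulative, "
          f"~${per_year_if_built_over_2_years/1e12:.1f}T/yr if built over 2 years")
```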
That's cool.
So you're basically saying like, how much concrete should I buy now to build the data center so that by 2030 I can power all the robots?
Yeah, yeah.
That is a more ambitious way of thinking about it than has occurred to me, but it's a cool question.
I mean, the good thing, of course, is that the robots can help you build that stuff.
Right.
But will they be able to by the time that's needed? Like, there's the non-robotic stuff, which will also, like, mandate a lot of CapEx.
And then there's the robot stuff, where you actually have to build robot factories, et cetera.
But everybody just thinks there will be this industrial explosion across the whole stack.
And how much will robotics be able to speed that up or make it possible?
I mean, in principle, quite a lot, right?
I think that we have a tendency sometimes to think about robots as like mechanical people.
But that's not the case, right?
Like people are people and robots are robots.
The better analogy for the robot is that it's like your car or a bulldozer.
Like it has much lower maintenance requirements.
You can put them into all sorts of weird places and they don't have to look like people at all.
You can make a robot that's 100 feet tall.
You can make a robot that's tiny.
So I think that
if you have the intelligence to power very heterogeneous robotic systems, you can probably actually do a lot better than just having like, you know, mechanical people in effect.
And it can be a big productivity boost for the real people.
And it can allow you to solve problems that are very difficult to solve now.
You can, you know, for example, I'm not an expert on data centers by any means, but you could build your data centers in a very remote location because the robots don't have to worry about whether there's like a shopping center nearby.
And then do you have a sense of how, so there's like, where will the software be?
And then there's a question of how many physical robots will we have?
So like, how many of the kinds of robots you're training at Physical Intelligence, like these tabletop arms, are there physically in the world?
How many will there be by 2030?
How many will be needed?
I mean, these are tough questions.
Like how many will be needed for the world?
Yeah, these are very tough questions.
And also, you know,
economies of scale in robotics so far have not functioned the same way that they probably would in the long term, right?
Just to give you an example, when I started working in robotics in 2014, I used a very nice research robot called a PR-2 that cost $400,000 to purchase.
When I started my research lab at UC Berkeley, I bought robot arms that were $30,000.
The kind of robots that we are using now at Physical Intelligence, each arm costs about $3,000, and we think they can be made for a small fraction of that.
So these things.
What is the cause of that learning rate?
Well, there are a few things.
So, one, of course, has to do with economies of scale.
So, custom-built, high-end research hardware, of course, is going to be much more expensive than
kind of more productionized hardware.
And then, of course, there's a technological element: as we get better at building actuated machines, they become cheaper.
But there's also a software element, which is the smarter your AI system gets,
the less you need the hardware to satisfy certain requirements.
So traditional robots in factories, they need to make motions that are highly repeatable, and therefore it requires a degree of precision and robustness that you don't need if you can use cheap visual feedback.
So AI also makes robots more affordable and lowers the requirements on the hardware.
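Taking the cost figures quoted above at face value, the implied decline rate looks roughly like this; note that a full PR-2 and a single low-cost arm aren't strictly comparable hardware, and the ten-year span is an assumption.

```python
# Implied learning rate from the numbers above, treating them as rough endpoints.
# Caveat: a $400k PR-2 and a $3,000 single arm are not strictly comparable hardware,
# so this is only an order-of-magnitude illustration.
start_cost, end_cost = 400_000, 3_000
years = 2024 - 2014                       # assumed ~10-year span ("now" is an assumption)
annual_decline = 1 - (end_cost / start_cost) ** (1 / years)
print(f"~{100 * annual_decline:.0f}% cheaper per year "
      f"(roughly {start_cost / end_cost:.0f}x over {years} years)")
```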
Interesting.
Okay, so do you think the learning rate will continue?
Do you think it will cost hundreds of dollars by the end of the decade to buy mobile arms?
That is a great question for my co-founder, Adnan Esmail, who is arguably the best person in the world to ask that question of.
But certainly the drop in costs that I've seen has surprised me year after year.
Okay.
And how many arms are there probably in the world?
Is it more than a million, less than a million?
So I don't know the answer to that question, but it's also a tricky question to answer because not all arms are made equal.
Like arguably the kind of robots that are like assembling cars in a factory are just not the right kind to think about.
So the kind you want to train on?
Very few, because they are not currently commercially deployed, unlike the factory robots.
So like less than 100,000.
I don't know, but probably.
Okay.
And we want billions of robots, like at least millions of robots.
If you're just thinking about like the industrial explosion that you need to
have this AI explosive growth,
not only do you need the arms, but then you need like something that can move around.
Basically, I'm just trying to think about like, will that be possible by the time that you need a lot more labor to power this
AI boom?
Well, you know, economies are very good at filling demand when there's a lot of demand, right?
Like how many iPhones were in the world in 2001, right?
That's right.
So I think there's definitely a challenge there.
And I think it's something that is worth thinking about.
And a particularly important question for researchers like myself is how can AI affect how we think about hardware?
Because there are some things that I think are going to be really, really important.
Like you probably want your thing to not break all the time.
There's some things that are firmly in that category of question marks, like how many fingers do we need?
You said yourself before that you were kind of surprised that a robot with two fingers can do a lot.
Okay, maybe you still want like more than that, but still, like, finding the bare minimum that still lets you have good functionality, that's important.
That's in the question mark box.
And there are some things that I think, like, we probably don't need, like, we probably don't need the robot to be like super duper precise because we know that feedback can compensate for that.
So, I think my job as I see it right now is to figure out what's sort of the minimal package we can get away with.
And I really like to think about robots in terms of minimal package, because I don't think that we will have like the one ultimate robot, like sort of the mechanical person basically.
I think what we will have is a bunch of things that good effective robots need to satisfy, just like good smartphones need to have a touch screen, like that's something that we all kind of agreed on.
And then a bunch of other stuff that's kind of optional depending on the need, depending on the cost point, et cetera.
And I think there will be a lot of innovation where once we have very capable AI systems that can be plugged into any robot to endow it with some basic level of intelligence, then lots of different people can innovate on how to get the robot hardware to be optimal for each niche it needs to be in the market.
In terms of manufacturers, is there some NVIDIA of robotics?
Not right now.
Maybe there will be someday.
I would really
like,
maybe I'm being idealistic, but I would really like to see a world where there's a lot of heterogeneity in robots.
What is the biggest bottleneck in the hardware today, as somebody who's designing the algorithms that run on it?
It's a tough question to answer, mainly because things are changing so fast.
I think that to me, the things that I spend a significant amount of time thinking about on the hardware side is really more like reliability and cost.
It's not that I'm that worried about cost, it's just that cost translates to number of robots, which translates to amount of data.
And being an ML person, I really like having lots of data, so I really like having robots that are low cost because then I can have more of them and therefore more data.
And reliability is important more or less for the same reason.
But I think it's something that we'll get more clarity on as things progress because, basically, the AI systems of today are not pushing the hardware to the limit.
So as the AI systems get better and better, the hardware will get pushed to the limit, and then we'll hopefully have a much better answer to your question.
Okay, so this is a question I've had for a lot of guests
is that
if you go through any layer of this AI explosion, you find that a bunch of the actual supply chain is being manufactured in China.
So other than chips, obviously.
But then, you know,
if you talk about data centers and you're like, oh, all the wafers for solar panels and a bunch of the cells and modules, et cetera, are manufactured in China, then
you just go through the supply chain.
And then
obviously robot arms are being manufactured in China.
And so if you live in this world where
the
hardware is just incredibly valuable to ramp up manufacturing of, because each robot can produce some fraction of the value that a human worker can produce.
And not only is that true, but the value of human workers or any kind of worker has just tremendously skyrocketed because we just need tons of bodies to lay out the tens of thousands of
acres of solar farms and data centers and
foundries and everything.
In this boom world, the big bottleneck is just like, how many robots can you physically deploy?
How many can you manufacture?
Because you guys are going to come up with the algorithms, and now we just need the hardware.
And so
this is a question I've asked many guests, which is that, like, if you look at the part of the chain that you are observing, what is the reason that China just doesn't win by default, right?
If they're producing all the robots and you come up with the algorithms that make those robots super valuable,
why don't they just win by default?
Yeah.
So this is a very complex question.
I'll start with the broader themes and then try to drill a little bit into the details.
So
one broader theme here is that if you want to have an economy where
you get ahead by having a highly educated workforce, by having people that have high productivity, meaning that for each person's hour of work, lots of stuff gets done, automation is really, really good.
Because automation is what multiplies the amount of productivity that each person has.
Again, same as like LLM coding tools.
LLM coding tools amplify the productivity of a software engineer.
Robots will amplify the productivity of basically everybody that is doing work.
Now,
that's kind of like a final state, like a desirable final state.
Now, there's a lot of complexity in how you get to that state, how you make that
an appealing journey to society, how you navigate the geopolitical dimension of that.
Like all of that stuff is actually pretty complicated and it requires making a number of really good decisions, like
good decisions about investing in a balanced robotics ecosystem, supporting both software innovation and hardware innovation.
I don't think any of those are insurmountable problems.
It just requires
a degree of kind of
long-term vision and the right kind of balance of investment.
But what makes me really optimistic about this is that final state, that I think we can all agree that in the United States we would like to have the kind of society where people are highly productive, where
we have highly educated people doing high-value work.
And because that end state seems to me very compatible with automation, with robotics, at some level there should be a lot of incentive to get to that state.
And then from there, we have to solve for all the details that will help us get there.
And that's not easy.
I think there's a lot of complicated decisions that need to be made in terms of private industry, in terms of investment, in terms of the political dimension.
But I'm very optimistic about it because it's like, it seems to me like the light at the end of the tunnel is kind of
in the right direction.
I mean, yeah, I guess there's a different question, which is that if the value is sort of bottlenecked by hardware, and so you just need to produce more hardware, what is the path by which hundreds of millions of robots or billions of robots are being manufactured in the US or with allies?
I don't know how to approach that question, but it seems like a different question than like, okay, well, what is the impact on like human wages or something?
So
again, for the specifics of how we make that happen, I think that's a very long conversation.
I'm probably not the most qualified to speak to.
But I think that in terms of the ingredients,
the ingredient here that I think is important is that robots help with
physical things, physical work.
And if producing robots is itself physical work, then getting really good at robotics should help with that.
It's a little circular, of course.
And as with all circular things, you have to kind of bootstrap it and try to get that engine going.
But
it seems like it is an easier problem to address than, for example, the problem of digital devices, where work goes into creating computers, phones, et cetera, but the computers and phones don't themselves help with the work.
Right.
I guess feedback loops go both ways.
They can help you or they can help others.
And it's a positive-sum world, so it's not necessarily bad that they help others.
But
to the extent that a lot of the things which would go into this feedback loop, the subcomponent manufacturing and supply chain, already exist in China, it seems like the stronger feedback loop would exist in China.
And then there's a separate discussion, like maybe that's fine,
maybe that's good, and maybe they'll continue exporting this to us.
But I just find it notable that whenever I talk to guests about different things, it's just like, oh yeah, you know, within a few years, the key bottleneck to every single part of the supply chain here will be something that China is, like, the 80% world supplier of.
Well, yeah, and this is why I said before that I think something really important to get right here is a balanced robotics ecosystem.
I think AI is tremendously exciting, but I think we should also recognize that getting AI right is not the only thing that we need to do.
And we need to think about how to balance our priorities, our investment, the kind of things that we spend our time on.
Just as an example, at Physical Intelligence, we do take hardware very seriously, actually.
We build a lot of our own things,
and we want to have a hardware roadmap alongside our AI roadmap.
But I think that
that's just us.
I think that for the United States, for
arguably for human civilization as a whole, I think we need to think about these problems very holistically.
And I think it is easy to get distracted sometimes when there's a lot of excitement, a lot of progress in one area, like AI,
and we are tempted to lose track of other things, including things you've said, like, hey, like, you know, there's a hardware component,
there's an infrastructure component with compute and things like that.
So I think that in general, it's good to have a more holistic view of these things, and I wish we had more holistic conversations about that sometimes.
I do think, from the perspective of society as a whole, how should they be thinking about the advances in robotics and knowledge work?
And I think it's basically like: society should be planning for full automation.
Like there will be a period in which people's work is way more valuable because there's this huge boom in the economy.
We're like building all these data centers or building all these factories.
But then eventually, humans can do things with their body and we can do things with our mind.
There's not like some secret third thing.
So, what should society be planning for?
It should be full automation of humans.
And society will also be much wealthier.
So, presumably, there's ways to do this in a way that everybody is much better off than they are today.
But then, like, the end state, the light at the end of the tunnel, is the full automation, but plus super wealthy society with some redistribution or whatever way to figure that out, right?
I don't know if you disagree with that characterization.
So I think at some level, that's a very reasonable way to look at things.
But I think that if there's one thing that I've learned about technology, it's that
it rarely evolves quite the way that people expect.
And sometimes the journey is just as important as the destination.
So I think it's actually very difficult to plan ahead for an end state.
But I think directionally, what you said makes a lot of sense.
And I do think that it's very important for us collectively to think about how to structure the world around us in a way that is amenable to greater and greater automation across all sectors.
But I think we should really think about the journey just as much as the destination, because things evolve in all sorts of unpredictable ways.
And we'll find
automation showing up in all sorts of places, probably not the places we expect first.
So, you know, I think the constant here that's really important is that education is really, really valuable.
Like education is
the best buffer somebody has against the negative effects of change.
So if there's like one single lever that we can pull collectively as a society, it's like more education because that's true.
I mean, the Moravec paradox is, like, the things which benefit most from education for humans might be the easiest to automate, because it's really easy to educate AIs.
You know, you can throw the textbooks that would take you eight years of grad school to do at them in an afternoon.
Well, what education gives you is flexibility.
So it's less about the
particular facts you know as it is about your ability to acquire skills, acquire understanding.
So
it has to be good education.
Right.
Okay, Sergei, thank you so much for coming on the podcast.
Thank you.
Super fascinating.
Yeah,
this was intense.
Those tough questions.
I hope you enjoyed this episode.
If you did, the most helpful thing you can do is just share it with other people who you think might enjoy it.
Send it to your friends, your group chats, Twitter, wherever else.
Just let the word go forth.
Other than that, super helpful if you can subscribe on YouTube and leave a five-star review on Apple Podcasts and Spotify.
Check out the sponsors in the description below.
If you want to sponsor a future episode, go to dwarkesh.com/advertise.
Thank you for tuning in.
I'll see you on the next one.